
Linköping studies in science and technology. Dissertations.

No. 1409

Sensor Fusion for Automotive Applications

Christian Lundquist

Department of Electrical Engineering


Linköping University, SE–581 83 Linköping, Sweden

Linköping 2011
Cover illustration: The intensity map describes the density of stationary targets
along the road edges. A photo of the driver’s view in the green car is shown
in Figure 5a on Page 216.

Linköping studies in science and technology. Dissertations.


No. 1409

Sensor Fusion for Automotive Applications

Christian Lundquist

[email protected]
www.control.isy.liu.se
Division of Automatic Control
Department of Electrical Engineering
Linköping University
SE–581 83 Linköping
Sweden

ISBN 978-91-7393-023-9 ISSN 0345-7524

Copyright © 2011 Christian Lundquist

Printed by LiU-Tryck, Linköping, Sweden 2011


To Nadia
Abstract
Mapping stationary objects and tracking moving targets are essential for many
autonomous functions in vehicles. In order to compute the map and track esti-
mates, sensor measurements from radar, laser and camera are used together with
the standard proprioceptive sensors present in a car. By fusing information from
different types of sensors, the accuracy and robustness of the estimates can be
increased.
Different types of maps are discussed and compared in the thesis. In particular,
road maps make use of the fact that roads are highly structured, which allows rel-
atively simple and powerful models to be employed. It is shown how the informa-
tion of the lane markings, obtained by a front looking camera, can be fused with
inertial measurement of the vehicle motion and radar measurements of vehicles
ahead to compute a more accurate and robust road geometry estimate. Further,
it is shown how radar measurements of stationary targets can be used to estimate
the road edges, modeled as polynomials and tracked as extended targets.
Recent advances in the field of multiple target tracking lead to the use of finite set
statistics (fisst) in a set theoretic approach, where the targets and the measure-
ments are treated as random finite sets (rfs). The first order moment of an rfs
is called probability hypothesis density (phd), and it is propagated in time with
a phd filter. In this thesis, the phd filter is applied to radar data for construct-
ing a parsimonious representation of the map of the stationary objects around
the vehicle. Two original contributions, which exploit the inherent structure in
the map, are proposed. A data clustering algorithm is suggested to structure the
description of the prior and to considerably improve the update in the phd filter.
Improvements in the merging step further simplify the map representation.
When it comes to tracking moving targets, the focus of this thesis is on extended
targets, i.e., targets which potentially may give rise to more than one measure-
ment per time step. An implementation of the phd filter, which was proposed
to handle data obtained from extended targets, is presented. An approximation
is proposed in order to limit the number of hypotheses. Further, a framework to
track the size and shape of a target is introduced. The method is based on mea-
surement generating points on the surface of the target, which are modeled by an
rfs.
Finally, an efficient and novel Bayesian method is proposed for approximating
the tire radii of a vehicle based on particle filters and the marginalization concept.
This is done under the assumption that a change in the tire radius is caused by
a change in tire pressure, thus obtaining an indirect tire pressure monitoring
system.
The approaches presented in this thesis have all been evaluated on real data from
both freeways and rural roads in Sweden.

Popular Science Summary (Populärvetenskaplig sammanfattning)
Today’s cars become safer every year. In the past it was passive safety, with
seat belts, airbags and crumple zones, that improved. Now the fastest changes
are taking place within active safety, with driver assistance systems such as
electronic stability control and emergency braking, and driver warning systems
for dangerous lane changes and obstacles ahead.
Driver assistance systems require perception of the surroundings, and the sensors
in use for this today are the camera and the radar. The camera sees, for example,
lane markings and can be used to detect vehicles and pedestrians. The radar is
very good at detecting moving objects and at judging distances, and is used today
to follow the car ahead. The radar delivers a wealth of information that is
currently unexploited, and one aim of this thesis is to investigate how the radar
information can be used to its full extent in future safety systems.
One contribution of the thesis is to use the radar’s measurements of stationary
objects on and beside the road to build up a local map. The map shows areas that
are drivable and obstacles that must be avoided. Another contribution is to sort
all the detections of moving objects that the radar registers and provide an
accurate map of how many other vehicles there are, together with their positions,
velocities and sizes. Together, these two maps form a situational picture that
can be used in the next generation of collision avoidance systems, which can
combine braking and steering interventions to swerve in an intelligent, safe and
controlled manner in critical emergency situations.
Another type of safety system warns the driver about changes in vehicle dynamics
parameters that may degrade the driving characteristics. Examples of such
parameters are tire pressure, friction and incorrect loading. One contribution of
the thesis is a new way of estimating a drop in tire pressure by combining
sensors in the vehicle with satellite navigation.
The methods presented in the thesis have been evaluated on real data from, among
others, freeways and rural roads in Sweden.

Acknowledgments
It all started at the end of the 1980s, when my teacher asked me what I wanted to
become when I grew up. I had only one wish: to study at Chalmers. Upon hearing
my answer, my teacher laughed and told me “you will never make it, you are
too stupid”. This incentivized me, and encouraged by other teachers I managed to
finish school and be admitted to Chalmers. Several years later I am now here and
I have finished my thesis, something I would never have managed alone, without
the support of many wonderful people.
I would like to thank my supervisor professor Fredrik Gustafsson for his positive,
inspiring, encouraging and relaxed attitude and my co-supervisor Dr. Thomas B.
Schön for giving me a speedy start into academic research. Whenever I ran into
problems, Dr. Umut Orguner, who possesses an enormous amount of knowledge
about everything beyond least squares, pushed me in the right direction. Thank
you for your help and for always having time. Further, I would like to acknowl-
edge professor Svante Gunnarsson, who is a skillful head of the group, and his
predecessor professor Lennart Ljung. They have created a wonderful research
atmosphere which makes going to the office enjoyable.
Part of the work has been performed with colleagues. I would like to mention Lic.
Karl Granström, since one can’t find a better partner to collaborate with. Further,
I would like to mention Dr. Lars Hammarstrand, at Chalmers, with whom I had very
useful and interesting discussions. Finally, I would like to thank Dr. Emre Özkan,
who opened the Gaussian window and let me see other distributions.
Andreas Andersson at Nira Dynamics AB and Dr. Andreas Eidehall at Volvo Per-
sonvagnar AB have rescued me from the simulation swamp, by supporting me
with measurement data from prototype vehicles. Dr. Gustaf Hendeby and Dr.
Henrik Tidefelt helped me with LaTeX issues; without their support this thesis
would not be in this professional shape. Ulla Salaneck, Åsa Karmelind and Ninna
Stensgård have helped with many practical things during the years in the group.
An important part of the PhD studies is the time spent outside the office bunker,
i.e., in the fika room and in pubs. I would like to thank Lic. Jonas Callmer, Lic.
Karl Granström and Lic. Martin Skoglund for sharing vagnen with me. Further
I would like to thank Lic. Christian Lyzell, Dr. Ragnar Wallin, Dr. Henrik
Ohlsson, Lic. Zoran Sjanic and Sina Khoshfetrat Pakazad for being generous and good
friends.
Dr. Wolfgang Reinelt became a mentor for me; he supported and encouraged me
during the time at ZF Lenksysteme GmbH. With him I wrote my first publica-
tions. My former boss Gerd Reimann taught me about vehicle dynamics and the
importance of good experiments; I will always remember Sinuslenken.
One of the most inspiring people I have met is Dr. Peter Bunus. Together with
professor Fredrik Gustafsson, Dr. David Törnqvist, Lic. Per Skoglar and Lic. Jonas
Callmer we started SenionLab AB. I’m looking forward to continuing to work
with all of them in the near future.


I would like to acknowledge the support from the SEnsor fusion for Safety
(sefs) project within the Intelligent Vehicle Safety Systems (ivss) program and
the support from the Swedish Research Council under the frame project grant
Extended Target Tracking.
I would never have been able to fulfill my wish to study without the support from
my family, for this I am endlessly thankful. Finally, I would like to thank amore
della mia vita Nadia, who has brought so much love and joy into my life.

Linköping, October 2011


Christian Lundquist
Contents

Notation xvii

I Background
1 Introduction 3
1.1 Sensor Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Automotive Sensor Fusion . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Sensor Fusion for Safety . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Extended Target Tracking . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Components of the Sensor Fusion Framework . . . . . . . . . . . . 7
1.6 Publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.7 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.8 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.8.1 Outline of Part I . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.8.2 Outline of Part II . . . . . . . . . . . . . . . . . . . . . . . . 17

2 Models of Dynamic Systems 21


2.1 Overview of the Models Used in the Thesis . . . . . . . . . . . . . . 22
2.2 Discretizing Continuous-Time Models . . . . . . . . . . . . . . . . 26
2.3 Linear State Space Model . . . . . . . . . . . . . . . . . . . . . . . . 28
2.4 Nonlinear State Space Model with Additive Noise . . . . . . . . . . 28

3 Estimation Theory 31
3.1 Static Estimation Theory . . . . . . . . . . . . . . . . . . . . . . . . 32
3.1.1 Least Squares Estimator . . . . . . . . . . . . . . . . . . . . 33
3.1.2 Probabilistic Point Estimates . . . . . . . . . . . . . . . . . . 35
3.2 Filter Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.1 The Kalman Filter . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2.2 The Extended Kalman Filter . . . . . . . . . . . . . . . . . . 38
3.2.3 The Unscented Kalman Filter . . . . . . . . . . . . . . . . . 39
3.2.4 The Particle Filter . . . . . . . . . . . . . . . . . . . . . . . . 41


4 Target Tracking 45
4.1 Single Target Tracking . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.2 Extension to Multitarget Tracking . . . . . . . . . . . . . . . . . . . 47
4.2.1 Data Association . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2.2 Track Management . . . . . . . . . . . . . . . . . . . . . . . 50
4.3 Extended Target Tracking . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3.1 Point Features . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.2 Spatial Distribution . . . . . . . . . . . . . . . . . . . . . . . 53
4.3.3 Elliptical Shaped Target . . . . . . . . . . . . . . . . . . . . 54
4.3.4 Curved Target . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.3.5 Extended Target Tracking and PHD filter . . . . . . . . . . 56

5 PHD Filter and Its Implementation 57


5.1 Introduction to Finite Set Statistics . . . . . . . . . . . . . . . . . . 58
5.1.1 Random Finite Set . . . . . . . . . . . . . . . . . . . . . . . . 59
5.1.2 Belief-Mass and Multitarget Density Function . . . . . . . . 59
5.1.3 The Multitarget Bayes Filter . . . . . . . . . . . . . . . . . . 61
5.2 Introduction to the PHD filter . . . . . . . . . . . . . . . . . . . . . 62
5.2.1 Approximations in Single-Target Tracking . . . . . . . . . . 63
5.2.2 Approximations in Multitarget Tracking . . . . . . . . . . . 63
5.2.3 The PHD Filter . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.2.4 Generalizations of the PHD filter . . . . . . . . . . . . . . . 68
5.3 Gaussian Mixture Implementation . . . . . . . . . . . . . . . . . . 70
5.3.1 Gaussian Mixture PHD Approximation . . . . . . . . . . . 71
5.3.2 GM-PHD Filter Algorithm . . . . . . . . . . . . . . . . . . . 72
5.3.3 Merge, Prune and Extract Targets . . . . . . . . . . . . . . . 74

6 Concluding Remarks 77
6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.2 Future Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

Bibliography 81

II Publications
A Situational Awareness and Road Prediction for Trajectory Control Ap-
plications 97
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
2 Modeling the Environment with a Map . . . . . . . . . . . . . . . . 100
3 Feature Based Map . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.1 Radar and Laser . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.2 Cameras and Computer Vision . . . . . . . . . . . . . . . . 104
4 Road Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.1 Road Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.2 Mapping of the Road Lanes . . . . . . . . . . . . . . . . . . 110
4.3 Mapping of the Road Edges . . . . . . . . . . . . . . . . . . 114

5 Occupancy Grid Map . . . . . . . . . . . . . . . . . . . . . . . . . . 118


5.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.2 OGM with Radar Measurements . . . . . . . . . . . . . . . 121
5.3 Experiments and Results . . . . . . . . . . . . . . . . . . . . 121
6 Intensity Based Map . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

B Joint Ego-Motion and Road Geometry Estimation 133


1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
2 Sensor Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
3 Dynamic Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
3.1 Geometry and Notation . . . . . . . . . . . . . . . . . . . . 140
3.2 Ego Vehicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
3.3 Road Geometry . . . . . . . . . . . . . . . . . . . . . . . . . 142
3.4 Leading Vehicles . . . . . . . . . . . . . . . . . . . . . . . . 146
3.5 Summarizing the Dynamic Model . . . . . . . . . . . . . . . 147
4 Measurement Model . . . . . . . . . . . . . . . . . . . . . . . . . . 148
5 Experiments and Results . . . . . . . . . . . . . . . . . . . . . . . . 150
5.1 Parameter Estimation and Filter Tuning . . . . . . . . . . . 151
5.2 Validation Using Ego Vehicle Signals . . . . . . . . . . . . . 152
5.3 Road Curvature Estimation . . . . . . . . . . . . . . . . . . 152
6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

C Extended Target Tracking Using Polynomials With Applications to Road-Map Estimation 163
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
2 Related Literature . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
2.1 Extended Target Tracking . . . . . . . . . . . . . . . . . . . 167
2.2 Contours and Curves . . . . . . . . . . . . . . . . . . . . . . 168
2.3 Errors In Variables . . . . . . . . . . . . . . . . . . . . . . . 168
2.4 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
3 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4 State Space Representation for an Extended Object . . . . . . . . . 171
4.1 Measurement Model . . . . . . . . . . . . . . . . . . . . . . 171
4.2 Process Model . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5 Multi-Target Tracking Algorithm . . . . . . . . . . . . . . . . . . . 176
5.1 Gating and Data Association . . . . . . . . . . . . . . . . . . 177
5.2 Track Handling . . . . . . . . . . . . . . . . . . . . . . . . . 179
6 Application Example and Use of Prior Information . . . . . . . . . 180
6.1 State Space Model . . . . . . . . . . . . . . . . . . . . . . . . 181
6.2 Using Prior Information in Extended Track Generation . . 181
6.3 Experiments and Results . . . . . . . . . . . . . . . . . . . . 183
7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
A Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

D Road Intensity Based Mapping using Radar Measurements with a PHD Filter 193
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
2 Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
3 GM-PHD Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
3.1 Time Evolution . . . . . . . . . . . . . . . . . . . . . . . . . 201
3.2 Measurement Update . . . . . . . . . . . . . . . . . . . . . . 202
4 Joint Clustering and Estimation . . . . . . . . . . . . . . . . . . . . 203
4.1 K-Means Clustering . . . . . . . . . . . . . . . . . . . . . . 203
4.2 Regression Clustering . . . . . . . . . . . . . . . . . . . . . 203
4.3 Road Edge Estimation . . . . . . . . . . . . . . . . . . . . . 205
5 Merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.3 Road Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . 208
6 Road Mapping Example . . . . . . . . . . . . . . . . . . . . . . . . 212
6.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . 212
6.2 Spawn Process . . . . . . . . . . . . . . . . . . . . . . . . . . 213
6.3 GM-PHD Filter Recursion . . . . . . . . . . . . . . . . . . . 213
6.4 Experiments and Results . . . . . . . . . . . . . . . . . . . . 215
7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219

E Extended Target Tracking Using a Gaussian-Mixture PHD Filter 223


1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
2 Target Tracking Problem Formulation . . . . . . . . . . . . . . . . . 227
3 Gaussian-Mixture Implementation . . . . . . . . . . . . . . . . . . 229
4 Partitioning the Measurement Set . . . . . . . . . . . . . . . . . . . 233
4.1 Distance Partitioning . . . . . . . . . . . . . . . . . . . . . . 233
4.2 Alternative Partitioning Methods . . . . . . . . . . . . . . . 236
4.3 Sub-Partitioning . . . . . . . . . . . . . . . . . . . . . . . . . 237
5 Target Tracking Setup . . . . . . . . . . . . . . . . . . . . . . . . . . 239
6 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
6.1 Partitioning Methods . . . . . . . . . . . . . . . . . . . . . . 241
6.2 Comparison with GM-PHD . . . . . . . . . . . . . . . . . . 241
6.3 Standard Single Measurement Targets . . . . . . . . . . . . 244
6.4 Unknown Expected Number of Measurements γ . . . . . . 245
7 Experiment Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
7.1 Experiment with Close Targets . . . . . . . . . . . . . . . . 250
7.2 Experiment with Occlusion . . . . . . . . . . . . . . . . . . 250
8 Conclusions and Future Work . . . . . . . . . . . . . . . . . . . . . 252
A Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
A.1 Proof of Theorem 10 . . . . . . . . . . . . . . . . . . . . . . 254
A.2 Variable Probability of Detection for the Laser Sensor . . . 254

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256

F Estimating the Shape of Targets with a PHD Filter 259


1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
2 The RFS Extended Target Model . . . . . . . . . . . . . . . . . . . . 263
3 Filtering Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
3.1 Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
3.2 Measurement Update . . . . . . . . . . . . . . . . . . . . . . 267
4 RBPF-PHD Implementation . . . . . . . . . . . . . . . . . . . . . . 270
4.1 PHD Prediction and Update . . . . . . . . . . . . . . . . . . 270
4.2 Particle Update . . . . . . . . . . . . . . . . . . . . . . . . . 271
4.3 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
5 Simulation Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279

G Tire Radii and Vehicle Trajectory Estimation Using a Marginalized PF 281


1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
2 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
3 Parameter and State Estimation . . . . . . . . . . . . . . . . . . . . 288
3.1 State Trajectory . . . . . . . . . . . . . . . . . . . . . . . . . 288
3.2 Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . 289
3.3 Noise Marginalization . . . . . . . . . . . . . . . . . . . . . 290
4 Models for Comparison . . . . . . . . . . . . . . . . . . . . . . . . . 292
4.1 Augmented State Vector . . . . . . . . . . . . . . . . . . . . 292
4.2 Measurement Noise Estimation . . . . . . . . . . . . . . . . 294
4.3 Summarizing the Four Methods . . . . . . . . . . . . . . . . 295
4.4 Wheel Velocity . . . . . . . . . . . . . . . . . . . . . . . . . . 295
5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304

Index 307
Notation

Operators

Notation Meaning
min minimize
max maximize
arg min_x A the value x that minimizes A
arg max_x A the value x that maximizes A
E expectation
Cov covariance
Var variance
diag(a, b) diagonal matrix with elements a and b
blkdiag(A, B) block diagonal matrix with matrices A and B
Pr(A) probability of event A
AT transpose of matrix A
Std standard deviation
Tr(A) trace of matrix A
|x| absolute value of x (or cardinality if x is a set)
x̂ estimate of the stochastic variable x
, equal by definition
∼ is distributed according to
∝ proportional to
∈ belongs to
ẋ time derivative of x


Probability theory and Distributions

Notation Meaning
iW inverse Wishart distribution
D intensity or phd function
N normal or Gaussian distribution
NiW normal inverse Wishart distribution
Pois Poisson distribution
St Student-t distribution
U uniform distribution
p(x) density function of x
p(x, y) joint density function of x and y
p(x|y) conditional density function of x given y

Sets

Notation Meaning
N set of natural numbers
M set of map variables or features
R set of real numbers
S set of measurement generating points
X set of state variables
Z set of measurements
Nx number of elements in the set X
∅ empty set

Random Variables

Notation Meaning
x state
y measurement (no data association needed)
z measurement (data association needed)
u input
P state covariance
w process noise
Q process noise covariance
e measurement noise
R measurement noise covariance
m map state
p point feature
s state of a measurement generating point
θ parameter
nx dimension of the vector x
x̂k|κ estimate of x at time k given measurements y1:κ

Geometry and dynamics

Notation Meaning
c Cartesian representation
p polar representation
x Cartesian position x-coordinate (longitudinal)
y Cartesian position y-coordinate (lateral)
z Cartesian position z-coordinate (vertical)
w width
l length
a acceleration
v velocity
u, v pixel coordinates
d displacement vector
r range (between sensor and target)
ψ heading angle, yaw angle or bearing (angle around z-axis)

Miscellaneous

Notation Meaning
k discrete time variable
T sample time
lb wheel base (distance between front and rear axle)
lt wheel track (distance between left and right wheel)
c0 road curvature
c1 road curvature derivative

Abbreviations

Abbreviation Meaning
abs antilock braking system
acc adaptive cruise control
c2c car to car communication
can controller area network
eiv errors in variables
eio errors in output
ekf extended Kalman filter
ela emergency lane assist
et-gm-phd extended target Gaussian mixture probability hypothesis density
fisst finite set statistics
fmcw frequency modulated continuous-wave (a radar system)
gm-phd Gaussian mixture probability hypothesis density
gnn global nearest neighbor (data association)
gps global positioning system
imu inertial measurement unit
kf Kalman filter
lka lane keeping assistance
ls least squares
mae mean absolute error
map maximum a posteriori
mc Monte Carlo (experiments based on repeated random sampling)
mgp measurement generating point (reflection point on target)
ml maximum likelihood
mpf marginalized particle filter
nn nearest neighbor (data association)
ogm occupancy grid map
olr optical lane recognition
ospa optimal subpattern assignment (multitarget evaluation measure)
pdf probability density function
phd probability hypothesis density
pf particle filter
rfs random finite set
rmse root mean square error
slam simultaneous localization and mapping
s.t. subject to
ukf unscented Kalman filter
virt virtual sensor measurement
wls weighted least squares
Part I

Background
1 Introduction
This thesis is concerned with the problem of estimating the motion of a vehicle
and the characteristics of its surroundings. More specifically, the description of
the ego vehicle’s surroundings consists of other vehicles and stationary objects
as well as the geometry of the road. The signals from several different sensors,
including camera, radar and inertial sensor, must be combined and analyzed to
compute estimates of various quantities and to detect and classify many objects
simultaneously. Sensor fusion allows the system to obtain information that is
better than what could be obtained from any individual sensor alone.
Situation awareness is the perception of environmental features, the comprehen-
sion of their meaning and the prediction of their status in the near future. It
involves being aware of what is happening in and around the vehicle to under-
stand how the subsystems impact on each other.
Sensor fusion is introduced in Section 1.1 and its application within the automo-
tive community is briefly discussed in Section 1.2. The study presented in this
thesis was conducted within two Swedish research projects, briefly described in
Sections 1.3 and 1.4. The sensor fusion framework and its components, such as
infrastructure, estimation algorithms and various mathematical models, are all
introduced in Section 1.5. Finally, the chapter is concluded with an overview
of the author’s publications in Section 1.6, a statement of the contributions in
Section 1.7 and the outline of this thesis given in Section 1.8.

1.1 Sensor Fusion


Sensor fusion is the process of using information from several different sensors to
compute an estimate of the state of a dynamic system. The resulting estimate is


[Figure: block diagram. Sensors feed the Sensor Fusion block, which contains State Estimation together with a Process Model and a Measurement Model, and outputs a State Estimate to the Applications.]

Figure 1.1: The main components of the sensor fusion framework are shown
in the middle box. The framework receives measurements from several sen-
sors, fuses them and produces one state estimate, which can be used by sev-
eral applications.

in some sense better than it would be if the sensors were used individually. The
term better can in this case mean more accurate, more reliable, more available
and of higher safety integrity. Furthermore, the resulting estimate may in some
cases only be possible to obtain by using data from different types of sensors.
Figure 1.1 illustrates the basic concept of the sensor fusion framework. Many
systems have traditionally been stand alone systems with one or several sensors
transmitting information to only one single application. Using a sensor fusion
approach it might be possible to remove one sensor and still perform the same
tasks, or add new applications without the need to add new sensors.
Sensor fusion is required to reduce cost, system complexity and the number of
components involved and to increase accuracy and confidence of sensing.
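The sense in which a fused estimate is "better" than any single sensor can be made concrete with a minimal sketch, not taken from the thesis: two unbiased sensors observing the same scalar quantity are combined by weighting each measurement with its inverse variance, which is the static special case of the Kalman filter measurement update. The function name `fuse` and the numeric values are illustrative assumptions.

```python
# Illustrative sketch (not from the thesis): fusing two unbiased, independent
# measurements of the same scalar quantity by inverse-variance weighting.
# This is the static special case of the Kalman filter measurement update.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Return the fused estimate and its variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2   # weight each sensor by its precision
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)       # always smaller than var1 and var2
    return fused, fused_var

# Hypothetical example: a radar range reading (variance 4.0) combined with a
# camera range reading (variance 1.0).
estimate, variance = fuse(52.0, 4.0, 50.0, 1.0)
print(estimate, variance)  # 50.4 0.8 -- better than either sensor alone
```

Note that the fused variance 1/(1/var1 + 1/var2) is always below the smaller of the two sensor variances, which is precisely the accuracy gain that motivates combining sensors rather than using them individually.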

1.2 Automotive Sensor Fusion


Within the automotive industry there is currently a huge interest in active safety
systems. External sensors are increasingly important and typical examples used
in this work are radar sensors and camera systems. Today, a sensor is usually con-
nected to a single function. However, all active safety functions need information
about the state of the ego vehicle and its surroundings, such as the lane geometry
and the position of other vehicles. The use of signal processing and sensor fusion
to replace redundant and costly sensors with software has recently attracted
attention in the ieee Signal Processing Magazine (Gustafsson, 2009).
The sensors in a modern passenger car can be divided into a number of sub-
groups; there are internal sensors measuring the motion of the vehicle, external
sensors measuring the objects surrounding the vehicle and there are sensors com-
municating with other vehicles and with the infrastructure. The communication
between sensors, fusion framework, actuators and controllers is made possible by

the controller area network (can). It is a serial bus communication protocol de-
veloped by Bosch in the early 1980s and presented by Kiencke et al. (1986) at the
sae international congress in Detroit. An overview of the can bus, which has be-
come the de facto standard for automotive communication, is given in Johansson
et al. (2005).

Internal sensors are often referred to as proprioceptive sensors in the literature.


Typical examples are gyroscopes, primarily measuring the yaw rate about the ve-
hicle’s vertical axis, and accelerometers, measuring the longitudinal and lateral
acceleration of the vehicle. The velocity of the vehicle is measured using induc-
tive wheel speed sensors and the steering wheel position is measured using an
angle sensor. External sensors are referred to as exteroceptive sensors in the
literature; typical examples are radar (RAdio Detection And Ranging), lidar (LIght
Detection And Ranging) and cameras.

An example of how a radar and a camera may be mounted in a passenger car


is illustrated in Figure 1.2. These two sensors complement each other very well,
since the advantage of the radar is the disadvantage of the camera and vice versa.
A summary of the two sensors’ properties is presented in Table 1.1 and in e.g.,
Jansson (2005).

As already mentioned, the topic of this thesis is how to estimate the state vari-
ables describing the ego vehicle’s motion and the characteristics of its surround-
ings. The ego vehicle is one subsystem, labeled (E) in this work. The use of
data from the vehicle’s actuators, e.g., the transmission and the steering wheel,
to estimate a change in position over time is referred to as odometry. The ego
vehicle’s surroundings consists of other vehicles, referred to as targets (T), and
stationary objects as well as the shape and the geometry of the road (R). Map-
ping is the problem of integrating the information obtained by the sensors into a
given representation, denoted (M), see Adams et al. (2007) for a recent overview
and Thrun (2002) for an older survey. Simultaneous localization and mapping
(slam) is an approach used by autonomous vehicles to build a map while at the

Table 1.1: Properties of radar and camera for object detection

                    Camera                         Radar
Detects             other vehicles, lane           other vehicles,
                    markings, pedestrians          stationary objects
Classifies objects  yes                            no
Azimuth angle       high accuracy                  medium accuracy
Range               low accuracy                   very high accuracy
Range rate          not measured                   very high accuracy
Field of View       wide                           narrow
Weather Conditions  sensitive to bad visibility    less sensitive


Figure 1.2: Figure (a) shows the camera and Figure (b) the front looking
radar in a Volvo S60 production car. Courtesy of Volvo Car Corporation.

same time keeping track of their current locations, see e.g., Durrant-Whyte and
Bailey (2006); Bailey and Durrant-Whyte (2006). This approach is not treated in
this thesis.
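The odometry mentioned above can be illustrated with a minimal dead-reckoning sketch. The unicycle model, the function name and all numerical values below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def dead_reckon(pose, v, yaw_rate, Ts):
    """Propagate a planar pose (x, y, heading) one sampling interval from
    measured speed and yaw rate, using a simple unicycle model."""
    x, y, psi = pose
    return np.array([x + Ts * v * np.cos(psi),
                     y + Ts * v * np.sin(psi),
                     psi + Ts * yaw_rate])

pose = np.zeros(3)
for _ in range(100):  # one second of straight driving at 20 m/s, Ts = 10 ms
    pose = dead_reckon(pose, v=20.0, yaw_rate=0.0, Ts=0.01)
# pose ≈ [20, 0, 0]: the vehicle has moved 20 m along its initial heading
```

Because the estimate is obtained by integrating noisy measurements, small sensor errors accumulate over time, which is why odometry is typically fused with exteroceptive measurements.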

1.3 Sensor Fusion for Safety


Parts of the work in this thesis have been performed within the research project
Sensor Fusion for Safety (sefs), which is funded by the Swedish Intelligent Vehi-
cle Safety Systems (ivss) program. The project is a collaboration between Volvo
Technology, Volvo Cars, Volvo Trucks, Mecel, Chalmers University of Technology
and Linköping University.
The overall objective of this project is to obtain sensor fusion competence for au-
tomotive safety applications in Sweden by doing research within relevant areas.
This goal is achieved by developing a sensor fusion platform, algorithms, mod-
eling tools and a simulation platform. More specifically, the aim is to develop
general methods and algorithms for a sensor fusion system utilizing information
from all available sensors in a modern passenger car. The sensor fusion will pro-
vide a refined description of the vehicle’s environment that can be used by a num-
ber of different safety functions. The integration of the data flow requires new
specifications with respect to sensor signals, hardware, processing, architectures
and reliability.
The sefs work scope is divided into a number of work packages. These include at
a top level, fusion structure, key scenarios and the development of requirement
methods. The next level consists of work packages such as pre-processing and
modeling, the implementation of a fusion platform and research done on fusion
algorithms, into which this thesis can be classified. The use-case work package
consists of implementation of software and design of prototypes and demonstra-
tors. Finally, there is an evaluation and validation work package.
During the runtime of the sefs project, i.e., from 2005 until 2009, three PhD the-
ses (Schön, 2006; Gunnarsson, 2007; Danielsson, 2010) and three licentiate the-

ses (Bengtsson, 2008; Danielsson, 2008; Lundquist, 2009) have been produced.
An overview of the main results in the project is given in Ahrholdt et al. (2009)
and the sensor fusion framework is well described in Bengtsson and Danielsson
(2008). Furthermore, it is worth mentioning some of the publications produced by
the project partners. Motion models for tracked vehicles are covered in Svensson
and Gunnarsson (2006); Gunnarsson et al. (2006); Sörstedt et al. (2011). A bet-
ter sensor model of the tracked vehicle is presented in Gunnarsson et al. (2007).
Detection of lane departures and lane changes of leading vehicles is studied in
Schön et al. (2006), with the goal to increase the accuracy of the road geometry es-
timate. Computational complexity for systems obtaining data from sensors with
different sampling rates and different noise distributions is studied in Schön et al.
(2007).
Paper D in this thesis describes a method to estimate and represent stationary
objects along the road edges. This publication is based on data collected from the
sefs prototype car and the work was carried out within the project.

1.4 Extended Target Tracking


The final parts of the work in the thesis have been performed within the frame
project grant Extended Target Tracking (621-2010-4301), funded by the Swedish
Research Council. This research project started in 2011 and lasts until 2014. The
overall goal is to study new ways of representing targets and their posterior dis-
tribution. The work is performed for different types of imagery sensors, such as
camera, radar and laser. More specifically the purpose of the project is to study
different options to extend the state vector of the target with specific and unusual
features, such as physical size and shape, physical properties, colors etc.
So far only a few articles, which relate to the work presented in this thesis, have
been published in the project. It is worth mentioning the following publications.
An approach to track bicycles from imagery sensor data is proposed by Ardeshiri
et al. (2011). It is based on detecting ellipsoids in the images, and treating these
pairwise using a dynamic bicycle model. Wahlström et al. (2011) show theoretically
and experimentally that the position and heading of a moving metallic target can
be tracked using two magnetometers placed on the ground, for surveillance and
traffic monitoring applications.
The Papers E and F in this thesis are produced within the extended target tracking
project. In these papers new approaches are proposed to represent and estimate
the position and size of targets which, due to their size, might generate more than
one measurement per time step.

1.5 Components of the Sensor Fusion Framework


A systematic approach to handle sensor fusion problems is provided by nonlin-
ear state estimation theory. Estimation problems are handled using discrete-time

model based methods. The systems discussed in this thesis are primarily dy-
namic and they are modeled using stochastic difference equations. More specif-
ically, the systems are modeled using the discrete-time nonlinear state space
model
xk+1 = fk (xk , uk , wk , θk ), (1.1a)
yk = hk (xk , uk , ek , θk ), (1.1b)
where (1.1a) describes the evolution of the state variable x over time and (1.1b)
explains how the state variable x relates to the measurement y.¹ The state vector
at time k is denoted by xk ∈ Rnx , with elements x1 , . . . , xnx being real numbers.
Sensor observations collected at time k are denoted by yk ∈ Rny , with elements
y1 , . . . , yny being real numbers. The model fk in (1.1a) is referred to as the pro-
cess model , the motion model , the dynamic model or the system model, and it
describes how the state propagates in time. The model hk in (1.1b) is referred to
as the measurement model or the sensor model and it describes how the state
is propagated into the measurement space. The random vector wk describes the
process noise, which models the fact that the actual state dynamics is usually
unknown. The random vector ek describes the sensor noise. Furthermore, uk
denotes the deterministic input signals and θk denotes the possibly unknown
parameter vector of the model.
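The following sketch shows how a model of the form (1.1) can be simulated. The particular f and h used here (a position-velocity toy model with additive Gaussian noise) are hypothetical placeholders, not models from the thesis.

```python
import numpy as np

def simulate(f, h, x0, u_seq, Q, R, seed=0):
    """Simulate x_{k+1} = f(x_k, u_k, w_k) and y_k = h(x_k, u_k, e_k)
    with process noise w_k ~ N(0, Q) and sensor noise e_k ~ N(0, R)."""
    rng = np.random.default_rng(seed)
    xs, ys = [np.asarray(x0, dtype=float)], []
    for u in u_seq:
        x = xs[-1]
        e = rng.multivariate_normal(np.zeros(R.shape[0]), R)
        ys.append(h(x, u, e))  # measurement at time k
        w = rng.multivariate_normal(np.zeros(Q.shape[0]), Q)
        xs.append(f(x, u, w))  # state at time k + 1
    return np.array(xs), np.array(ys)

# Hypothetical toy model: position-velocity state, position measured.
T = 0.1  # sampling time [s]
f = lambda x, u, w: np.array([x[0] + T * x[1], x[1] + u]) + w
h = lambda x, u, e: np.array([x[0]]) + e

xs, ys = simulate(f, h, [0.0, 0.0], np.zeros(50),
                  Q=0.01 * np.eye(2), R=0.1 * np.eye(1))
```

The estimation algorithms of Chapter 3 invert this simulation, in the sense that they recover an estimate of the state sequence from the measurement sequence.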
The ego vehicle constitutes an important dynamic system in this thesis. The yaw
and lateral dynamics are modeled using the so called single track model. This
model will be used as an example throughout the thesis. Some of the variables
and parameters in the model are introduced in Example 1.1.
1.1 Example: Single Track Ego Vehicle Model
A so called bicycle model is obtained if the wheels at the front and the rear axle of
a passenger car are modeled as single wheels. This type of model is also referred
to as a single track model and a schematic drawing is given in Figure 1.3. Some
examples of typical variables and parameters are:
State variables x: the yaw rate ψ̇E and the body side slip angle β, i.e.,
x = [ψ̇E β]ᵀ. (1.2)

Measurements y: the yaw rate ψ̇E and the lateral acceleration ay , i.e.,
y = [ψ̇E ay]ᵀ, (1.3)
which both are measured by an inertial measurement unit (imu).
Input signals u: the steering wheel angle δs, which is measured with an angular
sensor at the steering column, and the velocity v, which is measured by
1 Note that the measurement vector is denoted y when the sensor measures one or several specific
quantities, e.g., acceleration and yaw rate from an imu sensor. However, when the number of sensor
measurements (observations) depends on the environmental circumstances the measurement vector
is denoted z, e.g., for a radar which observes an unknown number of targets. This usually leads to a
data association problem.


Figure 1.3: Illustration of the geometry for the single track model, describing
the motion of the ego vehicle. The ego vehicle velocity vector v is defined
from the center of gravity (CoG) and its angle to the longitudinal axis of the
vehicle is denoted by β, referred to as the body side slip angle. Furthermore,
the slip angles are referred to as αf and αr . The front wheel angle is denoted
by δf and the current driven radius is denoted by ρ.

wheel speed sensors, i.e.,

u = [δs v]ᵀ. (1.4)

Parameters θ: the vehicle mass m, which is weighed before the tests, the steering
ratio iₛ between the steering wheel angle and the front wheel angle, which has to
be estimated in advance, and the tire parameter Cα, which is estimated on-
line, since the parameter value changes due to different road and weather
conditions.

The nonlinear models f and h are specified in Chapter 2.
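Since the exact models are deferred to Chapter 2, the following sketch only illustrates what f and h may look like, using the standard linear single-track equations with Euler discretization. The linear tire model and all parameter values are illustrative assumptions, not the thesis's calibrated model.

```python
import numpy as np

# Hypothetical parameter values for illustration only.
m, Izz = 1500.0, 2500.0   # mass [kg], yaw inertia [kg m^2]
lf, lr = 1.3, 1.5         # CoG to front/rear axle [m]
Cf, Cr = 8.0e4, 8.0e4     # front/rear cornering stiffness [N/rad]
i_s = 16.0                # steering ratio, steering wheel to road wheel
Ts = 0.01                 # sampling time [s]

def f(x, u):
    """Euler-discretized linear single-track model.
    x = [yaw rate, body side slip], u = [steering wheel angle, velocity]."""
    psi_dot, beta = x
    delta_f, v = u[0] / i_s, u[1]
    beta_dot = (-(Cf + Cr) / (m * v) * beta
                + ((Cr * lr - Cf * lf) / (m * v**2) - 1.0) * psi_dot
                + Cf / (m * v) * delta_f)
    psi_ddot = ((Cr * lr - Cf * lf) / Izz * beta
                - (Cf * lf**2 + Cr * lr**2) / (Izz * v) * psi_dot
                + Cf * lf / Izz * delta_f)
    return np.array([psi_dot + Ts * psi_ddot, beta + Ts * beta_dot])

def h(x, u):
    """Measurement model: yaw rate and lateral acceleration
    a_y = v (beta_dot + psi_dot)."""
    psi_dot, beta = x
    v = u[1]
    beta_dot = (f(x, u)[1] - beta) / Ts  # recover beta_dot from the Euler step
    return np.array([psi_dot, v * (beta_dot + psi_dot)])
```

Driving straight (zero steering angle and zero state) yields zero yaw rate and lateral acceleration, while a positive steering input produces a positive yaw acceleration, as expected for this model class.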

The model (1.1) must describe the essential properties of the system, but it must
also be simple enough to be efficiently used within a state estimation algorithm.
Chapter 3 describes algorithms that are used to compute estimates of the state xk
and the parameter θk in (1.1).

Before describing the individual steps of the sensor fusion framework another
important example is presented in Example 1.2.

1.2 Example: Object Tracking


Other objects, such as vehicles or stationary objects on and along the road, are
tracked using measurements from a radar mounted in the ego vehicle. A simple
model for one such tracked object is given by using the following variables:

State variables x: Cartesian position of tracked targets i = 1, . . . , Nx in a world
fixed coordinate frame W, i.e., x(i) = [xW yW]ᵀ.
Measurements z: range and azimuth angle to objects m = 1, . . . , Nz measured by
the radar in the ego vehicle fixed coordinate frame E, i.e., z(m) = [rᴱ ψ]ᵀ.
At every time step k, Nz observations are obtained by the radar. Hence, the
radar delivers Nz range and azimuth measurements in a multi-sensor set Z =
{z(1), . . . , z(Nz)} to the sensor fusion framework. The sensor fusion framework
currently also tracks Nx targets. The multi-target state is given by the set X =
{x(1), . . . , x(Nx)}, where x(1), . . . , x(Nx) are the individual states.
Obviously, the total number of state variables in the present example is 2Nx and
the total number of measurements is 2Nz . This issue may be compared to Exam-
ple 1.1, where the size of the y-vector corresponds to the total number of mea-
surements at time k. Typically, the radar also observes false detections, referred
to as clutter, or receives several measurements from the same target, i.e., Nz is
seldom equal to Nx for radar sensors.
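The sensor model for this example maps a Cartesian target position into a range and azimuth observation. A minimal planar sketch, with hypothetical function and argument names, could look as follows.

```python
import numpy as np

def radar_measurement(target_W, ego_W, heading):
    """Map a target position in the world frame W into a range/azimuth
    observation z = [r, psi] in the ego vehicle frame E."""
    dx, dy = target_W - ego_W
    c, s = np.cos(heading), np.sin(heading)
    xe = c * dx + s * dy    # relative position rotated into the ego frame
    ye = -s * dx + c * dy
    return np.array([np.hypot(xe, ye), np.arctan2(ye, xe)])

# A target 10 m straight ahead of an ego vehicle at the origin:
z = radar_measurement(np.array([10.0, 0.0]), np.zeros(2), 0.0)
# z ≈ [10, 0]: range 10 m, azimuth 0 rad
```

Inverting this mapping, or linearizing it inside a filter, is what turns a set of range/azimuth detections into Cartesian track estimates.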

The different steps of a typical sensor fusion algorithm, as the central part of
the larger framework, are shown in Figure 1.4. The algorithm is initiated using
a prior guess of the state x0 or, if it is not the first iteration, the state estimate
x̂k−1|k−1 from the previous time step k − 1 is used. New measurements Zk are col-
lected from the sensors and preprocessed at time k. Model (1.1) is used to predict
the state estimate x̂k|k−1 and the measurement ẑk|k−1. For Example 1.2 it is necessary
to associate the radar observations Zk with the predicted measurements
Ẑk|k−1 of the existing state estimates and to manage the tracks, i.e., initiate new
states and remove old, invalid states. The data association and track management
are further discussed in Section 4.2. Finally, the new measurement zk is used to
calculate the state estimate x̂k|k at time k in the so called measurement update
step. The prediction and measurement update are described in Section 3.2. This
algorithm is iterated, x̂k|k is used to predict x̂k+1|k , new measurements Zk+1 are
collected at time k + 1 and so on. The state estimation theory, as part of the sensor
fusion framework, is discussed further in Chapter 3. Note that in Example 1.1,
the data association and track management are obviously not needed, since there
the data association is assumed fixed. In the example the measurements y from
the pre-processing are fed directly to the measurement update.
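The cycle just described can be sketched as a loop. The scalar tracks, nearest-neighbor gating and static Kalman update below are deliberately simplistic stand-ins for the data association, track management and measurement update methods discussed in Chapters 3 and 4; all thresholds are illustrative.

```python
def fusion_step(tracks, Z, gate=2.0, P_new=10.0, R=1.0):
    """One cycle of the predict/associate/manage/update loop for scalar
    position tracks. tracks is a list of (mean, variance) pairs and Z a
    list of scalar observations."""
    # Prediction: stationary process model, inflate variance by process noise.
    tracks = [(m, P + 0.1) for m, P in tracks]
    unassigned = []
    for z in Z:
        if tracks:
            # Data association: nearest predicted track, with gating.
            i = min(range(len(tracks)), key=lambda j: abs(tracks[j][0] - z))
            m, P = tracks[i]
            if abs(m - z) < gate:
                # Measurement update: scalar Kalman filter equations.
                K = P / (P + R)
                tracks[i] = (m + K * (z - m), (1.0 - K) * P)
                continue
        unassigned.append(z)
    # Track management: initiate a new track per unassigned observation.
    tracks = tracks + [(z, P_new) for z in unassigned]
    return tracks

tracks = fusion_step([], [5.0])      # first scan initiates one track
tracks = fusion_step(tracks, [5.4])  # second scan associates and refines it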

1.6 Publications
Published work of the author that is of relevance to this thesis is listed below.
The publications are clustered in groups, each preceded by a short summary. The
publications are principally listed in chronological order.
The author has been involved in the development of an active steering system
prior to starting his PhD-studies. The active steering system superimposes an

(Block diagram: measurements Zk from the sensor are pre-processed and then
passed through the data association, track management and measurement update
blocks; the posterior p(xk |z1:k ) is fed back via a prediction step, yielding
p(xk |z1:k−1 ) for the next time step.)

Figure 1.4: The new measurements Zk contain new information and are
associated to the predicted states X̂k|k−1 and thereafter used to update them to
obtain the improved state estimates X̂k|k.

electronically controlled angle to the driver’s steering input. The aim of the
system is to reduce the driver’s steering effort at low velocities, to reduce the
vehicle’s sensitivity at very high velocities and to intervene when the vehicle tends to
become unstable. The system is thoroughly described in
W. Reinelt and C. Lundquist. Mechatronische Lenksysteme: Mod-
ellbildung und Funktionalität des Active Front Steering. In R. Is-
ermann, editor, Fahrdynamik Regelung - Modellbildung, Fahrassis-
tenzsysteme, Mechatronik, pages 213–236. Vieweg Verlag, September
2006a.
W. Reinelt, W. Klier, G. Reimann, C. Lundquist, W. Schuster, and R.
Großheim. Active front steering for passenger cars: System modelling
and functions. In Proceedings of the IFAC Symposium on Advances
in Automotive Control, Salerno, Italy, April 2004.
Sensor fusion based monitoring systems are described in
W. Reinelt, C. Lundquist, and H. Johansson. On-line sensor monitor-
ing in an active front steering system using extended Kalman filtering.
In Proceedings of the SAE World Congress, SAE paper 2005-01-1271,
Detroit, MI, USA, April 2005.
W. Reinelt and C. Lundquist. Observer based sensor monitoring in
an active front steering system using explicit sensor failure modeling.
In Proceedings of the IFAC World Congress, Prague, Czech Republic,
July 2005.
S. Malinen, C. Lundquist, and W. Reinelt. Fault detection of a steering
wheel sensor signal in an active front steering system. In Preprints
of the IFAC Symposium on SAFEPROCESS, pages 547–552, Beijing,
China, August 2006.
C. Lundquist and W. Reinelt. Electric motor rotor position monitor-
ing method for electrically aided steering system e.g. steer by wire,

for motor vehicle, involves outputting alarm when difference between


measurement value and estimated value of motor exceeds threshold.
German Patent Application DE 102005016514 October 12, 2006, Pri-
ority date April 8, 2006.

W. Reinelt, C. Lundquist, and S. Malinen. Automatic generation of a


computer program for monitoring a main program to provide opera-
tional safety. German Patent Application DE 102005049657 April 19,
2007, Priority date October 18, 2005.

W. Reinelt, W. Schuster, R. Großheim, and C. Lundquist. Operat-


ing method for electronic servo steering system of vehicle, involves
presetting steering wheel angle by steering mechanism as measure
for desired wheel turning angle for steering wheel of vehicle. Ger-
man Patent Application DE 102006052092 May 8, 2008, Priority date
November 4, 2006.

W. Reinelt, W. Schuster, R. Großheim, and C. Lundquist. Electronic


servo steering system operating method for motor vehicle, involves
recognizing track misalignment of vehicle when forces differentiate
around preset value from each other at preset period of time in magni-
tude and/or direction. German Patent DE 102006043069 December 3,
2009, Priority date September 14, 2006.

W. Reinelt, W. Schuster, R. Großheim, and C. Lundquist. Motor vehi-


cle’s electronically regulated servo steering system operating method,
involves comparing actual value of measured value with stored prac-
tical value of corresponding measured value. German Patent DE 10
2006 040 443 January 27, 2011, Priority date August 29, 2006.
Sensor fusion based utility functions to improve driving comfort, safety or agility
are described in

W. Reinelt, W. Schuster, R. Großheim, and C. Lundquist. Operating


method for electronic power-assisted steering system of vehicle, in-
volves overlapping additional angle, which is disabled after re-start
of utility function. German Patent Application DE 102006041236
March 6, 2008, Priority date September 2, 2006a.

W. Reinelt, W. Schuster, R. Großheim, and C. Lundquist. Operating


method for electronic power-assisted steering system of vehicle, in-
volves re-starting utility function, and after re-start of utility func-
tion superimposition of additional angle is unlatched. German Patent
DE 102006041237 December 3, 2009, Priority date September 2, 2006b.

G. Reimann and C. Lundquist. Method for operating electronically


controlled servo steering system of motor vehicle, involves determin-
ing steering wheel angle as measure for desired steering handle an-
gle by steering handle for steering wheels of motor vehicle. German

Patent Application DE 102006053029 May 15, 2008, Priority date


November 10, 2006.
C. Lundquist and R. Großheim. Method and device for determining
steering angle information. German Patent Application DE 10 2007
000 958 May 14, 2009, International Patent Application WO 2009
047 020 April 16, 2009 and European Patent Application EP 2205478
April 16, 2009, Priority date October 2, 2007.
A reverse driving assistant for passenger cars with trailers based on sensor data
of the angle between car and trailer is presented in
C. Lundquist and W. Reinelt. Rückwärtsfahrassistent für PKW mit
Aktive Front Steering. In Proceedings of the AUTOREG (Steuerung
und Regelung von Fahrzeugen und Motoren, VDI Bericht 1931, pages
45–54, Wiesloch, Germany, March 2006b.
C. Lundquist and W. Reinelt. Back driving assistant for passenger cars
with trailer. In Proceedings of the SAE World Congress, SAE paper
2006-01-0940, Detroit, MI, USA, April 2006a.
W. Reinelt and C. Lundquist. Method for assisting the driver of a mo-
tor vehicle with a trailer when reversing. German Patent DE 10 2006
002 294 February 24, 2011, European Patent Application EP 1810913
July 25, 2007 and Japanese Patent Application JP 2007191143 Au-
gust 2, 2007, Priority date January 18, 2006.
Furthermore, a control system which stabilizes a vehicle trailer combination, based
on the same sensor information as the publications listed above, is described in
C. Lundquist. Method for stabilizing a vehicle combination. U.S.
Patent US 8010253 August 30, 2011 and German Patent Application
DE 102007008342 August 21, 2008, Priority date February 20, 2007.
The functional safety concept for the active steering system, among others includ-
ing a sensor fusion based monitoring system, is described in
W. Reinelt and C. Lundquist. Controllability of active steering sys-
tem hazards: From standards to driving tests. In Juan R. Pimintel,
editor, Safety Critical Automotive Systems, pages 173–178. SAE In-
ternational, 400 Commonwealth Drive, Warrendale, PA, USA, August
2006b.
During his time as a PhD student the author has been involved in the following
publications. Starting with road geometry estimation, i.e., describing the curva-
ture of the lane, which is covered in
C. Lundquist and T. B. Schön. Road geometry estimation and vehicle
tracking using a single track model. In Proceedings of the IEEE Intel-
ligent Vehicles Symposium, pages 144–149, Eindhoven, The Nether-
lands, June 2008.

C. Lundquist and T. B. Schön. Joint ego-motion and road geometry


estimation. Information Fusion, 12:253–263, October 2011.
Road edge estimation primarily aims to estimate the position and shape of sta-
tionary objects next to the road. Two approaches have been studied; road edges
modeled as extended targets are described in
C. Lundquist and T. B. Schön. Estimation of the free space in front
of a moving vehicle. In Proceedings of the SAE World Congress, SAE
paper 2009-01-1288, Detroit, MI, USA, April 2009a.
C. Lundquist, U. Orguner, and T. B. Schön. Tracking stationary ex-
tended objects for road mapping using radar measurements. In Pro-
ceedings of the IEEE Intelligent Vehicles Symposium, pages 405–410,
Xi’an, China, June 2009.
C. Lundquist, U. Orguner, and F. Gustafsson. Estimating polynomial
structures from radar data. In Proceedings of the International Con-
ference on Information Fusion, Edinburgh, UK, July 2010b.
C. Lundquist, U. Orguner, and F. Gustafsson. Extended target track-
ing using polynomials with applications to road-map estimation. IEEE
Transactions on Signal Processing, 59(1):15–26, January 2011c.
and an approach where the stationary objects along the road are represented by
an intensity map is presented in
C. Lundquist, L. Danielsson, and F. Gustafsson. Random set based
road mapping using radar measurements. In Proceedings of the Eu-
ropean Signal Processing Conference, pages 219–223, Aalborg, Den-
mark, August 2010a.
C. Lundquist, L. Hammarstrand, and F. Gustafsson. Road intensity
based mapping using radar measurements with a probability hypoth-
esis density filter. IEEE Transactions on Signal Processing, 59(4):1397–
1408, April 2011b.
A method to on-line estimate the cornering stiffness parameters of the tires is
presented in
C. Lundquist and T. B. Schön. Recursive identification of cornering
stiffness parameters for an enhanced single track model. In Proceed-
ings of the IFAC Symposium on System Identification, pages 1726–
1731, Saint-Malo, France, July 2009b.
An overview of the results from the sefs project, and specifically an overview
about mapping techniques, is given in
M. Ahrholdt, F. Bengtsson, L. Danielsson, and C. Lundquist. SEFS
– results on sensor data fusion system development. In Proceedings
of the World Congress on Intelligent Transportation Systems and Ser-
vices, Stockholm, Sweden, September 2009.

C. Lundquist. Automotive Sensor Fusion for Situation Awareness. Li-


centiate Thesis No 1422, Department of Electrical Engineering, Lin-
köping University, Sweden, 2009.

C. Lundquist, T. B. Schön, and F. Gustafsson. Situational awareness


and road prediction for trajectory control applications. In A. Eskan-
darian, editor, Handbook of Intelligent Vehicles, chapter 24. Springer,
November 2011e.

Extended target tracking with a phd filter is described in

K. Granström, C. Lundquist, and U. Orguner. A Gaussian mixture


PHD filter for extended target tracking. In Proceedings of the In-
ternational Conference on Information Fusion, Edinburgh, UK, July
2010.

C. Lundquist, K. Granström, and U. Orguner. Estimating the shape of


targets with a PHD filter. In Proceedings of the International Confer-
ence on Information Fusion, Chicago, IL, USA, July 2011a.

U. Orguner, C. Lundquist, and K. Granström. Extended target track-


ing with a cardinalized probability hypothesis density filter. In Pro-
ceedings of the International Conference on Information Fusion, pages
65–72, Chicago, IL, USA, July 2011.

K. Granström, C. Lundquist, and U. Orguner. Tracking rectangular


and elliptical extended targets using laser measurements. In Proceed-
ings of the International Conference on Information Fusion, Chicago,
IL, USA, July 2011b.

K. Granström, C. Lundquist, and U. Orguner. Extended target track-


ing using a Gaussian-mixture PHD filter. IEEE Transactions on Aero-
space and Electronic Systems, 2011a. Under review.

An approach to use camera data to improve the position estimate of a vehicle is


presented in

E. Nilsson, C. Lundquist, T. B. Schön, D. Forslund, and J. Roll. Vehicle


motion estimation using an infrared camera. In Proceedings of the
World Congress of the International Federation of Automatic Control,
Milan, Italy, August 2011.

A marginalized particle filter based method to estimate the tire radii, and thereby
being able to detect pressure losses is presented in

E. Özkan, C. Lundquist, and F. Gustafsson. A Bayesian approach to


jointly estimate tire radii and vehicle trajectory. In Proceedings of the
IEEE Conference on Intelligent Transportation Systems, Washington
DC, USA, October 2011.

C. Lundquist, E. Özkan, and F. Gustafsson. Tire radii estimation us-


ing a marginalized particle filter. IEEE Transactions on Intelligent
Transportation Systems, 2011d. Submitted.
Finally, the pedagogic contribution
C. Lundquist, M. Skoglund, K. Granström, and T. Glad. How peer-
review affects student learning. In Utvecklingskonferens för Sveriges
ingenjörsutbildingar, Norrköping, Sweden, November 2011f.
describes how peer-review can improve students’ learning. This work is
implemented in a course in sensor fusion.

1.7 Contributions
The main contributions of this thesis are briefly summarized and presented be-
low:
Mapping: An overview of different methods to map stationary objects in the sur-
roundings of the vehicle are summarized in Paper A.
Road curvature estimation: The estimation from camera and radar is improved
by using information about the ego vehicle motion. The results are pre-
sented in Paper B.
Road edge estimation: Errors-in-variables methods are applied in order to track
the road edges using point measurements, and the results are given in Pa-
per C. The edges are modeled as extended targets.
Intensity based map: An intensity function is used to represent stationary ob-
jects along the road. Structures in the road are used to improve the update
and representation of the map as described in Paper D.
Extended target tracking: A modification of the gm-phd filter, which is able to
handle extended targets, is presented in Paper E.
Target size estimation: An approach to estimate the size and the shape is intro-
duced in Paper F. The tracking framework contains a hybrid state space
where measurement generating points and the measurements are modeled
by random finite sets and target states by random vectors.
Tire radii estimation: The marginalized particle filter is applied to the problem
of estimating the tire radii of a vehicle. The Bayesian method, presented in
Paper G, is efficient and the estimate is more accurate than estimates from
comparable methods.

1.8 Thesis Outline


The thesis is divided into two parts. The first part contains an overview of the
research area and background theory for the second part, which contains edited

versions of published papers.

1.8.1 Outline of Part I


An overview of vehicle, road and target models is given in Chapter 2, as well
as some methods to simplify and make the models applicable for automotive
applications. The chapter is concerned with the inner part of the model based
estimation process, i.e., the process model and the measurement model illustrated
by the two white rectangles in Figure 1.1.
Chapter 3 describes estimation and filtering algorithms, which have been applied
to derive the results in Part II. The estimation process is illustrated by the gray
rectangle in Figure 1.1, and the chapter has a tutorial character.
Target tracking is the subject of Chapter 4, and it describes the framework which
takes care of incoming measurements. The framework is illustrated by the black
rectangle in Figure 1.1.
Chapter 5 describes a filtering framework which propagates the so called phd,
being an approximation of a finite random set of targets. The phd-filter unifies
the track management and the estimation process, i.e., the black and the gray box
in Figure 1.1.
Finally, the work is summarized and the next steps for future work are given in
Chapter 6.

1.8.2 Outline of Part II


Part II presents a collection of publications in which the author of this thesis has
been involved. The aim of this section is to give a very brief abstract of the
publications, describe how they relate to each other and emphasize the author’s
contribution.
The first four papers are concerned with estimating and mapping the road and
the surroundings of the vehicle.
Paper A,
C. Lundquist, T. B. Schön, and F. Gustafsson. Situational awareness
and road prediction for trajectory control applications. In A. Eskan-
darian, editor, Handbook of Intelligent Vehicles, chapter 24. Springer,
November 2011e.
gives an overview of mapping methods for automotive applications. The paper
summarizes both standard approaches, of which some have been implemented
by the author, and methods developed by the author in other publications. There
is a slight overlap of the material presented in the paper with other papers in this
thesis. The author has written the major part of this paper, except for the section
covering lane tracking using a camera.
Paper B,

C. Lundquist and T. B. Schön. Joint ego-motion and road geometry


estimation. Information Fusion, 12:253–263, October 2011.
presents early work of the author. It is shown how the road curvature estimate
can be improved and made more robust by fusing lane marking information with
the position of other vehicles on the road and knowledge about the ego vehicle’s
motion. The lane information is obtained by a camera, the other vehicles are
tracked by a radar and the ego vehicle’s motion is measured with e.g., an imu.
The author’s contribution is the model which connects the ego vehicle’s motion to
the estimate of the curvature.
Paper C,
C. Lundquist, U. Orguner, and F. Gustafsson. Extended target track-
ing using polynomials with applications to road-map estimation. IEEE
Transactions on Signal Processing, 59(1):15–26, January 2011c.
describes a method to track the road edges as extended targets. This is the first article in a series in which the author deals with extended targets. In this paper the targets are represented by polynomials, which model e.g., guardrails along the road. The presented ideas are primarily the author's contribution; however, Dr. Umut Orguner has assisted with his knowledge in the area of target tracking to make it possible to implement the ideas.
Paper D,
C. Lundquist, L. Hammarstrand, and F. Gustafsson. Road intensity
based mapping using radar measurements with a probability hypoth-
esis density filter. IEEE Transactions on Signal Processing, 59(4):1397–
1408, April 2011b.
is the last of the four papers describing models and algorithms to represent and
estimate the road. In this paper objects along road edges are represented by an
intensity function. The intensity in each position of the map describes the density
of targets in that position. Dr. Lars Hammarstrand came up with the initial ideas for this paper; the realization of the idea was formed in discussion with the author.
The modifications to the standard phd filter were implemented by the author,
who also wrote the major part of the paper.
The next two papers treat methods to track moving targets, e.g., vehicles, which
have an extent.
Paper E,
K. Granström, C. Lundquist, and U. Orguner. Extended target track-
ing using a Gaussian-mixture PHD filter. IEEE Transactions on Aero-
space and Electronic Systems, 2011a. Under review.
introduces a phd filter based method to track targets which potentially might give rise to more than one measurement. The authors came up with the idea for this paper at a conference where the mathematical framework was presented. The contribution of this paper is the implementation, including the necessary assumptions which must be made to make the theory computationally tractable. All authors of this paper have contributed to the presented approach; however, the major part of the implementation and the writing was done by Karl Granström.
Paper F,
C. Lundquist, K. Granström, and U. Orguner. Estimating the shape of
targets with a PHD filter. In Proceedings of the International Confer-
ence on Information Fusion, Chicago, IL, USA, July 2011a.
presents a new approach to track the size and the shape of a target. Since the output of many off-the-shelf sensors consists of point observations, a model must be found that relates point measurements to a shape. In this paper a random finite set describes points on the target's surface, which are assumed to have generated the measurements. The idea was mainly developed by the author, and the implementation and writing were also primarily performed by the author. However, the results were improved through the valuable advice regarding target tracking given by Dr. Umut Orguner.
The last paper is concerned with the ego vehicle only.
Paper G,
C. Lundquist, E. Özkan, and F. Gustafsson. Tire radii estimation us-
ing a marginalized particle filter. IEEE Transactions on Intelligent
Transportation Systems, 2011d. Submitted.
is based on theories recently developed by, among others, the two co-authors Dr. Emre Özkan and Professor Fredrik Gustafsson. In this paper these theories are applied to the problem of estimating the tire radii of a vehicle. The primary aim is to detect tire pressure losses. The author has contributed by implementing these theories, by applying them to real data and by writing the major part of the paper. Dr. Emre Özkan has helped to implement the method based on his knowledge in the area.
2 Models of Dynamic Systems
Given measurements from several sensors, the objective is to estimate one or several state variables, either by improving a measured signal or by estimating a signal which is not, or cannot be, directly measured. In either case the relationship between the measured signals and the state variables must be described, and the equations describing this relationship are referred to as the measurement model. When dealing with dynamic systems, as is commonly the case in automotive applications, the objective might also be to predict the value of the state variables at the next time step. The prediction equation is referred to as the process model. This chapter deals with these two types of models.
As mentioned in the introduction in Section 1.5, a general model of dynamic
systems is provided by the nonlinear state space model
xk+1 = fk (xk , uk , wk , θk ), (2.1a)
yk = hk (xk , uk , ek , θk ). (2.1b)
Most mechanical and physical laws are provided in continuous-time, but com-
puter implementations are made in discrete-time, i.e., the process and measure-
ment models are derived in continuous-time t according to
ẋ(t) = a(x(t), u(t), w(t), θ(t), t), (2.2a)
y(t) = c(x(t), u(t), e(t), θ(t), t), (2.2b)
and are then discretized. Discretization is the topic of Section 2.2. Special cases of the general state space model (2.1), such as the linear state space model and the state space model with additive noise, are discussed in Sections 2.3 and 2.4, respectively.
Several models for various applications are given in the papers in Part II. This chapter therefore begins with an introduction to some of these models in Section 2.1.

2.1 Overview of the Models Used in the Thesis


The single track model was introduced in Example 1.1 and is used as an example throughout the first three chapters of this thesis. Furthermore, the model is a vital part in deriving the results in Paper B. The process and measurement models of the single track model are given in Example 2.1.
2.1 Example: Single Track Model
The state variables x, the input signals u and the measurement signals y of the
ego vehicle model were defined in Example 1.1, and are repeated here for conve-
nience
x = [ψ̇E  β]ᵀ,   (2.3a)
u = [δf  v]ᵀ,   (2.3b)
y = [ψ̇E  ay]ᵀ.   (2.3c)
Note that the front wheel angle δf is used directly as an input signal to simplify
the example. The continuous-time single track process and measurement models
are given by
ẋ = [ −((Cαf lf² cos δf + Cαr lr²)/(Izz v)) ψ̇E + ((−Cαf lf cos δf + Cαr lr)/Izz) β + (Cαf lf tan δf)/Izz ;
      −(1 + (Cαf lf cos δf − Cαr lr)/(v² m)) ψ̇E − ((Cαf cos δf + Cαr)/(mv)) β + (Cαf sin δf)/(mv) ],   (2.4a)

y = [ ψ̇E ;
      ((−Cαf lf cos δf + Cαr lr)/(mv)) ψ̇E − ((Cαf cos δf + Cαr)/m) β + (Cαf sin δf)/m ],   (2.4b)
with parameter vector
θ = [lf  lr  Izz  m  Cαf  Cαr],   (2.5)
where lf and lr denote the distances between the center of gravity of the vehicle and the front and rear axles, respectively. Furthermore, m denotes the mass of the vehicle and Izz denotes the moment of inertia of the vehicle about its vertical axis in the center of gravity. The parameters Cαf and Cαr are called cornering stiffness and describe the road tire interaction. Typical values for the parameters are given in Table 2.1. The model is derived in many vehicle dynamics books, e.g., Mitschke and Wallentowitz (2004); Wong (2001); Gillespie (1992); Rajamani (2006).
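As a complement to the equations, the continuous-time single track dynamics (2.4a) can be evaluated numerically. The following sketch is not code from the thesis; the function name and the parameter values (illustrative picks from the typical ranges in Table 2.1) are assumptions made for the example.

```python
import math

# Hypothetical parameter values chosen from the typical ranges in Table 2.1.
THETA = dict(lf=1.3, lr=1.5, Izz=2500.0, m=1500.0, Caf=60000.0, Car=60000.0)

def single_track_dynamics(x, u, theta=THETA):
    """Continuous-time single track model: returns xdot = a(x, u, theta).

    x = (yaw rate psi_dot, body side slip angle beta)
    u = (front wheel angle delta_f, velocity v)
    """
    psi_dot, beta = x
    delta_f, v = u
    lf, lr = theta["lf"], theta["lr"]
    Izz, m = theta["Izz"], theta["m"]
    Caf, Car = theta["Caf"], theta["Car"]

    # Yaw acceleration, first row of (2.4a).
    psi_ddot = (-(Caf * lf**2 * math.cos(delta_f) + Car * lr**2) / (Izz * v) * psi_dot
                + (-Caf * lf * math.cos(delta_f) + Car * lr) / Izz * beta
                + Caf * lf * math.tan(delta_f) / Izz)
    # Side slip dynamics, second row of (2.4a).
    beta_dot = (-(1.0 + (Caf * lf * math.cos(delta_f) - Car * lr) / (v**2 * m)) * psi_dot
                - (Caf * math.cos(delta_f) + Car) / (m * v) * beta
                + Caf * math.sin(delta_f) / (m * v))
    return (psi_ddot, beta_dot)
```

A useful sanity check when implementing the model is that with zero states and zero steering input all derivatives vanish, while a positive steering angle yields a positive yaw acceleration.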

Road models are treated in Papers A to D. A good overview of available and commonly used road models is given in Paper A, and therefore no introduction to road modeling in general is given in this chapter. However, road models are often combined with ego vehicle models and described in one single state space

Table 2.1: Typical ranges for the vehicle parameters used in the single track model.

m [kg]          Izz [kgm²]      Cα [N/rad]         lf + lr [m]
1000 − 2500     850 − 5000      45000 − 75000      2.5 − 3.0

model. One approach is presented in Paper B and compared with two other vehicle models with road interaction, which are described as follows.
The first model is described in Example 2.2 and it is commonly used for autonomous driving and lane keeping. This model is well described by e.g., Dickmanns (2007) and Behringer (1997). Note that the ego vehicle's motion is modeled with respect to a road fixed coordinate frame, unlike the single track model in Example 2.1, which is modeled in a Cartesian world coordinate frame.

2.2 Example: Vehicle Model with Road Interaction


The relative angle between the vehicle’s longitudinal axis and the tangent of the
road is denoted ψRE . The notation is illustrated in the Figures 1 to 4 in Paper B
(Pages 140-144). Ackermann’s steering geometry is used to obtain the relation
ψ̇RE = (v/lb) δf − v c0,   (2.6)
where the current curvature of the road c0 is the inverse of the road's radius. The lateral displacement of the vehicle in the lane is given by
l̇R = v(ψRE + β).   (2.7)
A process model for the body side slip angle was given in (2.4a), but since the yaw rate ψ̇ is not part of the model in this section, equation (2.4a) has to be rewritten according to
β̇ = −((Cαf cos δf + Cαr)/(mv)) β − (1 + (Cαf lf cos δf − Cαr lr)/(v² m)) (vx/lb) tan δf + (Cαf/(mv)) sin δf,   (2.8)
which is further simplified by assuming small angles, to obtain a linear model according to
β̇ = −((Cαf + Cαr)/(mv)) β + (Cαf/(mv) − v/lb) δf.   (2.9)

The steering wheel angle might have a bias, for example if the sensor is not calibrated, which leads to an accumulation of the side slip angle β in (2.9). Other reasons for a steering wheel angle bias are track torsion or strong side wind, which the driver compensates for with the steering wheel. The problem is solved by introducing an offset to the front wheel angle as a state variable according to
δf^m = δf + δf^offs.   (2.10)

Note that the curvature c0 is included in (2.6). The curvature c0 is usually treated
as a state variable and in this example the vehicle motion model is augmented
with the simple clothoid road model, see Paper A.

To summarize, the state variable vector is defined as
x = [ψRE  lR  β  δf  δf^offs  c0  c1]ᵀ,   (2.11)
where ψRE is the relative angle between the vehicle and the road, lR the lateral displacement of the vehicle in the lane, β the vehicle body side slip angle, δf the front wheel angle, δf^offs the front wheel angle bias offset, c0 the road curvature at the ego vehicle and c1 the curvature derivative,
and the process model is given by
ẋ = [ψ̇RE  l̇R  β̇  δ̇f  δ̇f^offs  ċ0  ċ1]ᵀ
  = [ (vx/lb) δf − v c0 ;
      v(ψRE + β) ;
      −((Cαf + Cαr)/(mv)) β + (Cαf/(mv) − v/lb) δf ;
      wδf ;
      0 ;
      v c1 ;
      wc1 ].   (2.12)
With the camera it is possible to measure ψRE, lR and c0. The front wheel angle δf can be derived from the steering wheel angle δs if the steering geometry is known; i.e., the measurement vector is
y = [ψRE  lR  c0  δs]ᵀ   (2.13)
and the measurement model is trivial. This model is used in the approach referred to as “fusion 3” in Paper B.
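For simulation or filtering, a process model of this kind is typically propagated with Euler sampling (see Section 2.2). The sketch below is an illustrative assumption, not thesis code: the parameter values, the sampling time and the zeroed noise terms are chosen only for the example, and the clothoid coupling ċ0 = v c1 is used for the curvature states.

```python
# One deterministic Euler step of the road-interaction process model (2.12);
# x = [psi_RE, lR, beta, delta_f, delta_f_offs, c0, c1]. The process noise
# terms w are set to zero here, and all numeric values are illustrative.
def propagate_fusion3(x, T=0.01, v=25.0, lb=2.8, m=1500.0, Caf=60000.0, Car=60000.0):
    psi_RE, lR, beta, delta_f, delta_f_offs, c0, c1 = x
    xdot = [
        v / lb * delta_f - v * c0,                                   # relative angle
        v * (psi_RE + beta),                                         # lateral offset
        -(Caf + Car) / (m * v) * beta + (Caf / (m * v) - v / lb) * delta_f,
        0.0,                                                         # w_delta_f = 0
        0.0,                                                         # bias is constant
        v * c1,                                                      # clothoid model
        0.0,                                                         # w_c1 = 0
    ]
    return [xi + T * xd for xi, xd in zip(x, xdot)]
```

On a straight road with all states at zero the state stays at zero, while a positive relative angle ψRE makes the lateral offset lR grow, as expected from the second row.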

Another and simpler vehicle model is obtained if the side slip angle is omitted and the ego vehicle's yaw rate ψ̇E is used instead of the steering wheel angle. The model is summarized in Example 2.3, and thoroughly described together with results in Eidehall (2007); Eidehall et al. (2007); Eidehall and Gustafsson (2006); Gern et al. (2000, 2001); Zomotor and Franke (1997).
More advanced vehicle models with more degrees of freedom, including the two track model, are described by Schofield (2008).

2.3 Example: Simplified Vehicle Model with Road Interaction


The state variable vector is here defined as
x = [ψRE  lR  ψ̇E  c0  c1]ᵀ,   (2.14)
and the process model is simply given by
ẋ = [ψ̇RE  l̇R  ψ̈E  ċ0  ċ1]ᵀ
  = [ ψ̇E − vx c0 ;
      vx ψRE ;
      wψ̇E ;
      v c1 ;
      wc1 ].   (2.15)
The measurement vector is
y = [ψRE  lR  c0  ψ̇E]ᵀ,   (2.16)
where the yaw rate is measured by the vehicle's imu. This model is used in the approach referred to as “fusion 2” in Paper B.

In this work, only measurements from the ego vehicle's sensors are available; that is, the target's motion is measured using the ego vehicle's radar and camera. This is the reason why the target model is simpler than the ego vehicle model. The targets play an important role in the sensor fusion framework presented in this work, but little effort has been spent modeling their motion. Instead standard models from the target tracking literature are used. Surveys of different process models and measurement models are given by Rong Li and Jilkov (2003) and Rong Li and Jilkov (2001), respectively. The subject is also covered in the books by Blackman and Popoli (1999) and Bar-Shalom et al. (2001). One typical target model is given in Example 2.4. This model is used in Papers F and G, and a somewhat similar model, called the constant velocity model, is used in Paper E.

2.4 Example: Coordinated Turn Model


The coordinated turn model is commonly used to model moving targets. The ego vehicle's radar and camera measure the range rTiEs and the azimuth angle ψTiEs to target number i, as described in the introduction in Example 1.2. The states of the coordinated turn model in polar velocity are given by
x = [x^W_TiW  y^W_TiW  v^Ti  ψTi  ψ̇Ti]ᵀ,   (2.17)
where x^W_TiW and y^W_TiW are the x- and y-position in the W-frame, v^Ti the longitudinal velocity, ψTi the heading angle and ψ̇Ti the yaw rate of the target.

The process and measurement models are given by
ẋ = [ẋ^W_TiW  ẏ^W_TiW  v̇^Ti  ψ̇Ti  ψ̈Ti]ᵀ
  = [ v^Ti cos ψTi ;  v^Ti sin ψTi ;  0 ;  ψ̇Ti ;  0 ] + [ 0 ;  0 ;  wv̇^Ti ;  0 ;  wψ̈Ti ],   (2.18a)
z = [rTiEs  ψTiEs]ᵀ
  = [ √( (x^W_TiW − x^W_EW − x^E_EsE)² + (y^W_TiW − y^W_EW − y^E_EsE)² ) ;
      arctan( y^W_TiW / x^W_TiW ) − ψE − ψEsE ] + e,   (2.18b)
where (x^E_EsE, y^E_EsE, ψEsE) represents the sensor mounting position and orientation in the ego vehicle coordinate frame E. The ego vehicle's position in the world frame (x^W_EW, y^W_EW) is included in the measurement model of the target. To be able to estimate the ego vehicle's position, the single track ego vehicle state variable vector and state space model (2.4) have to be augmented with the ego vehicle's position. Note that the ego vehicle's heading angle is denoted ψE here to differentiate it from the heading angle of the target.

2.2 Discretizing Continuous-Time Models


The measurements dealt with in this work are sampled and handled as discrete-
time variables in computers and electronic control units (ecu). All sensor signals
are transferred in sampled form from different sensors to the log-computer using
the so called can-Bus (Controller Area Network). Hence, the systems discussed
in this thesis must also be described using discrete-time models according to the
state space model in (2.1). Nevertheless, since physical relations commonly are
given in continuous-time, the various systems presented in this thesis, such as the
single track model in Example 2.1, are derived and represented using continuous-
time state space models in the form (2.2). Thus, all continuous-time models in
this thesis have to be discretized in order to describe the measurements. Only a few of the motion models can be discretized exactly by solving the sampling formula
x_{k+1} = x_k + ∫_k^{k+T} a(x(t), u(t), w(t), θ(t)) dt,   (2.19)
analytically, where T denotes the sampling time.

2.5 Example: Exact Sampling of the Coordinated Turn Model


Consider the continuous-time coordinated turn model in Example 2.4. The analytic solution of (2.19) is
x^W_TiW,k+1 = x^W_TiW,k + (2 v_k^Ti / ψ̇Ti,k) sin(ψ̇Ti,k T/2) cos(ψTi,k + ψ̇Ti,k T/2),   (2.20a)
y^W_TiW,k+1 = y^W_TiW,k + (2 v_k^Ti / ψ̇Ti,k) sin(ψ̇Ti,k T/2) sin(ψTi,k + ψ̇Ti,k T/2),   (2.20b)
v_{k+1}^Ti = v_k^Ti,   (2.20c)
ψTi,k+1 = ψTi,k + T ψ̇Ti,k,   (2.20d)
ψ̇Ti,k+1 = ψ̇Ti,k.   (2.20e)

It is simpler than using the exact sampling formula (2.19) to make use of the standard forward Euler method, which approximates (2.2a) according to
x_{k+1} ≈ x_k + T a(x_k, u_k, w_k, θ_k) ≜ f_k(x_k, u_k, w_k, θ_k).   (2.21)
This is a very rough approximation with many disadvantages, but it is frequently used due to its simplicity. Example 2.6 shows the Euler approximation of the coordinated turn model.

2.6 Example: Euler Sampling of the Coordinated Turn Model


Consider the continuous-time coordinated turn model in Example 2.4. The solution of (2.21) is
x^W_TiW,k+1 = x^W_TiW,k + T v_k^Ti cos ψTi,k,   (2.22a)
y^W_TiW,k+1 = y^W_TiW,k + T v_k^Ti sin ψTi,k,   (2.22b)
v_{k+1}^Ti = v_k^Ti,   (2.22c)
ψTi,k+1 = ψTi,k + T ψ̇Ti,k,   (2.22d)
ψ̇Ti,k+1 = ψ̇Ti,k.   (2.22e)
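The two discretizations (2.20) and (2.22) can be compared directly: for small sampling times T the Euler step approaches the exact step. The following minimal sketch is an illustration, not thesis code; the function names are assumptions, and the exact step assumes a nonzero yaw rate (the division by ψ̇ is undefined for a straight-moving target).

```python
import math

# State x = (x-position, y-position, speed v, heading psi, yaw rate w).
def ct_exact(x, T):
    """Exact sampling (2.20) of the coordinated turn model; requires w != 0."""
    px, py, v, psi, w = x
    px1 = px + (2 * v / w) * math.sin(w * T / 2) * math.cos(psi + w * T / 2)
    py1 = py + (2 * v / w) * math.sin(w * T / 2) * math.sin(psi + w * T / 2)
    return (px1, py1, v, psi + T * w, w)

def ct_euler(x, T):
    """Forward Euler sampling (2.22) of the same model."""
    px, py, v, psi, w = x
    return (px + T * v * math.cos(psi),
            py + T * v * math.sin(psi),
            v, psi + T * w, w)
```

Note that the heading and yaw rate updates are identical in both versions; only the position update differs, and the per-step position error of the Euler step is of order T².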

Sampling of linear systems is thoroughly described by Rugh (1996). Moreover,


different options to sample and linearize non-linear continuous-time systems are
described by Gustafsson (2000). The linearization problem is treated in Chap-
ter 3, in a discussion of approximative model based filters such as the extended
Kalman filter.

2.3 Linear State Space Model


An important special case of the general state space model (2.1) is the linear Gaussian state space model, where f and h are linear functions and the noise is Gaussian,
x_{k+1} = F_k(θ) x_k + G^u_k(θ) u_k + G^w_k(θ) w_k,   (2.23a)
y_k = H_k(θ) x_k + H^u_k(θ) u_k + e_k,   (2.23b)
where w_k ∼ N(0, Q_k) and e_k ∼ N(0, R_k). Note that the single track model (2.4) is
linear in the state variables, as shown in Example 2.7.

2.7 Example: Linearized Single Track Model


The front wheel angle is usually quite small at higher velocities and the assumptions cos δf ≈ 1, tan δf ≈ sin δf ≈ δf therefore apply. The continuous-time single track model (2.4) may first be discretized using Euler sampling and can then be written in the linear form (2.23) according to
x_{k+1} = [ 1 − T (Cαf lf² + Cαr lr²)/(Izz vk)      T (−Cαf lf + Cαr lr)/Izz
            −T (1 + (Cαf lf − Cαr lr)/(vk² m))      1 − T (Cαf + Cαr)/(m vk) ] x_k
          + [ T Cαf lf/Izz ;  T Cαf/(m vk) ] δf,k + w_k,   (2.24a)
y_k = [ 1      0
        (−Cαf lf + Cαr lr)/(m vk)      −(Cαf + Cαr)/m ] x_k + [ 0 ;  Cαf/m ] δf,k + e_k.   (2.24b)

The model is linear in the input δf,k. However, the input vk is implicitly modeled in the matrices F_k(vk, θk), G^u_k(vk, θk) and H_k(vk, θk).
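As a sketch of how the velocity-dependent matrices of (2.24) could be assembled in practice, consider the following; the numerical parameter values and the sampling time are assumptions within the ranges of Table 2.1, not values from the thesis.

```python
import numpy as np

# Build F, G, H of the Euler-discretized linear single track model (2.24)
# for a given velocity vk; all parameter values below are illustrative.
def linear_single_track(vk, T=0.01, lf=1.3, lr=1.5, Izz=2500.0, m=1500.0,
                        Caf=60000.0, Car=60000.0):
    F = np.array([
        [1 - T * (Caf * lf**2 + Car * lr**2) / (Izz * vk),
         T * (-Caf * lf + Car * lr) / Izz],
        [-T * (1 + (Caf * lf - Car * lr) / (vk**2 * m)),
         1 - T * (Caf + Car) / (m * vk)],
    ])
    G = np.array([[T * Caf * lf / Izz],
                  [T * Caf / (m * vk)]])
    H = np.array([[1.0, 0.0],
                  [(-Caf * lf + Car * lr) / (m * vk), -(Caf + Car) / m]])
    return F, G, H
```

Since the matrices depend on vk, they must be rebuilt at every time step; for reasonable parameter values and a short sampling time the eigenvalues of F lie inside the unit circle, reflecting the stable yaw dynamics.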

Linear state space models and linear system theory in general are thoroughly de-
scribed by Rugh (1996) and Kailath (1980).

2.4 Nonlinear State Space Model with Additive Noise


A special case of the general state space model (2.1) is given by assuming that the noise enters additively and the input signals are subsumed in the time-varying dynamics, which leads to the form
x_{k+1} = f_k(x_k, θ_k) + w_k,   (2.25a)
y_k = h_k(x_k, θ_k) + e_k.   (2.25b)

In Example 1.1 an ego vehicle model was introduced, where the steering wheel
angle and the vehicle velocity were modeled as deterministic input signals. This
consideration can be motivated by claiming that the driver controls the vehicle’s
lateral movement with the steering wheel and the longitudinal movement with
the throttle and brake pedals. Furthermore, the steering wheel angle and the
velocity are measured with less noise than the other measurement signals, and

they are often pre-processed to improve the accuracy and remove bias. With
these arguments the resulting model, given in Example 2.1, may be employed.
The model is in some sense simpler than the case when these two signals are
assumed to be stochastic measurements, as shown in Example 2.8.

2.8 Example: Single Track Model without Deterministic Input Signals


In classical signal processing it is uncommon to allow deterministic input signals, at least not if these are measured by sensors. The input signals in Example 1.1 should instead be modeled as stochastic measurements. Hence, the measurement vector and the state vector are augmented and the system is remodeled. One example is given by the state space model
[ψ̇E,k+1 ;  βk+1 ;  δf,k+1 ;  vk+1]
  = [ ψ̇E,k + T( −((Cαf lf² + Cαr lr²)/(Izz vk)) ψ̇E,k + ((−Cαf lf + Cαr lr)/Izz) βk + (Cαf lf/Izz) δf,k ) ;
      βk + T( −(1 + (Cαf lf − Cαr lr)/(vk² m)) ψ̇E,k − ((Cαf + Cαr)/(m vk)) βk + (Cαf/(m vk)) δf,k ) ;
      δf,k ;
      vk ] + wk,   (2.26a)
[ψ̇k ;  ay,k ;  δs,k ;  vx,k]
  = [ ψ̇E,k ;
      ((−Cαf lf + Cαr lr)/(m vk)) ψ̇E,k − ((Cαf + Cαr)/m) βk ;
      hδf(ψ̇E,k, βk, δf,k, θ) ;
      vk ] + ek,   (2.26b)
where T is the sample time. The first two rows of the process and measurement models are the discretized versions of the model given in (2.4). In the second row of the measurement model the simplifications cos δf ≈ 1 and sin δf ≈ 0 are made so that the measurement model does not depend on the measurement δf. The third measurement signal is the steering wheel angle δs, but the third state is the front wheel angle δf. A possible measurement model hδf will be discussed in Example 3.1. Finally, a random walk is assumed for the front wheel angle δf and the velocity v in the process model.

Another way to represent the state space model is given by considering the prob-
ability density function (pdf) of different signals or state variables of a system.
The transition density p(xk+1 |xk ) models the dynamics of the system and if the
process noise is assumed additive, the transition model is given by
p(xk+1 |xk ) = pw (xk+1 − f (xk , uk , θk )), (2.27)
where pw denotes the pdf of the process noise w. The density describes the state
at time k + 1 given that information about the state at the previous time step k is
known. A fundamental property of the process model is the Markov property,
p(xk+1 |x1 , . . . , xk ) = p(xk+1 |xk ). (2.28)
This means that the state of the system at time k contains all necessary informa-
tion about the past, which is needed to predict the future behavior of the system.

Furthermore, if the measurement noise is assumed additive, then the likelihood function, which describes the measurement model, is given by
p(y_k | x_k) = p_e(y_k − h(x_k, u_k, θ_k)),   (2.29)
where p_e denotes the pdf of the sensor noise e. The measurement y_k received by the sensor is described by the likelihood function given that information about the state x_k at the same time step k is known. The two density functions in (2.27) and (2.29) are often referred to as a hidden Markov model (hmm) according to
x_{k+1} ∼ p(x_{k+1} | x_k),   (2.30a)
y_k ∼ p(y_k | x_k),   (2.30b)
since x_k is not directly visible in y_k. It is a statistical model where one Markov process, which represents the system, is observed through another stochastic process, the measurement model.
3 Estimation Theory
This thesis is concerned with estimation problems, i.e. given measurements y the
aim is to estimate the parameter θ or the state x in (1.1). Both problems rely
on the same theoretical basis and the same algorithms can be used. The parame-
ter estimation problem is a part of the system identification process, which also
includes the derivation of the model structure, discussed in the previous chap-
ter. The state estimation problem utilizes the model and its parameters to solve
for the states. When estimating x it is assumed that θ is known and vice versa.
The parameter is estimated in advance if θ is time invariant or in parallel with
the state estimation problem if θ is assumed to be time varying. Example 3.1
illustrates how the states and parameters may be estimated.
3.1 Example: Parameter and State Estimation
Consider the single track model, which was introduced in Example 1.1 and whose equations were given in Example 2.1. The front wheel angle δf is considered to
be a state variable in Example 2.8 and the steering wheel angle δs is treated as a
measurement. The measurement equation is in its simplest form a constant ratio
given by
δs,k = h(δf ,k , θ) = is δf ,k . (3.1)
The parameter θ = is is assumed to be time invariant. The state δf must be
known in order to identify the parameter θ. Usually the parameter is estimated
off-line in advance using a test rig, where the front wheel angle is measured using
highly accurate external sensors. The parameter is then used within the model in
order to estimate the states on-line while driving.
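For the constant-ratio model (3.1), off-line identification reduces to scalar least squares. The sketch below uses synthetic data with an assumed ratio of 16; it only illustrates the idea, not the actual rig procedure.

```python
def estimate_steering_ratio(delta_f, delta_s):
    """LS estimate of i_s in delta_s = i_s * delta_f (scalar regression)."""
    num = sum(df * ds for df, ds in zip(delta_f, delta_s))
    den = sum(df * df for df in delta_f)
    return num / den

# Synthetic rig data: assumed true ratio 16, front wheel angles in radians.
delta_f = [0.01, 0.02, -0.015, 0.03]
delta_s = [16.0 * d for d in delta_f]
```

On noiseless data the estimator recovers the assumed ratio exactly; with measurement noise on δs it returns the least squares fit instead.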
The tire parameter Cα is assumed to change with weather and road conditions, hence it is a time varying parameter. It has to be identified on-line at time k using the state estimates from the previous time step k − 1, which in turn were estimated using the parameter estimate from time step k − 1.

For various reasons some systems are only modeled by a likelihood function. Of-
ten these systems are static and there exists no Markov transition density. How-
ever, most systems in this thesis are modeled using both a prediction and a like-
lihood function. In system identification, the model parameter is estimated with-
out physically describing the parameter’s time dependency, hence static estima-
tion theory is used. The state can be estimated in more or less the same way.
However, the process model (1.1a) is often given and its time transition informa-
tion is exploited to further improve the state estimate.
The origins of the estimation research field can be traced back to the work by
Gauss in 1795 on least squares (Abdulle and Wanner, 2002) and Bayes (1763)
on conditional probabilities. Bayes introduced an important theorem which has
come to be referred to as Bayes’ theorem,
p(y|x, θ)p(x, θ)
p(x, θ|y) = , (3.2)
p(y)
with which it is possible to calculate the posterior probability p(x, θ|y) given a
prior probability p(x, θ) and the likelihood function p(y|x, θ). Note that both
the measurement and the state or parameter are treated as random variables.
Another view of the estimation problem was introduced by Fisher (1922), who
claimed that the probability of an estimate should be seen as a relative frequency
of the state or parameter, given data from long-run experiments. Fisher also
treats the measurement as a random variable. The main difference to Bayes’ ap-
proach is that in Fisher’s approach there is a true state or parameter which is
treated as deterministic, but unknown. To accentuate the different views, the
likelihood is often written using ℓ(x, θ) to emphasize that the likelihood is regarded as a function of the state x and the parameter θ.
After this brief historical background, the remainder of this chapter is outlined as follows. In Section 3.1, static estimation methods based on both Fisher's and Bayes' theories are discussed. These methods can be used for both state and parameter estimation. In Section 3.2, dynamic estimation methods are discussed. These methods are, within the scope of this thesis, only used for state estimation and are based solely on Bayes' theories.

3.1 Static Estimation Theory


The general estimation problem consists of finding the estimates x̂ and θ̂ that
minimize a given loss function V(x, θ; y). This problem is separated into a parameter estimation problem and a state estimation problem according to
θ̂ = arg min_θ V(θ; x, y),   (3.3a)
x̂ = arg min_x V(x; θ, y).   (3.3b)

How to separate a typical estimation problem into these two parts is shown in Example 3.2.
General estimation techniques are covered by most textbooks on this topic, e.g.,
Kay (1993); Kailath et al. (2000); Ljung (1999). There are many estimation methods available; however, in this section the focus is on the methods used in Part II of this thesis.
3.2 Example: Parameter and State Estimation
Consider the linear single track model in Example 2.7. Suppose that the state
variables are measured with external and highly accurate sensors. The yaw rate
is measured with an extra imu and the body side slip angle β is measured with
a so called Correvit® sensor, which uses correlation technology to compute the
optical flow. The vehicle in Figure 2 on Page 297 is equipped with this type of
sensor at the front. Now, the parameter θ can be estimated, according to (3.3a).
This approach has been used to find some of the vehicle parameters in Papers B
and G.
Conversely, if θ is known and y is measured, the state variables x can be estimated
using (3.3b).

This section covers estimation problems without any process model f ( · ), where
a set of measurements is related to a parameter only via the measurement model
h( · ). Furthermore, only an important and special case where the measurement
model is linear in x is considered. The linear measurement model was given
in (2.23b) and is repeated here for convenience
yk = Hk (θ)xk + ek . (3.4)

3.1.1 Least Squares Estimator


The least squares (ls) estimate is defined as the solution to the optimization prob-
lem, where the squared errors between the predicted measurements and the ac-
tual measurements are minimized according to,
x̂_k^LS = arg min_x Σ_{τ=1}^{k} ||yτ − Hτ(θ) xτ||₂².   (3.5)
The solution for the linear case is given in Algorithm 1.
If the measurement covariance R = Cov (e) is known, or in practice at least as-
sumed to be known, then the weighted least squares (wls) estimate is given by
the optimization problem
x̂_k^WLS = arg min_x Σ_{τ=1}^{k} (yτ − Hτ(θ) xτ)ᵀ Rτ⁻¹ (yτ − Hτ(θ) xτ).   (3.6)
The solution for the linear case is given in Algorithm 2, and Example 3.3 illustrates how the single track vehicle model can be reformulated to estimate the

Algorithm 1 Least Squares


The least squares estimate and its covariance are given by
x̂_k^LS = ( Σ_{τ=1}^{k} Hτᵀ Hτ )⁻¹ Σ_{τ=1}^{k} Hτᵀ yτ = (HᵀH)⁻¹ HᵀY,   (3.7a)
Cov(x̂^LS) = (HᵀH)⁻¹ (HᵀRH) (HᵀH)⁻¹ ≜ P^LS.   (3.7b)
The last equality in (3.7a) is the batch solution, where H and Y are the stacked measurement models and measurement vectors, respectively. Furthermore, the measurement noise covariances Rτ = Cov(eτ) form the main diagonal of R according to R = diag(R₁, . . . , Rk).

Algorithm 2 Weighted Least Squares


The weighted least squares estimator and its covariance matrix are given by
x̂_k^WLS = ( Σ_{τ=1}^{k} Hτᵀ Rτ⁻¹ Hτ )⁻¹ Σ_{τ=1}^{k} Hτᵀ Rτ⁻¹ yτ = (HᵀR⁻¹H)⁻¹ HᵀR⁻¹Y,   (3.8a)
Cov(x̂^WLS) = (HᵀR⁻¹H)⁻¹ ≜ P^WLS,   (3.8b)
where the weighting matrix is the noise covariance R.
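The batch solution (3.8) translates directly into code. The sketch below uses numpy; for clarity the weighting matrix is inverted explicitly, whereas a numerically careful implementation would use a Cholesky factorization instead.

```python
import numpy as np

def wls(H, R, Y):
    """Batch WLS estimate (H^T R^-1 H)^-1 H^T R^-1 Y and its covariance."""
    Ri = np.linalg.inv(R)
    P = np.linalg.inv(H.T @ Ri @ H)   # covariance of the estimate, (3.8b)
    x_hat = P @ H.T @ Ri @ Y          # estimate, (3.8a)
    return x_hat, P

# Illustrative stacked measurement model and noiseless synthetic data.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, -1.0])
Y = H @ x_true
R = np.diag([1.0, 0.5, 2.0])
```

On noiseless data the estimate coincides with the true parameter for any positive definite R; the weighting only matters once the measurements carry different noise levels.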

parameters using the wls.

Another example, where both the ls and the wls estimators are applied, is given
in Paper C. The left and right borders of a road are modeled by polynomials and
the coefficients are the parameters which are estimated given a batch of measure-
ments from a radar.

3.3 Example: Parameter and State Estimation


Consider the linear single track model in Example 2.7 and the separation of the
parameter and the state estimation problems in Example 3.2. Suppose that the
vehicle’s mass m and the dimensions lf and lr are known. Furthermore, suppose
that the state variable x may be measured as described in Example 3.2. Consider
the measurement equation (2.24b); the parameter estimation problem can now
be formulated in the form (3.4) with
" #
C
y = H(x, u, lf , lr , m) αf + e, (3.9)
Cαr
and the parameters Cαf , Cαr can be solved for using e.g., wls in (3.6). Further-
more, the inverse of the moment of inertia 1/Izz may be estimated off-line by
writing the process model (2.24a) in the form (3.4) according to
1
xt+1 = H(xt , u, lv , lf , m, Cαf , Cαr ) · + w. (3.10)
Izz

3.1.2 Probabilistic Point Estimates


The maximum likelihood (ml) estimate, first introduced by Fisher (1912, 1922), is defined by
x̂_k^ML = arg max_{xk} p(y_{1:k} | xk).   (3.11)

Put into words, the estimate is chosen to be the parameter most likely to produce
the obtained measurements.

The posterior p(xk |y1:k ), when xk is random, contains all known information
about the state of the target at time k. The maximum a posteriori (map) esti-
mator is defined by
x̂^{MAP}_k = arg max_{x_k} p(x_k | y_{1:k}) = arg max_{x_k} p(y_{1:k} | x_k) p(x_k),   (3.12)

or put in words, find the most likely estimate of the parameter given the measure-
ments y1:k . Bayes’ theorem (3.2) and the fact that the maximization is performed
over xk are used in the second equality of (3.12). The ml estimate is for instance
used in Paper E.

3.2 Filter Theory


The topic of this section is recursive state estimation based on dynamic models.
The iteration process of the state space estimation was briefly described in words
in Section 1.5. The state estimation theory is influenced by the Bayesian view,
which implies that the solution to the estimation problem is provided by the
filtering pdf p(xk |y1:k ). The introduction to this section will be rather general
using the model defined in (2.30). Bayes’ theorem was introduced in (3.2) and is
used to derive the recursive Bayes filter equations
p(x_{k+1} | y_{1:k}) = ∫ p(x_{k+1} | x_k) p(x_k | y_{1:k}) dx_k,   (3.13a)

p(x_k | y_{1:k}) = p(y_k | x_k) p(x_k | y_{1:k−1}) / p(y_k | y_{1:k−1}),   (3.13b)

with the denominator

p(y_k | y_{1:k−1}) = ∫ p(y_k | x_k) p(x_k | y_{1:k−1}) dx_k.   (3.13c)

These equations describe the time evolution

· · · → x_{k|k} → x_{k+1|k} → x_{k+1|k+1} → · · ·   (3.14)
of the random state vector x. The Bayes posterior density function p(xk |y1:k ) con-
ditioned on the time sequence y1:k = {y1 , . . . , yk } of measurements accumulated
at time k is the probability density function of xk given measurements y1:k . The
probability density function p(xk+1 |y1:k ) is the time prediction of the posterior
p(xk |y1:k ) to the time step of the next measurement yk+1 . Note that the Bayes

normalization factor given by (3.13c) is independent of x_k. In practice the numerator
of (3.13b) is calculated and then simply normalized, since the integral of the
posterior density function must be equal to one. The single target Bayes filter
recursion is illustrated with a flow chart as follows

             prediction                         update
· · · → p_{k|k}(x_k | z_{1:k}) → p_{k+1|k}(x_{k+1} | z_{1:k}) → p_{k+1|k+1}(x_{k+1} | z_{1:k+1}) → · · ·
                    ↑                                  ↑
           Transition density                 Likelihood function
            p(x_{k+1} | x_k)                   p(z_{k+1} | x_{k+1})
                    ↑                                  ↑
             Motion model                     Measurement model
        x_{k+1} = f(x_k) + w_k            z_{k+1} = h(x_{k+1}) + e_{k+1}

After the update step is made the algorithm continues recursively, i.e., a new
prediction is performed and thereafter an update or correction with the likelihood
of the new measurements.

If p(yk |xk ), p(xk+1 |xk ) and p(xk ) are Gaussian and their corresponding process
and senor models are linear, as in (2.23), then (3.13a) and (3.13b) reduce to the
Kalman filter prediction and measurement update, respectively. The Kalman
filter is treated in Section 3.2.1. In contrast, if p(yk |xk ), p(xk+1 |xk ) and p(xk )
can be approximated by a Gaussian and their corresponding process and sensor
models are nonlinear (2.1), several approximations of (3.13a) and (3.13b) exist.
The two most common filters are the extended Kalman Filter and the unscented
Kalman filter, which are outlined in Sections 3.2.2 and 3.2.3, respectively. Other
methods, including methods that approximate other density functions than Gaus-
sian, are neatly covered by Hendeby (2008) and Schön (2006). The most pop-
ular approaches are the particle filter, which is covered in Section 3.2.4, and
the marginalized particle filter, see e.g., Arulampalam et al. (2002); Cappe et al.
(2007); Djuric et al. (2003); Karlsson (2005); Schön et al. (2005).

3.2.1 The Kalman Filter

The linear state space representation subject to Gaussian noise, which was given
in (2.23), is the simplest special case when it comes to state estimation. The model
is repeated here for convenience;
x_{k+1} = F_k(θ) x_k + G^u_k(θ) u_k + G^w_k w_k,   w_k ∼ N(0, Q_k),   (3.15a)
y_k = H_k(θ) x_k + H^u_k(θ) u_k + e_k,   e_k ∼ N(0, R_k).   (3.15b)
The linear model (3.15) has two important properties: all density functions involved
in the model and state estimation are Gaussian, and a Gaussian density function is
completely parametrized by its mean and covariance, i.e., the first and second
order moments. Hence, the Bayesian recursion (3.13) simplifies to propagating
only the mean and covariance of the involved probability density functions, as
illustrated in the following flow chart

             prediction                         update
· · · → N(x_{k|k}, P_{k|k}) → N(x_{k+1|k}, P_{k+1|k}) → N(x_{k+1|k+1}, P_{k+1|k+1}) → · · ·
                    ↑                                  ↑
           Transition density                 Likelihood function
         N(x_{k+1}; F x_k, Q_k)            N(z_{k+1}; H x_{k+1}, R_{k+1})
                    ↑                                  ↑
             Motion model                     Measurement model
         x_{k+1} = F x_k + w_k            z_{k+1} = H x_{k+1} + e_{k+1}

Algorithm 3 Kalman Filter

Consider the linear state space model (3.15). The Kalman filter is given by the
two following steps.
Prediction
x̂_{k|k−1} = F_{k−1} x̂_{k−1|k−1} + G^u_{k−1} u_{k−1}   (3.16a)
P_{k|k−1} = F_{k−1} P_{k−1|k−1} F^T_{k−1} + G^w_{k−1} Q_{k−1} (G^w_{k−1})^T   (3.16b)
Measurement Update
K_k = P_{k|k−1} H^T_k (H_k P_{k|k−1} H^T_k + R_k)^{−1}   (3.17a)
x̂_{k|k} = x̂_{k|k−1} + K_k (y_k − H_k x̂_{k|k−1} − H^u_k u_k)   (3.17b)
P_{k|k} = (I − K_k H_k) P_{k|k−1}   (3.17c)
The most well known estimation algorithm is the Kalman Filter (kf), derived
by Kalman (1960), and shown in Algorithm 3. Example 3.4 shows how the single
track vehicle model, introduced in Example 1.1, may be rewritten to be used with
the Kalman filter, which in turn is used to estimate the states.

3.4 Example: Linearized Single Track Model


The single track vehicle model was introduced in Example 1.1 and the model
equations were posed in Example 2.1. The process model (2.4a) and the measure-
ment model (2.4b) are linear in the state variables and can be written in the form
" # " #
ψ̇k+1 ψ̇
= Fk (vx , θ) k + Gku (vx , θ)δf + w, w ∼ N (0, Q), (3.18a)
βk+1 βk
" m# " #
ψ̇k ψ̇
= Hk (vx , θ) k + Hku (θ)δf + e, e ∼ N (0, R), (3.18b)
ay,k βk
as shown in Example 2.7. Since the input vx is present in Fk , Gku and Hk , these
matrices must be recalculated at each time step before being used in the Kalman
filter (Algorithm 3) to estimate the states.
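Algorithm 3 translates almost directly into code. The sketch below assumes additive process noise (G^w = I) and no input signal, and it uses a hypothetical one-dimensional constant velocity model where only the position is measured; all matrices and noise levels are illustrative only.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    # Time update (3.16), with additive process noise (G^w = I), no input
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, y, H, R):
    # Measurement update (3.17)
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (y - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Hypothetical 1-D constant velocity target: state [position, velocity].
T = 0.1
F = np.array([[1.0, T], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])                  # only position is measured
R = np.array([[0.25]])

rng = np.random.default_rng(1)
x, P = np.zeros(2), 10.0 * np.eye(2)        # vague prior
truth = np.array([0.0, 1.0])
for _ in range(100):
    truth = F @ truth
    x, P = kf_predict(x, P, F, Q)
    y = H @ truth + rng.normal(0.0, 0.5, size=1)
    x, P = kf_update(x, P, y, H, R)
```

Note that the unmeasured velocity is inferred through the coupling in F, just as the body side slip angle is inferred in the single track model.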

3.2.2 The Extended Kalman Filter

In general, most complex automotive systems tend to be nonlinear. When it
comes to solving state estimation problems in a sensor fusion framework,
nonlinear models are commonly applied. This holds also for the work presented in
this thesis, but the problems are restricted by the assumption that the process and
measurement noise are Gaussian. The most common representation of nonlinear
systems is the state space model given in (1.1), repeated here for convenience;
xk+1 = fk (xk , uk , wk , θ), wk ∼ N (0, Qk ), (3.19a)
yk = hk (xk , uk , ek , θ), ek ∼ N (0, Rk ). (3.19b)
The basic idea underlying the extended Kalman filter (ekf) is to approximate the
nonlinear model (3.19) by a local linear model and apply the Kalman filter to this
approximation. This local linear approximation is obtained by computing a first
order Taylor expansion around the current estimate. The result is the extended
Kalman filter, which is given in Algorithm 4. Early practical applications and
examples of the ekf are described in the works by Smith et al. (1962); Schmidt
(1966). An early reference where the ekf is treated is Jazwinski (1970); other
standard references are Anderson and Moore (1979); Kailath et al. (2000).

The linearization used in the ekf assumes that all second and higher order terms
in the Taylor expansion are negligible. This is certainly true for many systems,
but for some systems this assumption can significantly degrade the estimation
performance. Higher order ekf are discussed by Roth and Gustafsson (2011); Bar-
Shalom and Fortmann (1988); Gustafsson (2000). This problem will be revisited
in the next section.

Algorithm 4 Extended Kalman Filter

Consider the state space model (3.19). The extended Kalman filter is given by the
two following steps.
Prediction
x̂_{k|k−1} = f_{k−1}(x̂_{k−1|k−1}, u_{k−1}, 0, θ)   (3.20a)
P_{k|k−1} = F_{k−1} P_{k−1|k−1} F^T_{k−1} + G_{k−1} Q_{k−1} G^T_{k−1}   (3.20b)
where
F_k = ∂f_k(x_k, u_k, 0, θ)/∂x_k |_{x_k = x̂_{k|k}},   G_k = ∂f_k(x̂_{k|k}, u_k, w_k, θ)/∂w_k |_{w_k = 0}   (3.20c)
Measurement Update
K_k = P_{k|k−1} H^T_k (H_k P_{k|k−1} H^T_k + R_k)^{−1}   (3.21a)
x̂_{k|k} = x̂_{k|k−1} + K_k (y_k − h_k(x̂_{k|k−1}, u_k, 0, θ))   (3.21b)
P_{k|k} = (I − K_k H_k) P_{k|k−1}   (3.21c)
where
H_k = ∂h_k(x_k, u_k, 0, θ)/∂x_k |_{x_k = x̂_{k|k−1}}   (3.21d)
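A minimal sketch of Algorithm 4 is given below, where the Jacobians F_k and H_k are approximated by finite differences instead of being derived analytically, and the noise is assumed additive so that G Q G^T reduces to Q. The static motion model and the range-bearing sensor model are hypothetical stand-ins, chosen to match the setting of Example 3.5 below.

```python
import numpy as np

def jacobian(func, x, eps=1e-6):
    """Finite-difference stand-in for the derivatives in (3.20c) and (3.21d)."""
    fx = func(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros(len(x))
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def ekf_step(x, P, y, f, h, Q, R):
    # Prediction (3.20): linearize f around the current estimate
    F = jacobian(f, x)
    x = f(x)
    P = F @ P @ F.T + Q
    # Measurement update (3.21): linearize h around the predicted state
    H = jacobian(h, x)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (y - h(x))
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Hypothetical example: a stationary target in Cartesian coordinates,
# observed by a range-bearing sensor located in the origin.
f = lambda x: x
h = lambda x: np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])
Q = 1e-4 * np.eye(2)
R = np.diag([0.01, 0.05])

rng = np.random.default_rng(2)
truth = np.array([0.0, 1.0])
x, P = np.array([0.2, 0.8]), np.eye(2)
for _ in range(50):
    y = h(truth) + rng.multivariate_normal(np.zeros(2), R)
    x, P = ekf_step(x, P, y, f, h, Q, R)
```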

3.2.3 The Unscented Kalman Filter


The ekf is sufficient for many applications. However, to use an ekf the gradi-
ents of fk ( · ) and hk ( · ) must be calculated, which in some cases is either hard
to do analytically or computationally expensive to do numerically. An alterna-
tive approach, called the unscented Kalman filter (ukf) was proposed by Julier
et al. (1995); Julier and Uhlmann (1997) and further refined by e.g., Julier and
Uhlmann (2002, 2004); Julier (2002). Instead of linearizing fk ( · ) and hk ( · ), the
unscented transform (ut) is used to approximate the moments of the predicted
and updated states. Thereby the ukf to some extent also considers the second
order terms of the models, which is not done by the ekf.

The principle of the unscented transform is to carefully and deterministically


select a set of points, called sigma points, of the initial stochastic variable x, such
that their mean and covariance are equal to those of x. Then the sigma points are
passed through the non-linear function and based on the output the resulting
mean and covariance are derived. In case the process noise and measurement
noise are not additive, sigma points are selected from an augmented state space,
which includes the state x, the process noise w and the measurement noise e in
one augmented state vector

x̂^a_{k|k} = [ x̂^T_{k|k}   E(w_k)^T   E(e_{k+1})^T ]^T,   (3.22)

with dimension n_a = n_x + n_w + n_e and the corresponding covariance matrix

P^a_{k|k} = diag( P_{k|k},  Q_k,  R_{k+1} ).   (3.23)

If the noise is additive, then the noise covariances can be added directly to the
estimated covariances of the non-augmented sigma points.

There exist many possibilities to choose the sigma points, a thorough discussion
about different alternatives is presented by Julier and Uhlmann (2004). In the
present work only the standard form is reproduced. The basic principle is to
choose one sigma point in the mean of xa and 2na points symmetrically on a
given contour, described by the state covariance P^a. The sigma points χ^(i) and the
associated weights w^(i) are chosen as

χ^(0) = x̂^a,                                          w^(0) = w^(0)   (3.24a)
χ^(i) = χ^(0) + ( √( n_a/(1 − w^(0)) P^a ) )_i,        w^(i) = (1 − w^(0))/(2 n_a)   (3.24b)
χ^(i+n_a) = χ^(0) − ( √( n_a/(1 − w^(0)) P^a ) )_i,    w^(i+n_a) = (1 − w^(0))/(2 n_a)   (3.24c)

for i = 1, . . . , n_a, where (√A)_i is the i th column of any matrix B such that A = BB^T.
The augmented state vector makes it possible to propagate and estimate nonlinear
influences that the process noise and the measurement noise have on the state
vector and the measurement vector, respectively. The weight on the mean w^(0) is
used for tuning and according to Julier and Uhlmann (2004) preferable properties
for Gaussian density functions are obtained by choosing w^(0) = 1 − n_a/3. After
the sigma points have been acquired, the augmented state vector can be partitioned
according to

χ^a_{k|k} = [ (χ^x_{k|k})^T   (χ^w_k)^T   (χ^e_{k+1})^T ]^T.   (3.24d)

The rest of the ukf is summarized in Algorithm 5.

Algorithm 5 Unscented Kalman Filter

Consider the state space model (3.19). The unscented Kalman filter is given by
the following steps, which are iterated in the filter.
Choose sigma points according to (3.24)
Prediction
x̂_{k|k−1} = Σ_{i=0}^{2n_a} w^(i) χ^{x,(i)}_{k|k−1}   (3.25a)
P_{k|k−1} = Σ_{i=0}^{2n_a} w^(i) ( χ^{x,(i)}_{k|k−1} − x̂_{k|k−1} )( χ^{x,(i)}_{k|k−1} − x̂_{k|k−1} )^T   (3.25b)
where
χ^{x,(i)}_{k|k−1} = f_{k−1}( χ^{x,(i)}_{k−1|k−1}, u_{k−1}, χ^{w,(i)}_{k−1|k−1}, θ )   (3.25c)
Measurement Update
x̂_{k|k} = x̂_{k|k−1} + P_{xy} P^{−1}_{yy} ( y_k − ŷ_{k|k−1} )   (3.26a)
P_{k|k} = P_{k|k−1} − P_{xy} P^{−1}_{yy} P^T_{xy}   (3.26b)
where
y^(i)_{k|k−1} = h_k( χ^{x,(i)}_{k|k−1}, u_k, χ^{e,(i)}_{k|k−1}, θ )   (3.26c)
ŷ_{k|k−1} = Σ_{i=0}^{2n_a} w^(i) y^(i)_{k|k−1}   (3.26d)
P_{yy} = Σ_{i=0}^{2n_a} w^(i) ( y^(i)_{k|k−1} − ŷ_{k|k−1} )( y^(i)_{k|k−1} − ŷ_{k|k−1} )^T   (3.26e)
P_{xy} = Σ_{i=0}^{2n_a} w^(i) ( χ^{x,(i)}_{k|k−1} − x̂_{k|k−1} )( y^(i)_{k|k−1} − ŷ_{k|k−1} )^T   (3.26f)
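The sigma point selection (3.24) and the recovery of mean and covariance from the weighted points can be sketched as follows. The matrix square root is taken as a Cholesky factor B with (n_a/(1 − w^(0))) P^a = B B^T, and the numerical values of x, P and w^(0) are illustrative; as a sanity check, the identity transform must reproduce the prior moments exactly.

```python
import numpy as np

def sigma_points(x, P, w0=0.5):
    """Sigma points chi and weights w according to (3.24)."""
    n = len(x)
    B = np.linalg.cholesky(n / (1.0 - w0) * P)   # scaled square root of P
    cols = ([x] + [x + B[:, i] for i in range(n)]
                + [x - B[:, i] for i in range(n)])
    chi = np.column_stack(cols)                  # one point per column
    w = np.full(2 * n + 1, (1.0 - w0) / (2 * n))
    w[0] = w0
    return chi, w

def unscented_transform(func, chi, w):
    """Propagate the points through func and recover weighted moments,
    as in the prediction step (3.25a)-(3.25b)."""
    Y = np.column_stack([func(chi[:, i]) for i in range(chi.shape[1])])
    mean = Y @ w
    d = Y - mean[:, None]
    return mean, (w * d) @ d.T

# Illustrative prior; the identity transform must give back (x, P).
x = np.array([1.0, 2.0])
P = np.array([[0.2, 0.05], [0.05, 0.1]])
chi, w = sigma_points(x, P)
m, C = unscented_transform(lambda s: s, chi, w)
```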

An advantage of the ukf, compared to the ekf, is that the second order bias
correction term is implicitly incorporated in the mean estimate. Example 3.5
shows an important problem where the second order term should not be neglected.

3.5 Example: Tracked Radar Object


The radar target tracking problem was introduced in Example 1.2 and the model
was defined in Example 2.4. The sensor model converts the Cartesian state vari-
ables to polar measurements. This is one of the most important and commonly
used transformations for sensors measuring range and azimuth angle. Usually
the azimuth angle error of these types of sensors is significantly larger than the
range error. This also holds for the sensors used in this thesis.

Let the sensor be located at the origin and the target at (x, y) = (0, 1) in this simple
and commonly used example (Julier and Uhlmann, 2004). Measurements may be
simulated by adding Gaussian noise to the actual polar value (r, ψ) = (1, π/2)
of the target location. A plot of several hundred state estimates, produced
in a Monte Carlo simulation, forms a banana shaped arc around the true value
(x, y) = (0, 1), as shown in Figure 3.1. The azimuth error causes this band of
Cartesian points to be stretched around the circumference of a circle, with the
result that the mean of these points lies somewhat closer to the origin than the
point (0, 1). The figure clearly shows that the ut estimate (×) lies close to the
mean of the measurements (◦). Furthermore, it shows that the linearized state
estimate (+) produced by the ekf is biased and that the variance in the y
component is underestimated.

As a result of the linearization in the ekf, the second order terms are neglected,
which produces a bias error in the mean as shown in Example 3.5. In Julier and
Uhlmann (2004) it is shown how the ut in some cases calculates the projected
mean and covariance correctly to the second order terms.
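The bias discussed above is easy to reproduce numerically. The sketch below uses illustrative noise levels, a small range error and a large azimuth error as in the example; for Gaussian azimuth noise around π/2 the true mean of y = r sin ψ is E[y] = exp(−σ_ψ²/2), whereas the first order (ekf style) linearization predicts a mean exactly at (0, 1).

```python
import numpy as np

# Monte Carlo version of Example 3.5: a target at (x, y) = (0, 1) observed
# in polar coordinates (r, psi); noise levels are illustrative only.
rng = np.random.default_rng(3)
N = 100_000
r = 1.0 + 0.02 * rng.standard_normal(N)          # small range error
psi = np.pi / 2 + 0.3 * rng.standard_normal(N)   # large azimuth error
x = r * np.cos(psi)
y = r * np.sin(psi)

# The sample mean of y lies below 1, i.e. closer to the sensor: the
# second order effect that the linearization in the ekf ignores.
mc_mean = np.array([x.mean(), y.mean()])
```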

The unscented Kalman filter is applied in Papers C and D.

3.2.4 The Particle Filter


Numerical approximations of the distribution functions used in the Bayes equations
(3.13) are necessary since analytical solutions to the filtering problems do not
exist in most cases. A stochastic method called sequential Monte Carlo (smc)
or particle filter (pf), which is based on Monte Carlo integration, has become
popular during the last years. It was introduced by Gordon et al. (1993) and
the theory is well described by e.g., Doucet et al. (2001, 2000); Ristic et al. (2004);
Gustafsson (2010). The idea is to approximate the pdf with a number of samples,
called particles, {x^(i)_k}_{i=1}^N, with associated weights {w^(i)_{k|k−1}}_{i=1}^N, such that

p(x_k | y_{1:k−1}) ≈ Σ_{i=1}^N w^(i)_{k|k−1} δ(x_k − x^(i)_k).   (3.27)
The particles are selected to be independently and identically distributed samples
from a so called proposal distribution q(x_{k+1} | x^(i)_k, y_{1:k+1}) and the weights are
called importance weights. Using Monte Carlo integration the mean value of the

[Figure 3.1: (a) the full x–y view with the sensor at the origin; (b) a zoom around the target position.]

Figure 3.1: A Monte Carlo simulation of the problem in Example 3.5 is
shown in Figure (a). The sensor, for example a radar, is in the position (0, 0)
and the true position of the target is in the position (0, 1). The mean of the
measurements is at ◦ and its uncertainty ellipse is solid. The linearized
mean is at + and its ellipse is dashed. The ut mean is at × and its
uncertainty ellipse is dotted. Figure (b) is a zoom. Note that the scalings of
the x and y axes are different.

Algorithm 6 Particle Filter

1: Initiate {x^(i)_0}_{i=1}^N ∼ p_{x_0} and w^(i)_0 = 1/N for i = 1, . . . , N
2: k = 0
3: loop
4:   for i = 1 to N do
5:     Time update: generate the sample x^(i)_{k|k−1} ∼ q(x_k | x^(i)_{k−1|k−1}, y_k)
6:     Update the importance weights
         w^(i)_{k|k−1} = w^(i)_{k−1|k−1} · p(x^(i)_{k|k−1} | x^(i)_{k−1|k−1}) / q(x^(i)_{k|k−1} | x^(i)_{k−1|k−1}, y_k)
       {Note, if q(x_k | x^(i)_{k−1|k−1}, y_{1:k}) = p(x_k | x^(i)_{k−1|k−1}) this simplifies to
       w^(i)_{k|k−1} = w^(i)_{k−1|k−1}}
7:     Measurement update: w̄^(i)_{k|k} = p(y_k | x^(i)_{k|k−1}) w^(i)_{k|k−1}
8:   end for
9:   Normalize the weights w^(i)_{k|k} = w̄^(i)_{k|k} / Σ_j w̄^(j)_{k|k} for i = 1, . . . , N
10:  Resample
11: end loop

distribution can easily be calculated according to

x̂_{k|k−1} = E(x_k) ≈ Σ_{i=1}^N w^(i)_{k|k−1} x^(i)_k.   (3.28)
The pf is described in Algorithm 6.
The resampling in Line 10 rejuvenates the particles used to represent the pdf.
There exist different methods to resample, but they will not be discussed further
here; the reader is instead referred to Hendeby (2008); Schön (2006).
The pf is used in Paper C to track extended targets. In Paper G, the pf is used
since the mean and covariance of the process noise are unknown and must be
estimated together with the states. In that case the particles are drawn from a
Student-t distribution.
4 Target Tracking
The process of estimating over time the location and characteristics of one or
more objects of interest, denoted targets, using one or several sensors is referred
to as target tracking. The aim of the tracking algorithm is to detect the true tar-
gets, collect observations that originate from them and to estimate quantities of
interest, such as target position, velocity and other object characteristics. Typical
sensors for these applications, such as radar, laser and vision, report noisy mea-
surements. Besides the fact that the measurements are noisy another difficulty
for the algorithm is that the sensor measurements not only originate from the
targets, but also from clutter or spurious detections. The target tracking area is
well covered in the literature and recommended books are Blackman and Popoli
(1999); Bar-Shalom and Fortmann (1988); Bar-Shalom et al. (2001); Ristic et al.
(2004); Liggins et al. (2009).

This chapter begins by putting the filter, introduced in the last chapter about
estimation theory, into the target tracking framework. In Section 4.1 the focus
is on the filter and therefore only the single target case is considered. An
automotive target tracking system must obviously be able to handle multiple
targets, because potentially more than one vehicle surrounds the ego vehicle.
Therefore the chapter continues by discussing the extension to multi target
tracking in Section 4.2. When reading the first two sections, compare with the
block diagram of the target tracking framework illustrated in the introduction in
Figure 1.4. Finally, in Section 4.3 the system is further extended to also be able to
track the shape and size of the targets. This research area is called extended
target tracking.


[Figure 4.1: measurements z_k and z_{k+1} in the observation space, above the target motion x_k → x_{k+1} in the state space.]

Figure 4.1: Single target tracking problem. The upper layer is the observation
space with the measurements and the lower layer is the state space with
the state variables.

4.1 Single Target Tracking

The complete information required to describe the system at time k is summarized
in the state vector x_k ∈ R^{n_x}. At each time step the tracking algorithm is
supplied with a measurement vector z_k ∈ R^{n_z}. The aim of the filter, as part of a
target tracking algorithm, is to find the posterior density p(x_k | z_{1:k}), given a
sequence of noisy measurement vectors up to and including time k, collected in

z_{1:k} = {z_1, z_2, . . . , z_k}.   (4.1)
From the posterior it is possible to calculate an estimate x̂_k of the state x_k.
Typically, the targets or the sensors are moving and the system changes over time.
The motion of the targets in time is modeled with a Markov density, given in (2.27),
and the pdf is specified with a motion model (2.1a). The measurement z_k received
by the sensor is modeled by a likelihood function (2.29), which is specified by a
measurement model (2.1b). Both the process and measurement noise variables
are assumed to be white and independent. Figure 4.1 illustrates the filtering
problem. The upper layer represents the observation space, where one measurement
z_k is observed at each time step k and another measurement is observed at time
step k + 1. The lower layer represents the state space, with the values of the state
vector x_k at each time step.

The single target Bayes filter recursion consists of two steps, the prediction step
and the correction step. Since all variables are assumed to be unknown stochastic
variables in the Bayesian framework it is necessary to describe a priori information
about the state in the form of a prior density p_0(x_0). The recursion was
illustrated with a flow chart in Section 3.2. A special case is the Kalman filter,
where the densities are assumed to be Gaussian.

4.2 Extension to Multitarget Tracking


In this chapter until now only single targets have been considered. This scenario
is quite unusual; in most target tracking applications more than one target may
appear and must be handled by the target tracking system. A multi target tracking
system must not only provide estimates of the state variables

{ x^(1)_k, x^(2)_k, . . . , x^(N_x)_k },   (4.2)

it must also estimate the number of targets N_x, and find likely associations of the
measurements to the tracks. The latter uncertainty is known as the data association
problem.
A typical multi target system is shown in Figure 4.2, where several targets exist
at each time step. Aspects to consider in a multitarget tracking filter are listed
below:
• The number of targets N_x changes over time; for instance, there are probably
more vehicles on the roads during rush hour than at night.
• New targets appear and old targets disappear, as they enter or leave the
field of view of the sensor. Compare with Figure 4.2, where target x(4) ap-
pears at time k + 1.
• The system fails to detect targets because they are temporarily occluded or
because the sensor data is very noisy.
• The sensors receive a set of spurious measurements, also denoted clutter,
stemming from false reflections or from other types of objects in the field
of view, etc. In Figure 4.2 there are more observations than targets at each
time step.
In this section the data association problem is treated first, in Section 4.2.1, and
thereafter an overview of the track management, which takes care of the problems
in the list above, is given in Section 4.2.2.

4.2.1 Data Association


This section would not be needed if only the state variables of the ego vehicle,
introduced in Example 1.1, were estimated, because in that case it is obvious how
the measurements are associated with the state variables. In the object tracking
problem, introduced in Example 1.2, it is no longer obvious which measurement
should update which track. There are many methods available for finding likely
measurement-to-track associations, i.e., for solving the data association problem,
see e.g., Bar-Shalom and Fortmann (1988); Blackman and Popoli (1999). However,
the task is seldom easy, due to noisy measurements, multiple reflections on each
target and erroneous detections caused by spurious reflections.

The first step in the data association process is called gating. Gates are constructed
around the predicted measurement ẑ^(i)_{k|k−1} of each track i to eliminate

[Figure 4.2: targets x^(1), . . . , x^(4) in the state space and their measurements z^(1), . . . , z^(6) in the observation space at times k and k + 1.]

Figure 4.2: Multi target tracking system; several state vectors are present at
each time step, producing more than one observation. Note that there are
more observations than targets. A new target x^(4) appears at time k + 1.

unlikely pairings and thereby limit the number of measurement-to-track associations.
This reduces the number of measurements that are examined by the data
association algorithm and reduces the computational load. The residual, which is
also called innovation, between a measurement z^(j)_k and a predicted measurement
ẑ^(i)_{k|k−1} is

z̃^(i,j)_{k|k−1} = z^(j)_k − ẑ^(i)_{k|k−1},   (4.3)

and it is assumed Gaussian distributed according to

z̃^(i,j)_{k|k−1} ∼ N(0, S^(i)_k),   (4.4)

where S^(i)_k is the innovation covariance. Further, under this assumption, the
statistical distance between the measurement and the predicted measurement,
given by the norm of the residuals according to

d^2_{i,j} = (z̃^(i,j)_{k|k−1})^T (S^(i)_k)^{−1} z̃^(i,j)_{k|k−1},   (4.5)

is a χ²_{n_z} random variable. The elliptical gate G_i is defined as the region

G_i ≜ { z : (z̃^(i,j)_{k|k−1})^T (S^(i)_k)^{−1} z̃^(i,j)_{k|k−1} ≤ γ_G },   (4.6)

where γ_G is the gating threshold or gate size. Given a certain probability that a
true measurement produced by a target i will fall inside its gate, and that the
assumption d^2_{i,j} ∼ χ²_{n_z} holds, a suitable gate threshold can be determined. The
measurements z^(j)_k ∈ G_i are considered as candidates for updating the track x^(i)_k in
the data association algorithm. In other words the gating is a hard decision about
which measurements are feasible measurements for a target.
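The gate test (4.5)-(4.6) can be sketched as follows. For a two-dimensional measurement the squared distance is χ² distributed with two degrees of freedom, whose cdf is 1 − e^{−x/2}, so the 99 % gate is γ_G = −2 ln 0.01 ≈ 9.21. The predicted measurement, innovation covariance and measurement set below are hypothetical.

```python
import numpy as np

# Elliptical gating: keep the measurements whose squared statistical
# (Mahalanobis) distance to the predicted measurement is below gamma_G.
def gate(z_pred, S, Z, gamma_G=9.21):
    """Return the squared distances d^2 and the in-gate decisions."""
    Si = np.linalg.inv(S)
    d2 = np.array([(z - z_pred) @ Si @ (z - z_pred) for z in Z])
    return d2, d2 <= gamma_G

z_pred = np.array([10.0, 0.0])    # predicted measurement (range, azimuth)
S = np.diag([1.0, 0.04])          # innovation covariance
Z = np.array([[10.5, 0.10],       # consistent with the track
              [10.2, 0.80],       # small Euclidean error, 4 sigma azimuth
              [14.0, 0.00]])      # range outlier
d2, inside = gate(z_pred, S, Z)
```

Note that the second measurement is closer than the third in the Euclidean sense but is still rejected; the statistical distance weighs each component with its innovation covariance.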

Now, different conflicts occur. There are several measurements falling within the
same gate and there are also measurements falling within more than one gate.
same gate and there are also measurements falling within more than one gate.
There exist many techniques to solve these conflicts, which are considered to be
the main part of the data association process. The simplest association algorithm
is called nearest neighbor (nn). This approach searches for a unique pairing,
(i) (j)
i.e., one track xk is only updated by at most one observation zk . There are
some possibilities to decide which measurement actually is the nearest. Common
(i,j)
approaches are to choose the measurement with the smallest error z̃k|k−1 or the
(i,j)
smallest statistical distance d 2 (z̃k|k−1 ), defined in (4.5), which is also known as the
Mahalanobis distance, see e.g., Bar-Shalom et al. (2001). The association distance
for each individual track is locally minimized separately with the nn approach,
which could lead to that two tracks are associated with the same measurement.
To avoid this problem it would be more beneficial to find the global minimum
distance considering all tracks simultaneously, and restrict that a measurement
can only be associated with one track. This approach is referred to as global
nearest neighbor (gnn), and its aim is to solve
min_λ Σ_{i=1}^{N_k} d^2_{i,λ_i},   (4.7)

where λ_i ∈ {0, 1, . . . , N} indicates which measurement has been assigned to track
i. Note that the value λ_i = 0 means that track i has not been associated with any
measurement. The so called auction algorithm is the most well known method
to solve this problem, see Bertsekas (1990). Example 4.1 illustrates gating and a
comparison between nn and gnn.
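For small problems (4.7) can be solved by brute-force enumeration of all one-to-one pairings, as sketched below. The distance matrix is hypothetical and the miss option λ_i = 0 is omitted for brevity; the auction algorithm solves the same assignment problem efficiently for larger sets.

```python
import numpy as np
from itertools import permutations

# Brute-force GNN: minimize the total squared statistical distance over
# all assignments of one measurement per track, no measurement shared.
def gnn(D):
    """D[i, j] = d^2 between track i and measurement j."""
    n_trk, n_meas = D.shape
    best, best_cost = None, np.inf
    for perm in permutations(range(n_meas), n_trk):
        cost = sum(D[i, j] for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

# Measurement 0 is nearest for both tracks, so nn would pick it twice;
# gnn resolves the conflict by minimizing the global cost instead.
D = np.array([[1.0, 4.0, 9.0],
              [2.0, 8.0, 3.0]])
assignment, cost = gnn(D)
```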

4.1 Example: Gating and Nearest Neighbor Data Association


Consider the target tracking example introduced in Example 1.2. Assume that
two vehicles are currently tracked. Their predicted measurements ẑ^(1)_{k|k−1} and
ẑ^(2)_{k|k−1} are shown in blue and green in Figure 4.3. Six measurements z^(j)_k, j =
1, . . . , 6, are collected at time k and shown as red stars in the figure. The
measurements 2, 4 and 6 fall outside the gates, which are illustrated with ellipses.
Measurements 1 and 3 may be obtained from target 1 since they are both in its
gate. However, measurement 3 is also in the gate of target 2. The nn data
association considers each track individually, and since measurement 3 is the
closest measurement of both targets it is used to update both targets, as shown
in Figure 4.3a. The gnn data association avoids this problem and considers all
tracks simultaneously. The result is shown in Figure 4.3b, where target 2 is
associated with measurement 5 instead.

There exists a further group of association methods, which use all measurements
that fall inside the gate. In these methods each measurement j is weighted in
accordance with the probability that it originates from track i. The two most
well known algorithms are probabilistic data association (pda) (Bar-Shalom and
Tse, 1975) and joint probabilistic data association (jpda) (Fortmann et al., 1983),

[Figure 4.3: panels (a) nn and (b) gnn, each showing the predicted measurements ẑ^(1)_{k|k−1}, ẑ^(2)_{k|k−1} with their gates and the measurements z^(1)_k, . . . , z^(6)_k.]

Figure 4.3: An example with two targets (blue and green) and six
measurements (in red). The ellipses are the gates and the two figures show
the difference between the association methods nn and gnn.

where the difference is that pda considers each track separately, whereas
jpda forms global hypotheses to calculate the probabilities. Note that the number
of targets is assumed fixed in the above mentioned association methods. These
methods are not discussed further in this thesis, hence a detailed description is
omitted here.

4.2.2 Track Management


The main task of the track management is to decide how many true targets are
observed. Only tracks of sufficient quality should be considered as valid tracks
and used in the data association algorithms described above. According to their
different life stages, tracks can be classified into three cases.
Tentative track: A track that is in the track initiation process. There is not yet
sufficient evidence to decide whether it corresponds to an actual target. A
tentative track is started as soon as it is likely that a measurement originates
from a new target. The tentative tracks can be associated to further measurements.
Confirmed track: A track that has been decided to belong to a valid target in the
surveillance area. If several sequentially arriving observations indicate that
the tentative track resembles a true target rather than noise or clutter, it is
promoted to a confirmed track.
Deleted track: At the other end of the initiation process, this is a track that is
decided to stem from a random false alarm. It can also be a track that is
no longer visible to any sensor and whose uncertainty has increased above a
given threshold. All of its information should be deleted.
When starting a new track some initial guess about the properties of the track
must be made. Put into a Bayesian wording, a prior density p(x_0), that contains
the information about the state before any observation is made, must be designed.
This can sometimes be a simple task, for instance if targets are expected to appear
from a certain place, but in most cases it is not easy to know beforehand.
A practical and straightforward method is to use the first measurement as the
mean value of the prior distribution. Other statistics of the prior distribution
are still considered as design variables. A convenient choice in the Kalman filter,
where the prior distribution is Gaussian, is x_0 ∼ N(0, P_0). If P_0 is chosen very
large in comparison with the measurement uncertainty, the first updated state
estimate will largely be based on the associated measurement.
In order to know if the first measurement stems from a true target or from clutter,
the track management system must observe the tentative track for a number
of samples. There exist different methods to decide if a tentative track shall be
transformed into a confirmed track. One of the simplest and most well known
is the so called M/N logic, where a track is confirmed if M measurements out of
N possible are associated with the track. Other common methods are based on
scoring measurements, with for example the sequential probability ratio test (sprt),
which was first proposed by Wald (1945). The method is based on comparing
the ratio between the likelihood of the hypothesis that a track describes a true
target and the likelihood of the hypothesis that it is a false alarm. Furthermore,
there exist combined validation and association methods. Integrated probabilistic
data association (ipda) expresses both the probability of target existence and
data association based on pda.
The track deletion methods are similar to the validation methods, both the M/N
logic and the score based approaches, mentioned above, can be applied. More
information and details about track management can be found in various books
about target tracking, see e.g., Blackman and Popoli (1999).
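A minimal sketch of the M/N confirmation logic combined with a miss-based deletion rule is given below; the class name, window lengths and thresholds are illustrative design choices, not values from the thesis.

```python
from collections import deque

class MNLogic:
    """Confirm a tentative track after M hits in the last N scans; delete it
    after a run of consecutive misses (illustrative thresholds)."""

    def __init__(self, m_confirm=3, n_window=5, n_miss_delete=4):
        self.hits = deque(maxlen=n_window)   # 1 = associated, 0 = missed
        self.m_confirm = m_confirm
        self.n_miss_delete = n_miss_delete
        self.confirmed = False
        self.misses_in_a_row = 0

    def update(self, associated):
        self.hits.append(1 if associated else 0)
        if sum(self.hits) >= self.m_confirm:
            self.confirmed = True
        self.misses_in_a_row = 0 if associated else self.misses_in_a_row + 1
        if self.misses_in_a_row >= self.n_miss_delete:
            return "delete"
        return "confirmed" if self.confirmed else "tentative"

track = MNLogic()
for associated in [True, False, True, True]:   # 3 hits out of 4 scans
    state = track.update(associated)           # ends up "confirmed"
```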

4.3 Extended Target Tracking


In classical target tracking problems the objects are modeled as point sources and
it is assumed that only one measurement is received from each target at each time
step. In automotive applications, the targets are at a close distance and of such
a large size that individual features can be resolved by the sensor. A target is
denoted extended whenever the target extent is larger than the sensor resolution,
and it is large enough to occupy multiple resolution cells of the sensor. Put in other words, whether a target should be classified as extended does not only depend on its physical size, but rather on the physical size relative to the sensor resolution.
The relevant target characteristics that are to be estimated form the target’s state
vector x. Generally, besides kinematic variables such as position, velocity and orientation, the state vector may also contain information about the target's spatial extension. However, even when the target's state does not contain any variables related to the target extent, i.e., when the estimation is done as if the target were a point, the algorithms should still take care of the multiple measurements that
originate from a target. In Paper E a generalized definition of an extended tar-
get is presented, and it is repeated below. This definition does not depend on whether the target extent is estimated or not.

4.2 Definition (Extended Target). A target which potentially gives rise to more
than one measurement per time step irrespective of whether the target’s extent is
explicitly modeled (and/or estimated) or not is referred to as an extended target.

The methods used to track extended targets are very similar to the ones used for
tracking a group of targets moving in formation. Extended target tracking and
group tracking are thoroughly described in e.g., Ristic et al. (2004). The bibli-
ography of Waxman and Drummond (2004) provides a comprehensive overview
of existing literature in the area of group and cluster tracking. There exist some
different approaches to represent, i.e., to model, the extended targets, of which
five methods are described in this section.

4.3.1 Point Features


The first and most traditional method is to model the target as a set of point fea-
tures in a target reference frame, each of which may contribute at most one sensor
measurement. The exact location of a feature in the target reference frame is of-
ten assumed uncertain. However, if the appearance of the target is known and
especially if typical radar reflection points are known, then the location of the
features in the target reference frame can be assumed known. The motion of an
extended target is modeled through the process model in terms of the translation
and rotation of the target reference frame relative to a world coordinate frame,
see e.g., Dezert (1998).
For an application in two dimensions, the point features are defined as

P^T = \{p^{(i)}\}_{i=1}^{N_p} \quad \text{with} \quad p^{(i)} = \begin{bmatrix} x^{(i)} & y^{(i)} \end{bmatrix}^T,   (4.8)

and these are usually expressed in a target fixed coordinate frame T. The position d_{TW}^W = [x_{TW}^W \; y_{TW}^W]^T of the target's origin and the orientation ψ_T of the target's frame are tracked relative to the world coordinate frame. The state vector may be defined as

x = \begin{bmatrix} (d_{TW}^W)^T & \psi_T & P^T \end{bmatrix}^T.   (4.9)
The point features in the target’s coordinate frame can be mapped into a point in
the world frame through the transform
p^{(i),W} = R^{WT} p^{(i),T} + d_{TW}^W.   (4.10)
The equation above constitutes the measurement model if the point measure-
ments are expressed in the world coordinate frame W. The rotation matrix is
given by
" #
WT cos ψT − sin ψT
R = . (4.11)
sin ψT cos ψT
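A small numerical sketch of the transform (4.10)-(4.11); the target pose and the feature location are illustrative values.

```python
import numpy as np

def rotation(psi):
    """Rotation matrix R^{WT} from (4.11)."""
    return np.array([[np.cos(psi), -np.sin(psi)],
                     [np.sin(psi),  np.cos(psi)]])

d_TW = np.array([10.0, 5.0])   # d^W_{TW}: target origin in the world frame
psi_T = np.pi / 2              # orientation of the target frame
p_T = np.array([2.0, 0.0])     # point feature in the target frame

p_W = rotation(psi_T) @ p_T + d_TW   # the mapping (4.10); gives (10, 7)
```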

The uncertainty about the exact position of the point feature is modeled according to

p(P^W | d_{TW}^W, \psi_T) = \prod_{i=1}^{N_p} \mathcal{N}\left(p^{(i),W} \mid R^{WT}(\psi_T)\, p^{(i),T} + d_{TW}^W,\; w_p I_2\right),   (4.12)
which means that the uncertainty is assumed isotropic around the mean location
of the point and with known variance wp .
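A sketch of evaluating (4.12) for a set of already associated feature pairs; the helper name and all numbers are illustrative.

```python
import numpy as np

def point_set_likelihood(points_W, points_T, d_TW, psi_T, w_p):
    """Product of isotropic Gaussians N(p^(i),W | R^{WT} p^(i),T + d^W_{TW}, w_p I_2),
    i.e., the likelihood (4.12), assuming the world-frame measurements in
    points_W are already paired row-wise with the features in points_T."""
    R = np.array([[np.cos(psi_T), -np.sin(psi_T)],
                  [np.sin(psi_T),  np.cos(psi_T)]])
    predicted = points_T @ R.T + d_TW        # feature means in the world frame
    d2 = np.sum((points_W - predicted) ** 2, axis=1)
    return np.prod(np.exp(-d2 / (2 * w_p)) / (2 * np.pi * w_p))

p_T = np.array([[1.0, 0.0], [0.0, 1.0]])   # two features in the target frame
d = np.array([5.0, 5.0])                   # target translation, no rotation
lik_exact = point_set_likelihood(p_T + d, p_T, d, 0.0, w_p=1.0)
lik_off = point_set_likelihood(p_T + d + 0.5, p_T, d, 0.0, w_p=1.0)
# A perfect match attains the maximum (2 pi w_p)^(-Np); offsets lower it.
```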
At each time step a set of N_z measurements Z = \{z_i\}_{i=1}^{N_z} is received and has to be associated to the states. Not all measurements arise from a point feature; some are
due to false detections (clutter). The association hypotheses are derived through
some data association algorithm. In Vermaak et al. (2005) a method is proposed
where the association hypotheses are included in the state vector and the output
of the tracking filter is a joint posterior density function of the state vector and the
association hypotheses. Furthermore, a multi-hypothesis likelihood is obtained
by marginalizing over all the association hypotheses. An alternative solution is
also proposed using a particle filter, where the unknown hypotheses are sampled
from a well designed proposal density function.
An automotive radar sensor model developed for simulation purposes is pro-
posed in Bühren and Yang (2006), where it is assumed that radar sensors often
receive measurements from specific reflection centers on a vehicle. These reflec-
tion centers can be tracked in a filter and valuable information regarding the
vehicle’s orientation can be extracted as shown by Gunnarsson et al. (2007). A
difficulty in solving the data association problem is the large number of associ-
ation hypotheses available. To reduce the complexity Gunnarsson et al. (2007)
proposes an approach where detections are associated with reflector groups. The
spatial Poisson distribution, discussed in the subsequent section, is considered to
be inappropriate, since the number of vehicle detections is assumed essentially
known and not adequately modeled by a Poisson process.
In paper F the point features are denoted measurement generating points (mgp),
and they are positioned on the surface of the target. In the referred publication, the mgps are not considered to have a fixed position on the surface, but they are
instead defined as a random finite set of points on the one-dimensional surface of
the target. The positions of the points on the surface are estimated in the target
tracking filter, but they are of less importance, since they only serve as a means
to estimate the position, shape and size of the entire target.

4.3.2 Spatial Distribution


Instead of modeling the target as a number of point features, which are assumed
to be explicit measurement sources, the target may also be represented by a spa-
tial probability distribution. It is more likely that a measurement comes from a
region of high spatial density than from a sparse region. In Gilholm and Salmond
(2005); Gilholm et al. (2005) it is assumed that the number of received target and
clutter measurements are Poisson distributed, hence several measurements may
originate from the same target. Each target related measurement is an independent sample from the spatial distribution. The spatial model could be a bounded
distribution, such as a uniform pdf or an unbounded distribution, such as a Gaus-
sian. The Poisson assumption allows the problem, or more specifically the evalu-
ation of the likelihood, to be solved without association hypotheses. The spatial
distribution is preferable where the point source models are poor representations
of reality, that is in cases where the measurement generation is diffuse.
In Gilholm and Salmond (2005) two simple examples are given. One where the
principle axis of the extended target is aligned with the velocity vector, i.e., the
target is represented by a one dimensional uniform stick model. In the other
example, a Gaussian mixture model is assumed for the target. A Kalman filter
implementation with explicit constructions of assignment hypotheses is derived
from the likelihood in Gilholm and Salmond (2005), whereas in Gilholm et al.
(2005), a particle filter is applied directly given the likelihood which is repre-
sented by the Poisson spatial model of the stick. Hence, the need to construct
explicit measurement-target assignment hypotheses is avoided in Gilholm et al.
(2005). Swain and Clark (2010) instead proposes a standard measurement model and represents the extended targets as a spatial cluster process.
Boers et al. (2006) presents a similar approach, but since raw data is considered,
no data association hypotheses are needed. The method to use raw data, i.e., con-
sider the measurements without applying a threshold, is referred to as track be-
fore detect. A one dimensional stick target is assumed also by Boers et al. (2006),
but unlike Gilholm and Salmond (2005), the target extent is assumed unknown.
The state vector is given by the stick’s center position and velocity as well as the
stick’s extension according to
x = \begin{bmatrix} x & y & \dot{x} & \dot{y} & l \end{bmatrix}^T.   (4.13)
The process model is a simple constant velocity model and the length l is modeled
as a random walk. The likelihood function is given by the probability distribution
p(z|x) = \int p(z|\tilde{x})\, p(\tilde{x}|x)\, d\tilde{x},   (4.14)

where the spatial extension is modeled by the pdf p(x̃|x) and x̃ is assumed to be
a point source from an extended target with center given by the state vector x.
Hence, a measurement is received from a source x̃ with likelihood p(z|x̃).
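The marginalization in (4.14) can be approximated with Monte Carlo sampling. The sketch below assumes a uniform stick aligned with the velocity vector and an isotropic Gaussian sensor likelihood p(z|x̃); all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_likelihood(z, center, velocity, length, sigma=0.5, n_samples=2000):
    """Monte Carlo approximation of p(z|x) = int p(z|x_tilde) p(x_tilde|x) dx_tilde
    for a one-dimensional uniform stick aligned with the velocity vector."""
    direction = velocity / np.linalg.norm(velocity)
    offsets = rng.uniform(-length / 2, length / 2, size=n_samples)
    sources = center + offsets[:, None] * direction   # samples of x_tilde
    d2 = np.sum((z - sources) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2))

center = np.array([0.0, 0.0])
velocity = np.array([1.0, 0.0])
length = 4.0
on_stick = stick_likelihood(np.array([1.0, 0.0]), center, velocity, length)
off_stick = stick_likelihood(np.array([1.0, 3.0]), center, velocity, length)
# A measurement on the stick is far more likely than one well off it.
```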

4.3.3 Elliptical Shaped Target


In many papers dealing with the shape of a target it is assumed that the sensor,
e.g., radar, is also able to measure one or more dimensions of the target’s extent. A
high-resolution radar sensor may provide measurements of a target's down-range
extent, i.e., the extension of the objects along the line-of-sight. The information
of the target’s extent is incorporated in the tracking filter and aids the tracking
process to maintain track on the target when it is close to other objects.
An elliptical target model, to represent an extended target or a group of targets,
is proposed in Drummond et al. (1990). The idea was improved by Salmond and Parr (2003), where the sensor not only provides measurements of point observa-
tions, but rather range, bearing and down-range extent. The prime motivation of
the study is to aid track retention for closely spaced moving targets. Furthermore,
the state vector includes the position, velocity and the size of the ellipse. An ekf
is used in Salmond and Parr (2003), but it is concluded that the filter may di-
verge under certain conditions, since the relation between the down-range extent
measurement of the target and the position and velocity coordinates in the state
vector is highly nonlinear. The same problem is studied in Ristic and Salmond
(2004), where a ukf is implemented and tested. Even though the ukf shows bet-
ter performance it is concluded that neither the ekf nor the ukf are suitable for
this problem. The problem is further studied by Angelova and Mihaylova (2008),
where other filter techniques, based on Monte Carlo algorithms, are proposed. In
this paper the size of the ellipse takes values from a set of standard values, i.e.,
the algorithm estimates the type of object from a list, under the assumption that
typical target sizes are known.

A group of objects moving collectively may also be modeled as an extended target.


The ellipse model is used to model a formation of aircraft in Koch (2008). The
object extension is represented by a symmetric positive definite random matrix,
however, the measurement error is not considered in this publication. Improvements of this approach, including the consideration of sensor error, have been published by Feldmann et al. (2011). An alternative measurement model and an extension using principal components are proposed by Degerman et al. (2011).
Wieneke and Davey (2011) show how the random matrix approach can be used
to track multiple extended targets directly from image data.

The concept of random hypersurface model, introduced by Baum and Hanebeck


(2009), assumes that each measurement source is an element of a randomly gen-
erated hypersurface. In this publication an elliptical target shape is used to exem-
plify the approach. Improvements of the approach have been published in Baum et al. (2010), and a generalization which allows for tracking detailed shapes, based on star-convex shaped extended targets, is presented in Baum and Hanebeck (2011).

4.3.4 Curved Target


In Paper C the road borders are modeled as extended targets in the form of curved
lines. A curved line is expressed as a third order polynomial in its coordinate
frame. Since the road borders are assumed to be stationary, the frames are not in-
cluded in the state vector. Furthermore, stationary points such as delineators and
lamp posts are also modeled in Paper C. The nearest neighbor algorithm is used
to associate measurements from stationary observations z(m) to the targets. Here
it is assumed that an extended line target L(j) can give rise to several measure-
ments, but a point target P(i) can only contribute to one measurement. Since the
likelihood of a line `m,j is a one dimensional spatial density function, but the like-
lihood of a point `m,i is given by a two dimensional density function, a likelihood
ratio test is applied to determine the measurement-to-track association problem.

The likelihood ratio for a measurement z^{(m)} is given by

\Lambda(z^{(m)}) \triangleq \frac{\sqrt{\ell_{m,i}}}{\ell_{m,j}},   (4.15)

where the square root of the point likelihood is taken for unit matching, since the unit of the point likelihood is (distance)^{-2} and the unit of the line likelihood is (distance)^{-1}. The corresponding likelihood ratio test is

\Lambda(z^{(m)}) \overset{H_0}{\underset{H_1}{\gtrless}} \eta,   (4.16)

where H_0 and H_1 correspond to the hypotheses that the measurement z^{(m)} is associated to the point P_i and to the line L_j, respectively. The threshold η is selected experimentally. More theory about likelihood ratio tests is given by e.g., van Trees (1968).
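A minimal sketch of the test (4.15)-(4.16); the likelihood values, the threshold and the helper name are illustrative.

```python
import numpy as np

def associate(l_point, l_line, eta=1.0):
    """Likelihood ratio test (4.15)-(4.16): the square root of the point
    likelihood matches the units, since a point likelihood has unit
    (distance)^-2 and a line likelihood (distance)^-1."""
    ratio = np.sqrt(l_point) / l_line            # Lambda(z^(m)) in (4.15)
    return "point" if ratio > eta else "line"    # H0: point, H1: line

decision1 = associate(l_point=0.25, l_line=0.1)      # ratio 5.0 -> "point"
decision2 = associate(l_point=1e-4, l_line=0.2)      # ratio 0.05 -> "line"
```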

4.3.5 Extended Target Tracking and PHD filter


In the recent work Mahler (2009a) an extension of the probability hypothesis density (phd) filter to also handle extended targets of the type presented in Gilholm et al. (2005) is given. The phd filter is the topic of the next chapter and therefore it will only be briefly summarized here. In Paper E a Gaussian-mixture implementation of the phd-filter for extended targets, called the extended target gm-phd-filter (et-gm-phd), is presented. It is shown that this filter works well in most situations, but that it is sensitive in estimating the number of targets in a few situations, e.g., when occlusion occurs. Therefore, a generalization of Mahler's work has been made to derive
the cardinalized phd (cphd) filter for extended targets, presented by Orguner
et al. (2011). In addition to the derivation, a Gaussian mixture implementation
for the derived cphd filter is presented. This filter has less sensitive estimates
of the number of targets. Early results on laser data are shown which illustrate the robust characteristics of the cphd filter compared to its phd version.
Initial steps have also been taken towards including estimation of target extent in
the gm-phd-filter, see Granström et al. (2011b). The state vector is augmented
with a discrete state describing the type of object e.g., rectangle, ellipse etc. In Pa-
per F a hybrid state space is introduced, where mgps and the measurements are
modeled by random finite sets and target states by random vectors. For each re-
alization of the state vector, a phd filter is utilized for estimating the conditional
set of mgps given the target states.
5 Probability Hypothesis Density Filter and Its Implementation

In the previous chapter it was shown how single target tracking filters can be
extended and used to handle multiple targets by enclosing the filters with a tar-
get management framework. In this way measurements are associated to targets,
which are tracked as isolated entities. To be able to perform the data association
it is assumed that the number of present targets is known. This is rarely true, and therefore a track management logic estimates the number of valid and confirmed
tracks. These methods explicitly separate the estimation of the number of targets
and estimation of target states, which is suboptimal. Random finite set (rfs)
formalism introduces a more rigorous approach to the multi target tracking prob-
lem. A set contains the random number of single target tracks, of which each is a
random vector. The state vectors are elements of the rfs and their change in time
is described with a motion model. The overall tracking problem is to compute
the posterior density of the set-valued quantity, which also enables the approach
to describe the uncertainty in the number of objects. The ordering of each single
target state is not of importance in the set notation; the estimation problem is reduced to capturing the most essential part of the multi target tracking problem: estimating the number of objects and their individual states.
This chapter is split into three sections, which take the estimation problem from
a pure theoretical view into a practical implementation. The chapter begins in
Section 5.1 with an introduction to the Bayes formulation of the rfs filter, which
propagates the complete density of the rfs. This approach is not practically im-
plementable, and therefore a solution is to only propagate the first order mo-
ment, called the probability hypothesis density (phd), as described in Section 5.2.
There exist a few practical representations of the phd, of which the Gaussian mix-
ture has turned out to be the most well-used during the last years. The Gaussian
mixture phd filter (gm-phd) is the topic of Section 5.3. This chapter aims at sum-

57
58 5 PHD Filter and Its Implementation

observation space

Zk+1
Zk

state space

meta target motion


Xk+1
Xk

Figure 5.1: Illustration of the rfs of states and measurements at time k and
k + 1. Note that this is the same setup as previously shown for the standard
multitarget case in Figure 4.2.

marizing the basic ideas and implementation methods, more details can be found
in the textbook by Mahler (2007a), and other overviews are given by Mahler
(2009b); Challa et al. (2011). Point process theory is described by e.g., Daley
and Vere-Jones (2003); Streit (2010).

5.1 Introduction to Finite Set Statistics


The rfs estimation problem is in general solved in the same way as the single
target tracking problem is approached, i.e., using Bayes’ theorem. However, with
the difference that the density is defined on a set rather than on a vector. Finite
set statistics (fisst), introduced by Goodman et al. (1997); Mahler (2004), is a
multisource-multitarget differential and integral calculus based on the so called
belief-mass function, and the fact that it is the multisensor-multitarget counterpart of the probability-mass function for a single target.

The conceptual idea of fisst is to redefine the target set as a single target, a so
called meta target, with multitarget state
X = {x(1) , x(2) , . . . , x(Nx ) }, (5.1)
a so called meta state; and similarly to redefine the observation set
Z = {z(1) , z(2) , . . . , z(Nz ) }, (5.2)
as a single measurement, a meta measurement, of the observed meta targets. The
concept is illustrated in Figure 5.1, where meta targets are shown in the state
space in the lower layer and meta observations in the observation space in the
upper layer. Furthermore, the multitarget multisensor data can be modeled using
a multisensor-multitarget measurement model
Zk = H(Xk ) ∪ Ck , (5.3)

where H(X_k) is the rfs of measurements originating from the true targets and C_k is the clutter rfs. To see the similarity, compare this multitarget model with the
single target measurement model (2.1a). The motion of the meta target can be
modeled using a multitarget motion model
Xk+1 = F(Xk ) ∪ Bk (5.4)
where F(·) models the change of a rfs from time k to k + 1, and B_k is the rfs of born targets. Compare, also here, with the single target motion model (2.1b).
Given this the multisensor, multitarget estimation problem can be reformulated
into a single-sensor, single-target problem. The rfs is formally defined in Sec-
tion 5.1.1, followed by a definition of the belief-mass function and multitarget
density function in Section 5.1.2. The multisource multitarget Bayes filter is de-
scribed in Section 5.1.3.

5.1.1 Random Finite Set


Before continuing with the description of fisst and the phd, a formal definition of a rfs is given below.
5.1 Definition (random finite set). A random finite set (rfs) Ψ is a random
variable that has realizations Ψ = Y ∈ Y where the hyperspace Y is the set of all
finite subsets of some underlying space Y0 .

Since the definition may seem a bit abstract it is illustrated with a simple example,
related to target tracking.
5.2 Example: Target Tracking
Consider the target tracking example, which was introduced in Example 1.2. In
the example the state vector represents the Cartesian position of a target, i.e.,
nx = 2. The state space is defined as an Euclidean vector space, i.e., x ∈ R2 . This
state space is the underlying space Y0 in the definition above. The hyperspace
Y includes all finite subsets of Y0 = Rnx . Let the state variable vectors be x(i) ∈
Y0 = R2 for i = 1, . . . , ∞, then some realizations X of the random set X can be
X = ∅ (no targets), X = {x(1) , x(2) } (two targets with states x(1) and x(2) ), or X =
{x(1) , . . . , x(Nx ) } (Nx targets with states x(1) , . . . , x(Nx ) ).

5.1.2 Belief-Mass and Multitarget Density Function


The reformulation of the multisensor multitarget estimation problem as a single-
metasensor, single-metatarget problem is based on the concept of belief-mass.
Belief-mass functions are nonadditive generalizations of probability-mass func-
tion. In order to let the content be more easily digested, the multitarget notation
is accompanied with the single target analogous notation.
We start with some simple notation taken from single target tracking (cf. Section 4.1). For a random vector y the probability mass function p_y(S) gives the probability of y being in some region S ⊆ Y_0, i.e.,

p_y(S) = \Pr(y \in S).   (5.5)

The probability density function p_y(y) describes the likelihood of y to occur at a given point y. The relation between the mass function and the density function is given by

p_y(y) = \frac{dp_y}{dy}.   (5.6)
The probability mass function for a random vector can be generalized to the
belief-mass function for rfs. The belief-mass function is denoted βΨ (S), and
is defined as the probability that the random finite set Ψ on Y0 is within some
region S, βΨ (S) = Pr(Ψ ⊆ S). Similarly, the probability density function for a
random vector can be generalized to the probability density function pΨ (Y ) of
a random finite set Ψ. The relation between the probability density function p_Ψ(Y) of a random finite set and the belief-mass function β_Ψ(S) is given by

\beta_\Psi(S) = \int_S p_\Psi(Y)\, \delta Y = \Pr(\Psi \subseteq S)   (5.7)

and

p_\Psi(Y) = \frac{\delta \beta_\Psi}{\delta Y}(\emptyset).   (5.8)

Here, \int \cdot\, \delta Y denotes the set integral, and \frac{\delta \cdot}{\delta Y} denotes the set derivative. The definitions of the set integral and the set derivative are too complicated to present here; the interested reader can consult Mahler (2007a).

These definitions have the following practical use when applied to the likelihood
function and the Markov transition density

• In the single target case the probabilistic-mass function of the sensor model
p(S|x) = Pr(z ∈ S|x) is the probability that the random observation z will
be found in a given region S if the target state is x. Analogously, in the
multitarget case the belief-mass function β(S|X) = Pr(Z ⊆ S|X) is the total
probability that all observations will be found in any given region S, if the
multitarget state is X. In the single target case the likelihood function is
obtained by differentiation of the probabilistic-mass function. The same
concept holds for the multitarget case, i.e., the true multitarget likelihood
function p(Z|X) can, using fisst, be derived from β(S|X) according to
p_{k|k}(Z_k|X_k) = \frac{\delta \beta_{k|k}}{\delta Z_k}(\emptyset|X_k)   (5.9)

where \frac{\delta}{\delta Z_k} is the set derivative.

• Similarly, the probability-mass p_{k+1|k}(S|x) = Pr(x_{k+1} ∈ S|x_k) of a single target motion model is the probability that the target will be found in region S
at time step k + 1, given that it had state xk at time step k. In the multitarget
case the belief-mass function βk+1|k (S|X) = Pr(Xk+1 ⊆ S|Xk ) of a multitar-
get motion model is the total probability of finding all targets in region S
at time k + 1, if at time k they had a multi object state X_k. By differentiating p(S|x_k) the Markov transition density p_{k+1|k}(x_{k+1}|x_k) can be derived in
the single target case. Using fisst the multitarget Markov density can be
derived by taking the set derivative of the belief-mass function
p_{k+1|k}(X_{k+1}|X_k) = \frac{\delta \beta_{k+1|k}}{\delta X_{k+1}}(\emptyset|X_k).   (5.10)

5.1.3 The Multitarget Bayes Filter

Consider the single target state propagation summarized in Section 4.1. The filter
recursion is extended to the multitarget case under the fisst assumptions in this
section. The single target random state variables x are substituted with the rfs
of a set of targets. Before giving the filter recursion, the interpretation of a set
probability function is discussed. Consider the multitarget state propagation
· · · → p_{k|k}(X_k|Z_{1:k}) --predictor→ p_{k+1|k}(X_{k+1}|Z_{1:k}) --corrector→ p_{k+1|k+1}(X_{k+1}|Z_{1:k+1}) → · · ·

The meaning of such a set density is exemplified below:

p(∅|Z_{1:k}): probability that there are no targets present
p({x_k^{(1)}}|Z_{1:k}): likelihood of one target with state x_k^{(1)}
p({x_k^{(1)}, x_k^{(2)}}|Z_{1:k}): likelihood of two targets with states x_k^{(1)}, x_k^{(2)}
...
p({x_k^{(1)}, ..., x_k^{(N_x)}}|Z_{1:k}): likelihood of N_x targets with states x_k^{(1)}, ..., x_k^{(N_x)}

i.e., the set density is not only a function of the state variable values, but also a
function of the number of targets Nx in the set.

The single target Bayes filter equations were stated in (3.13); the multitarget Bayes filter counterpart has the form

p_{k|k}(X_k|Z_{1:k}) = \frac{p_k(Z_k|X_k)\, p_{k|k-1}(X_k|Z_{1:k-1})}{\int p_k(Z_k|X_k)\, p_{k|k-1}(X_k|Z_{1:k-1})\, \delta X_k}   (5.11a)

p_{k+1|k}(X_{k+1}|Z_{1:k}) = \int p_{k+1|k}(X_{k+1}|X_k)\, p_{k|k}(X_k|Z_{1:k})\, \delta X_k   (5.11b)

where p_k(Z_k|X_k) is the multisource likelihood function, p_{k+1|k}(X_{k+1}|X_k) is the multitarget Markov transition function, p_{k|k-1}(X_k|Z_{1:k-1}) is the multitarget prior, p_{k|k}(X_k|Z_{1:k}) is the multitarget posterior and p_{k+1|k}(X_{k+1}|Z_{1:k}) is the multitarget prediction. The filter recursion is given by


· · · → p_{k|k}(X_k|Z_{1:k}) --predictor→ p_{k+1|k}(X_{k+1}|Z_{1:k}) --corrector→ p_{k+1|k+1}(X_{k+1}|Z_{1:k+1}) → · · ·

The predictor is driven by the Markov density p(X_{k+1}|X_k), which follows from the motion model X_{k+1} = F(X_k) ∪ B_k (persisting targets united with new targets), and the corrector is driven by the likelihood function p(Z_{k+1}|X_{k+1}), which follows from the measurement model Z_{k+1} = H(X_{k+1}) ∪ C_{k+1} (generated observations united with clutter);

compare with the single target filter recursions given on the same form in Sec-
tion 3.2. Note that the motion model describes a union of the set of persisting
targets and the set of new born, or detected, targets. In the same manner, the
measurement model describes a union of the set of target generated observations
and the set of clutter measurement. Here, the measurement set Z is the set of all
observations received by the sensor.
The conclusions of this section can be summarized in a few items, which will be
further addressed in the next section:
• The Bayesian filter on sets is very similar to standard Bayesian density re-
cursion.
• Related single target densities and operations are replaced with their set
equivalents, compare (5.11) with (3.13).
• A set integral \int \cdot\, \delta X is necessary to derive the posterior

  p(X_k|Z_{1:k}) \propto p(Z_k|X_k) \int p(X_k|X_{k-1})\, p(X_{k-1}|Z_{1:k-1})\, \delta X_{k-1}

• Therefore, the filter is computationally prohibitive.

5.2 Introduction to the PHD filter


The multisource multitarget filter as described in the last section is computation-
ally demanding to implement. This section describes a new approach, first pro-
posed by Mahler (2003), for approximate multitarget nonlinear filters called the
probability hypothesis density (phd) filter. The phd is a first order moment ap-
proximation of the rfs. To describe this approximation the single target case will
first be repeated in Section 5.2.1 and then compared with the multitarget case in
Section 5.2.2, where also the phd is defined. The phd filter recursion is explained
in Section 5.2.3. The section is concluded in Section 5.2.4 with a short literature
survey of generalizations and extensions of the phd filter.

5.2.1 Approximations in Single-Target Tracking

The single target density propagation is illustrated below:

· · · → p_{k|k}(x_k|z_{1:k}) --predictor→ p_{k+1|k}(x_{k+1}|z_{1:k}) --corrector→ p_{k+1|k+1}(x_{k+1}|z_{1:k+1}) → · · ·
            ↓                              ↓                                ↓
        x̂_{k|k}          --predictor→  x̂_{k+1|k}          --corrector→  x̂_{k+1|k+1}
At each time step the first order moment, i.e., the expected value may be col-
lapsed from the density function. In the so called constant-gain filter, of which
the alpha-beta filter is the most well known, only the expected value is propa-
gated over time. This makes the filter computationally faster. Both the first and
the second order moments are propagated in the standard Kalman filter, com-
pare with Section 3.2.1. If the density functions are Gaussian, the first and sec-
ond order moments are the sufficient statistics, hence propagating these variables
describes the density functions sufficiently. However, if the density functions are
not Gaussian, propagating the moments is only an approximation of the density.

5.2.2 Approximations in Multitarget Tracking

Based on the same analogy as with the single target tracking, the density of the rfs may be compressed to its first order moment, and this may be propagated as follows:

· · · → p_{k|k}(X_k|Z_{1:k}) --predictor→ p_{k+1|k}(X_{k+1}|Z_{1:k}) --corrector→ p_{k+1|k+1}(X_{k+1}|Z_{1:k+1}) → · · ·
            ↓                              ↓                                ↓
   D_{k|k}(x_k|Z_{1:k}) --predictor→ D_{k+1|k}(x_{k+1}|Z_{1:k}) --corrector→ D_{k+1|k+1}(x_{k+1}|Z_{1:k+1})
The first order moment is called a phd and denoted D. However, it is not obvious
what the multitarget counterpart of an expected value is; a naïve definition would be

E(Ψ) = \int X \cdot p_\Psi(X)\, \delta X,   (5.12)

where pΨ (X) is the multitarget probability density of a rfs Ψ . However, this inte-
gral is not mathematically defined, since addition of finite subsets is not usefully
defined. A strategy is to transform subsets X into vectors TX in some vector space.
It is important that this transformation X → TX preserves set-theoretic structures,
i.e., transforming unions into sums TX∪X 0 = TX + TX 0 , whenever X ∩ X 0 = ∅. An
indirect expected value can now be defined according to
E(Ψ) = \int T_X \cdot p_\Psi(X)\, \delta X.   (5.13)

The vector is commonly chosen as T_X = \delta_X(x) with

T_X = \begin{cases} 0 & \text{if } X = \emptyset \\ \sum_{y \in X} \delta_y(x) & \text{otherwise} \end{cases}   (5.14)

where δy (x) is the Dirac delta function with peak at y. Given these assumptions,
the multitarget analog of the expected value is given by
D_\Psi(x) = \int \delta_X(x) \cdot p_\Psi(X)\, \delta X.   (5.15)

To exemplify this rather theoretical discussion, consider a realization X_k of the rfs X_k = \{x_k^{(1)}, x_k^{(2)}, \ldots, x_k^{(N_x)}\}. Define a scalar valued function of X_k according to

\delta_{X_k}(x) = \sum_{i=1}^{N_x} \delta_{x_k^{(i)}}(x),   (5.16)

as illustrated in Figure 5.2a. Then, the probability hypothesis density (phd) is the expectation of \delta_{X_k} with respect to X_k,

D_{k|k}(x) = E(\delta_{X_k}) = \int \delta_{X_k}(x) \cdot p_\Psi(X)\, \delta X.   (5.17)

The expected value is illustrated in Figure 5.2b. Note also that the expected number of targets in the volume S is

\hat{N}_x = \int_S D_{k|k}(x)\, dx.   (5.18)

In fact one can define the phd using the expected number of Rtargets as follows:
Given any region S of single target state space X0 the integral S Dk|k (x)dx is the
expected number of targets in S.
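To make (5.18) concrete, the following sketch (my own illustration, not from the thesis; all mixture values are hypothetical) integrates a one-dimensional Gaussian-mixture phd over a region S, which reduces to a weighted sum of Gaussian probabilities:

```python
from math import erf, sqrt

def gauss_cdf(x, mu, sigma):
    """Cumulative distribution function of N(mu, sigma^2)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def expected_targets(weights, means, sigmas, a, b):
    """Integrate a 1-D Gaussian-mixture PHD over S = [a, b]; the
    result is the expected number of targets in S."""
    return sum(w * (gauss_cdf(b, m, s) - gauss_cdf(a, m, s))
               for w, m, s in zip(weights, means, sigmas))

# A PHD with two well-separated unit-weight peaks:
w, mu, sig = [1.0, 1.0], [0.0, 5.0], [0.3, 0.3]
print(expected_targets(w, mu, sig, -2.0, 2.0))    # ~1 target near x = 0
print(expected_targets(w, mu, sig, -10.0, 10.0))  # ~2 targets in total
```

Integrating over the whole state space gives the total expected number of targets, here simply the sum of the mixture weights.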

The phd is an intensity function, which models the joint density over the states.
The phd is multi-modal, with peaks near the actual targets, see Figure 5.3. This
figure shows the phd approximation of the rfs in Figure 5.1. The intensity func-
tion is used in Paper D to represent the density of stationary targets along the
edges of a road. An autonomous driving vehicle should avoid areas with high
intensity and the objective should be to keep the vehicle in the valleys of the
intensity function.

5.2.3 The PHD Filter

The phd filter consists of two equations, the predictor equation, which extrapo-
lates the current phd to the predicted phd at the time of the new observation,
and the corrector equation, which updates the predicted phd with the new obser-
vations.

The phd predictor equation takes care of the motion of existing targets, the birth
and spawning of new targets, and the death of existing targets. The components of
the prediction model are summarized below:

[Figure: two surface plots over the state space: (a) sum of Dirac functions δ_{X_k}; (b) expectation E(δ_{X_k})]

Figure 5.2: Definition and example of the phd; the Dirac functions represent the rfs and the surface the first order moment, i.e., the phd.

Figure 5.3: Illustration of the phd for the time steps k and k + 1. This is the phd of the rfs in Figure 5.1.

    model                        time step        description
    motion:  p(x_{k+1}|x_k)      x_k → x_{k+1}    likelihood that the target will have state x_{k+1} if it had state x_k
    death:   1 − P_S(x_k)        x_k → ∅          probability that the target will vanish if it had state x_k
    spawn:   b(X_{k+1}|x_k)      x_k → X_{k+1}    likelihood that a target will spawn the target set X_{k+1} if it had state x_k
    birth:   b(X_{k+1})          ∅ → X_{k+1}      likelihood that a target set X_{k+1} will appear in the scene

In the table above PS (x) denotes the probability that a target with state x at time k
will survive at time k + 1, p(xk+1 |xk ) is the single target Markov transition density,
b(Xk+1 |xk ) is the phd of targets spawned by other targets with state x, and b(Xk+1 )
is the phd of new appearing targets. The phd predictor equation is given by
    D_{k+1|k}(x_{k+1}|z_{1:k}) = b_{k+1|k}(x_{k+1})
        + ∫ [ P_S(ξ_k) p(x_{k+1}|ξ_k) + b(x_{k+1}|ξ_k) ] D_{k|k}(ξ_k|z_{1:k}) dξ_k.     (5.19)

Here the left hand side is the time updated phd; on the right hand side, b_{k+1|k}(x_{k+1}) is the target birth phd, P_S(ξ_k) the probability of survival, p(x_{k+1}|ξ_k) the Markov transition density, b(x_{k+1}|ξ_k) the spawn phd of existing targets, and D_{k|k}(ξ_k|z_{1:k}) the phd from the previous time step k.
The parts of the model are illustrated with a number of plots in Figure 5.4. One
example of a phd Dk|k (ξ|z1:k ) from the previous time step k is illustrated in Fig-
ure 5.4a. The magnitude of the phd is reduced a bit when multiplied with the
probability of survival PS (ξ) in Figure 5.4b. Thereafter, multiplying with the
Markov transition density and then integrating changes the mean value and the
covariance, see Figure 5.4c. Note, that all target motions are assumed statistically
independent. In Figure 5.4d, the spawn phd b(xk+1 |ξk ) is added in red, and in
Figure 5.4e the birth phd bk+1|k (xk+1 ) in green. Finally, the total predicted phd
Dk+1|k (xk+1 |z1:k ), after time update, is shown in Figure 5.5a.
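For a phd represented by a Gaussian mixture, the predictor step above amounts to simple operations on the mixture components. The sketch below is my own simplification (a scalar linear motion model, spawning omitted, all numbers hypothetical) and mirrors the survival, motion, and birth terms of (5.19):

```python
def predict_phd(components, ps, f, q, births):
    """One GM-PHD predictor step for a scalar model x' = f*x + v, v ~ N(0, q).

    components and births are lists of (weight, mean, variance) triples.
    Surviving components keep their Gaussian shape but are down-weighted
    by the probability of survival ps; birth components are appended.
    Spawning would add one further component per (existing, spawn) pair.
    """
    predicted = [(ps * w, f * m, f * p * f + q) for (w, m, p) in components]
    return predicted + list(births)

prior = [(1.0, 0.0, 0.5), (0.8, 4.0, 0.5)]   # two targets, weights 1.0 and 0.8
birth = [(0.1, -3.0, 1.0)]                   # weak birth component
pred = predict_phd(prior, ps=0.9, f=1.0, q=0.2, births=birth)
print(sum(w for w, _, _ in pred))            # expected count: 0.9 + 0.72 + 0.1 = 1.72
```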

[Figure: six panels of a one-dimensional phd over x: (a) phd from previous k; (b) multiplied with P_S; (c) multiplied with transition density and integrated; (d) adding spawn phd; (e) adding birth phd; (f) predicted phd]

Figure 5.4: An example showing the components of the predictor equation (5.19) of the phd filter.

The phd corrector equation takes care of the likelihood of the existing targets,
misdetection of targets, and clutter. The components of the observation model are
summarized below:
    model                          update       description
    likelihood:    p(z_k|x_k)      x_k → z_k    likelihood that the target will generate observation z_k if it has state x_k
    misdetection:  1 − P_D(x_k)    x_k → ∅      state dependent probability that the target will not generate an observation
    clutter:       c(Z_k)          ∅ → Z_k      likelihood that a set Z_k = {z_k^{(1)}, . . . , z_k^{(M)}} of clutter observations will be generated

In the table above P_D(x_k) is the probability of detection of a target with the state
x at time step k, p(z_k|x_k) is the single target likelihood function and c(Z_k) is the
density of Poisson false alarms, due to clutter. The modeling is done under the
assumption that the observations and the clutter are statistically independent.
Furthermore it is assumed that the multitarget prediction is approximately Poisson.
Given a new scan of data Z_k = {z_k^{(1)}, . . . , z_k^{(M)}} the corrector equation is given
by

    D_{k|k}(x|Z_{1:k}) ≈ Σ_{z∈Z_k} Λ_{k|k}(x|z) / ( λ_k c_k(z) + ∫ Λ_{k|k}(x|z) dx )
        + (1 − P_D(x)) D_{k|k−1}(x|Z_{1:k−1}),                        (5.20)

where the left hand side is the updated phd, the sum over the measurements is the
pseudo-likelihood term, D_{k|k−1}(x|Z_{1:k−1}) is the predicted phd and λ_k is the
average number of false alarms. Furthermore,

    Λ_{k|k}(x|z) = P_D(x) p(z|x) D_{k|k−1}(x|z_{1:k}).               (5.21)
The corrector equation is again exemplified by a series of plots in Figure 5.5. Fig-
ure 5.5a shows the predicted phd. The non-detected targets are modeled by mul-
tiplying the predicted phd with (1 − PD (x)), as illustrated in red in Figure 5.5b.
Assume that two detections are made; the likelihood functions p(z|x) of these indi-
vidual observations are shown in Figure 5.5c. The pseudo-likelihood function of
the detections multiplied with the predicted phd is shown in blue in Figure 5.5d.
Finally, the updated posterior phd is illustrated in Figure 5.5e.
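The corrector step (5.20) can likewise be sketched for a scalar Gaussian-mixture phd. The implementation below is my own illustration (hypothetical numbers, constant clutter intensity): it keeps one down-weighted copy of each predicted component for the missed-detection term and, for every measurement, one Kalman-updated copy normalized by the pseudo-likelihood denominator:

```python
from math import exp, pi, sqrt

def npdf(x, mu, var):
    """Density of N(mu, var) evaluated at x."""
    return exp(-0.5 * (x - mu) ** 2 / var) / sqrt(2.0 * pi * var)

def correct_phd(components, measurements, pd, h, r, clutter):
    """One GM-PHD corrector step for a scalar model z = h*x + e, e ~ N(0, r).

    clutter is the clutter intensity lambda_k * c_k(z), assumed constant.
    """
    updated = [((1.0 - pd) * w, m, p) for (w, m, p) in components]  # missed detections
    for z in measurements:
        new = []
        for (w, m, p) in components:
            s = r + h * p * h                 # innovation variance
            k = p * h / s                     # Kalman gain
            wz = pd * w * npdf(z, h * m, s)   # unnormalized detection weight
            new.append((wz, m + k * (z - h * m), (1.0 - k * h) * p))
        norm = clutter + sum(w for (w, _, _) in new)  # pseudo-likelihood denominator
        updated += [(w / norm, m, p) for (w, m, p) in new]
    return updated

pred = [(1.0, 0.0, 1.0)]
post = correct_phd(pred, measurements=[0.2], pd=0.95, h=1.0, r=0.5, clutter=0.01)
print(sum(w for (w, _, _) in post))  # close to 1: one target, one detection
```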

In Paper E, the pseudo-likelihood function is modified to handle extended targets.


This modification is one of the major contributions of that paper.

5.2.4 Generalizations of the PHD filter


There exists an improvement of the phd filter, called the cardinalized probabil-
ity hypothesis density (cphd) filter, which also propagates the probability dis-
tribution on the target number. This filter generally performs better, but with
increased computational load. The cphd filter is too complex to be described in

[Figure: five panels of a one-dimensional phd over x: (a) phd from prediction; (b) non-detected targets; (c) likelihood functions of two measurements; (d) detected and non-detected targets; (e) updated phd]

Figure 5.5: Steps in the phd update and corrector equations (5.20).

the present chapter, and it is also not used in any of the publications in Part II.
Details about the cphd filter can be found in Mahler (2007b,a), and a Gaussian
mixture implementation is described in Vo et al. (2007).
The standard phd filter as described in this section does not maintain track labels
from time step to time step. A so called peak to track association technique has
been proposed by Lin et al. (2004); Panta et al. (2004) in order to maintain track
labels.
Furthermore, the phd smoother has been studied in Mahler et al. (2010) and Clark (2010),
and a Gaussian mixture implementation is presented by Vo et al. (2010, 2011).
Again, since the smoother is not used further in Part II, it is not described
here.
The so called intensity filter, developed by Streit, is another generalization of the
phd filter. The difference is that in the phd filter certain prior assumptions re-
garding target birth and measurement clutter are made. More specifically the
intensity filter uses an augmented single target state space, while the phd filter
uses only the standard single target state space. The augmented state space rep-
resents the absent target hypothesis, and it therefore allows the filter to on-line
estimate the intensities of the target birth and measurement clutter Poisson point
processes. Note that the phd is an intensity function, and in the phd filter the
birth and clutter intensities are known a priori. A thorough description of the
intensity filter is given in the textbook by Streit (2010) and a summary is given in
the publications Streit and Stone (2008); Streit (2008).

5.3 Gaussian Mixture Implementation


There exist primarily two different implementations of the phd filter, the smc
approximation, see Vo et al. (2003); Sidenbladh (2003); Zajic and Mahler (2003),
and the Gaussian mixture (gm) approximation, introduced in Vo and Ma (2006),
and hence called gm-phd filter. A physical interpretation of the phd filter is
proposed by Erdinc et al. (2009). Both approximations of the phd are exemplified
in Figure 5.6.
The advantage of the smc approximation is that it can accommodate highly nonlinear
motion and measurement models. The disadvantage is that it is computationally
demanding. The smc based approximation is not used further in this
thesis and is therefore not described in more detail here; the interested reader
is referred to Vo et al. (2003) or the textbook by Mahler (2007a). The gm implementation
is discussed in more detail in this section.
The gm-phd filter has the following properties:
• It is less computationally demanding than the smc solution.
• It is exact, i.e., it provides a true closed-form algebraic solution.
• It is easy to implement;

[Figure: (a) sequential Monte Carlo phd approximation (smc-phd); (b) Gaussian mixture phd approximation (gm-phd)]

Figure 5.6: Practical implementation methods of the phd.

• However, the measurement and motion models must be linear-Gaussian.


This section is outlined as follows. The approximations made in the gm-phd
filter are summarized in Section 5.3.1 and the algorithm is given in Section 5.3.2.
Some practical details regarding the implementation, e.g., target extraction are
described in Section 5.3.3.

5.3.1 Gaussian Mixture PHD Approximation


Similarly, as first shown for the theoretical Bayesian multitarget filter in Sec-
tion 5.1.3 and then for the more practical standard phd filter in Section 5.2.2,
the easily implementable filter recursion of the gm-phd filter is summarized be-
low:
              predictor                      corrector
· · · → D_{k|k}(x|Z_{1:k})  →  D_{k+1|k}(x|Z_{1:k})  →  D_{k+1|k+1}(x|Z_{1:k+1}) → · · ·

where each phd is represented by a Gaussian mixture,

    D_{k|k}(x|Z_{1:k}) = Σ_{i=1}^{J_{k|k}} w_{k|k}^{(i)} N(x; μ_{k|k}^{(i)}, P_{k|k}^{(i)}),

and analogously for the predicted phd D_{k+1|k}(x|Z_{1:k}) and the posterior phd D_{k+1|k+1}(x|Z_{1:k+1}). The recursion thus propagates the means μ^{(1)}, . . . , μ^{(J)}, the covariances P^{(1)}, . . . , P^{(J)}, and the weights w^{(1)}, . . . , w^{(J)} of the Gaussian components.

The phd is represented by a Gaussian mixture and the summary statistics of the
Gaussian components, i.e., the mean value µ and the covariance P are propagated
together with a weight w for each Gaussian component. The number of Gaussian
components at each time step is denoted J. The prior, predicted and posterior
gm-phd are shown above.
When deriving the gm-phd filter some assumptions are made, see Vo and Ma
(2006). Since these assumptions are formally repeated in Paper E, on Page 230,

these are only summarized here. It is assumed that each target follows a linear
Gaussian motion model,
pk+1|k (xk+1 |xk ) = N (xk+1 ; Fk xk , Qk ), (5.22)
where Fk is the state transition matrix and Qk is the process noise covariance, and
it is assumed that all target motions are statistically independent. Observations
and clutter are assumed statistically independent and the sensor is assumed to
have a linear Gaussian measurement model, i.e.,
pk|k (zk |xk ) = N (zk ; Hk xk , Rk ), (5.23)
where Hk is the measurement model matrix and Rk is the observation noise co-
variance. Furthermore it is assumed that the survival and detection probabilities
are state independent, i.e., PS (x) = PS and PD (x) = PD . The phd of the birth and
the spawn are Gaussian mixtures
    b(X_{k+1}) = Σ_{i=1}^{J_{b,k}} w_{b,k}^{(i)} N(x; μ_{b,k}^{(i)}, P_{b,k}^{(i)}),                       (5.24)

    b(X_{k+1}|x_k) = Σ_{ℓ=1}^{J_{β,k+1}} w_{β,k+1}^{(ℓ)} N(x; μ_{β,k+1|k}^{(i,ℓ)}, P_{β,k+1|k}^{(i,ℓ)}),   (5.25)

where w_{b,k}^{(i)}, μ_{b,k}^{(i)} and P_{b,k}^{(i)} are the weight, the mean and the covariance of the ith
born component, i = 1, . . . , J_{b,k}, and J_{b,k} is the number of components. Further,
w_{β,k+1}^{(ℓ)}, μ_{β,k+1|k}^{(i,ℓ)} and P_{β,k+1|k}^{(i,ℓ)} are the weight, mean and covariance of the ℓth component,
ℓ = 1, . . . , J_{β,k+1}, at time k + 1 spawned from the ith component at time k,
and J_{β,k+1} is the total number of components spawned from component i.
also assumed that the predicted multitarget rfs are Poisson.

5.3.2 GM-PHD Filter Algorithm

In this section the gm-phd filter algorithm, including the predictor and the corrector
equations, is given. The prediction equation is an approximation of the
general phd prediction equation (5.19), with the difference that the phd components
are substituted with Gaussian mixtures according to
    D_{k|k−1}(x|Z_{1:k−1}) = Σ_{j=1}^{J_{b,k}} w_{b,k}^{(j)} N(x; μ_{b,k}^{(j)}, P_{b,k}^{(j)})
        + P_S Σ_{j=1}^{J_{k−1}} w_{k−1}^{(j)} N(x; μ_{k|k−1}^{(j)}, P_{k|k−1}^{(j)})
        + Σ_{j=1}^{J_{k−1}} Σ_{ℓ=1}^{J_{β,k}} w_{β,k|k−1}^{(j,ℓ)} N(x; μ_{β,k|k−1}^{(j,ℓ)}, P_{β,k|k−1}^{(j,ℓ)}).   (5.26)

In the case when the motion and sensor models are linear, the Gaussian compo-
nents can be predicted using the prediction step of the Kalman filter (3.16). The
illustration in Figure 5.4 still holds for this Gaussian case, and the discussion is
therefore not repeated again. The corrector equation is an approximation of the

general phd corrector equation (5.20), with the difference that the phd compo-
nents are substituted with Gaussian mixtures according to
    D_{k|k}(x|Z_{1:k}) ≈ Σ_{i=1}^{J_{k|k−1}} w_{k|k}^{(i)} N(x; μ_{k|k}^{(i)}, P_{k|k}^{(i)})
        + Σ_{j=1}^{N_z} Σ_{i=1}^{J_{k|k−1}} w_{k|k}^{(i,j)} N(x; μ_{k|k}^{(i,j)}, P_{k|k}^{(i,j)}),        (5.27)

where the first sum corresponds to the non-detected targets and the second to the detected targets.

The complete filter recursion, as it is presented by Vo and Ma (2006), is given in


Algorithm 7.

Algorithm 7 Gaussian Mixture phd filter

Require: the Gaussians {w_{k−1}^{(i)}, μ_{k−1}^{(i)}, P_{k−1}^{(i)}}_{i=1}^{J_{k−1}} and the measurement set Z_k
    Prediction for birth and spawn targets
 1: i = 0
 2: for j = 1 to J_{b,k} do
 3:     i = i + 1
 4:     w_{k|k−1}^{(i)} = w_{b,k}^{(j)},  μ_{k|k−1}^{(i)} = μ_{b,k}^{(j)},  P_{k|k−1}^{(i)} = P_{b,k}^{(j)}
 5: end for
 6: for j = 1 to J_{β,k} do
 7:     for ℓ = 1 to J_{k−1} do
 8:         i = i + 1
 9:         w_{k|k−1}^{(i)} = w_{k−1}^{(ℓ)} w_{β,k}^{(j)}
10:         μ_{k|k−1}^{(i)} = F_{β,k−1}^{(j)} μ_{k−1}^{(ℓ)} + d_{β,k−1}^{(j)},  P_{k|k−1}^{(i)} = Q_{β,k−1}^{(j)} + F_{β,k−1}^{(j)} P_{k−1}^{(ℓ)} (F_{β,k−1}^{(j)})^T
11:     end for
12: end for
    Prediction for existing targets
13: for j = 1 to J_{k−1} do
14:     i = i + 1
15:     w_{k|k−1}^{(i)} = P_S w_{k−1}^{(j)}
16:     μ_{k|k−1}^{(i)} = F_{k−1} μ_{k−1}^{(j)},  P_{k|k−1}^{(i)} = F_{k−1} P_{k−1}^{(j)} F_{k−1}^T + Q_{k−1}
17: end for
18: J_{k|k−1} = i
    Construction of phd update components
19: for j = 1 to J_{k|k−1} do
20:     η_{k|k−1}^{(j)} = H_k μ_{k|k−1}^{(j)},  S_k^{(j)} = R_k + H_k P_{k|k−1}^{(j)} H_k^T
21:     K_k^{(j)} = P_{k|k−1}^{(j)} H_k^T (S_k^{(j)})^{−1},  P_{k|k}^{(j)} = (I − K_k^{(j)} H_k) P_{k|k−1}^{(j)}
22: end for
    Update non-detected targets
23: for j = 1 to J_{k|k−1} do
24:     w_k^{(j)} = (1 − P_D) w_{k|k−1}^{(j)}
25:     μ_k^{(j)} = μ_{k|k−1}^{(j)},  P_k^{(j)} = P_{k|k−1}^{(j)}
26: end for
    Update detected targets
27: ℓ = 0
28: for each z ∈ Z_k do
29:     ℓ = ℓ + 1
30:     for j = 1 to J_{k|k−1} do
31:         w_k^{(ℓJ_{k|k−1}+j)} = P_D w_{k|k−1}^{(j)} N(z; η_{k|k−1}^{(j)}, S_k^{(j)})
32:         μ_k^{(ℓJ_{k|k−1}+j)} = μ_{k|k−1}^{(j)} + K_k^{(j)} (z − η_{k|k−1}^{(j)})
33:         P_k^{(ℓJ_{k|k−1}+j)} = P_{k|k}^{(j)}
34:     end for
35:     for j = 1 to J_{k|k−1} do
36:         w_k^{(ℓJ_{k|k−1}+j)} = w_k^{(ℓJ_{k|k−1}+j)} / ( λ c(z) + Σ_{i=1}^{J_{k|k−1}} w_k^{(ℓJ_{k|k−1}+i)} )
37:     end for
38: end for
39: J_k = ℓ J_{k|k−1} + J_{k|k−1}
40: return {w_k^{(i)}, μ_k^{(i)}, P_k^{(i)}}_{i=1}^{J_k}

To allow for nonlinear motion and sensor models, the ekf or ukf can be utilized;
in that case the prediction and update of the single Gaussian components are instead
performed as described in Algorithms 4 and 5, respectively. The ukf version
of the gm-phd filter is used in Paper D to track the edges of a road and thereby
to create an intensity map of the environment. An extension of the kf based
gm-phd filter, used to track extended targets, is presented in Paper E. In that paper modifications
are made to the update step to account for the extended targets, which
may give rise to more than one measurement per target and time step. In Paper F
the gm-phd filter is used as an inner core in a tracking filter which aims at estimating
the size and shape of targets. The outer shell of the approach represents
the position and motion of the target, whereas the phd is defined on the surface
of the target as a means to estimate its shape.

5.3.3 Merge, Prune and Extract Targets


A few approximations and modifications to the gm-phd filter must be made in or-
der to obtain a computationally realizable filter. The number of Gaussian compo-
nents grows rapidly, since each predicted Gaussian component is updated with
each measurement. To handle this situation components with low weights are
pruned and similar components are merged. The procedure is described in Vo
and Ma (2006) and in Algorithm 8. A model based approach to merge compo-
nents is presented in Paper D.
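The pruning and merging procedure can be sketched as follows for a scalar phd; this is my own simplification (hypothetical thresholds and mixture values), where the Mahalanobis test reduces to a scalar quotient:

```python
def prune_and_merge(components, trunc_thresh, merge_thresh, max_comp):
    """Pruning and merging of a scalar Gaussian mixture.

    Components below trunc_thresh are discarded.  Components whose squared
    Mahalanobis distance to the currently strongest component is at most
    merge_thresh are moment-matched into a single Gaussian.
    """
    remaining = [c for c in components if c[0] > trunc_thresh]
    merged = []
    while remaining:
        # mean of the strongest remaining component
        _, mj, _ = max(remaining, key=lambda c: c[0])
        close = [c for c in remaining if (c[1] - mj) ** 2 / c[2] <= merge_thresh]
        remaining = [c for c in remaining if c not in close]
        w = sum(c[0] for c in close)
        m = sum(c[0] * c[1] for c in close) / w
        p = sum(c[0] * (c[2] + (m - c[1]) ** 2) for c in close) / w
        merged.append((w, m, p))
    merged.sort(key=lambda c: -c[0])
    return merged[:max_comp]            # keep at most max_comp components

mix = [(0.6, 0.0, 0.2), (0.3, 0.05, 0.2), (1e-6, 9.0, 0.2), (0.5, 5.0, 0.2)]
out = prune_and_merge(mix, trunc_thresh=1e-5, merge_thresh=4.0, max_comp=10)
print(len(out))  # 2: the two components near x = 0 merge, the weak one is pruned
```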
The extraction of multiple-target state estimates is straightforward from the posterior
phd. It can be assumed that closely spaced components are reasonably
well separated after the prune and merge step of the algorithm. The means of
the remaining Gaussian components are the local maxima of the phd. The states
are extracted by selecting the means of the Gaussians that have weights greater

Algorithm 8 Merging and Pruning for the gm-phd filter

Require: the Gaussian components {w_k^{(i)}, μ_k^{(i)}, P_k^{(i)}}_{i=1}^{J_{k|k}}, the truncation threshold δ_t
and the merging threshold δ_m
 1: initiate: set ℓ = 0 and I = {i = 1, . . . , J_{k|k} | w_k^{(i)} > δ_t}
 2: repeat
 3:     ℓ := ℓ + 1
 4:     j := arg max_{i∈I} w_k^{(i)}
 5:     L := { i ∈ I | (μ_k^{(i)} − μ_k^{(j)})^T (P_k^{(i)})^{−1} (μ_k^{(i)} − μ_k^{(j)}) ≤ δ_m }
 6:     w̃_k^{(ℓ)} = Σ_{i∈L} w_k^{(i)}
 7:     μ̃_k^{(ℓ)} = (1 / w̃_k^{(ℓ)}) Σ_{i∈L} w_k^{(i)} μ_k^{(i)}
 8:     P̃_k^{(ℓ)} = (1 / w̃_k^{(ℓ)}) Σ_{i∈L} w_k^{(i)} ( P_k^{(i)} + (μ̃_k^{(ℓ)} − μ_k^{(i)}) (μ̃_k^{(ℓ)} − μ_k^{(i)})^T )
 9:     I := I \ L
10: until I = ∅
11: if ℓ > J_max, then choose the J_max Gaussians {w̃_k^{(i)}, μ̃_k^{(i)}, P̃_k^{(i)}} with the largest weights
12: return the Gaussian components {w̃_k^{(i)}, μ̃_k^{(i)}, P̃_k^{(i)}}_{i=1}^{ℓ}

than some threshold, see Algorithm 9. An advantage with the phd filter is that
a hard decision, i.e., extraction of targets, is only performed for obtaining an out-
put from the filter at each time step. The extracted targets are not used within the
filter; the information about the intensity of targets remains in the filter without
the need to make hard decisions, which would destroy information.

Algorithm 9 Multitarget State Extraction

Require: {w_k^{(i)}, μ_k^{(i)}, P_k^{(i)}}_{i=1}^{J_k} and a threshold δ_e
 1: Set X̂_k = ∅
 2: for i = 1 to J_k do
 3:     if w_k^{(i)} > δ_e then
 4:         for j = 1 to round(w_k^{(i)}) do
 5:             update X̂_k = [ X̂_k  μ_k^{(i)} ]
 6:         end for
 7:     end if
 8: end for
 9: return the multitarget state estimate X̂_k
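The extraction step can be sketched as follows (my own illustration with hypothetical mixture values): each component whose weight exceeds the threshold contributes round(w) copies of its mean to the multitarget state estimate.

```python
def extract_targets(components, thresh):
    """State extraction from a Gaussian-mixture PHD: a component with
    weight w > thresh contributes round(w) copies of its mean."""
    estimates = []
    for (w, m, _) in components:
        if w > thresh:
            estimates.extend([m] * round(w))
    return estimates

mix = [(0.9, 1.0, 0.1), (2.2, 4.0, 0.1), (0.1, 7.0, 0.1)]
print(extract_targets(mix, thresh=0.5))
# [1.0, 4.0, 4.0]: the weight-2.2 component is read as two coincident targets
```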
6 Concluding Remarks
In the first part an overview of the basics behind the research reported in this
thesis has been presented. This part also aims at explaining how the papers in
Part II relate to each other and to the existing theory. A conclusion of the re-
sults presented in this thesis is given in Section 6.1 and ideas for future work are
discussed in Section 6.2.

6.1 Conclusion
The work presented in this thesis has dealt with the problem of estimating the
motion and parameters of a vehicle, as well as representing and estimating its
surroundings. In this context the surroundings consist of other vehicles, station-
ary objects, and the road. Here, a major part of the work is not only the estima-
tion problem itself, but also the way in which to represent the environment, i.e.,
building maps of stationary objects and describing the size of other vehicles.
The second part of the thesis begins in Paper A with an overview of mapping
techniques used to describe a road and its closer environment. Four different
types of mapping philosophies are described, beginning with so called feature
based maps, where each element of the map describes both the properties and
the location of the specific map-element. In location based maps each map-element
already corresponds to a certain location and the value of the element describes
only the properties in that position. Hence, feature based maps are usually rather
sparse. Furthermore, road maps and intensity based maps are also summarized.
The intensity based map is the main topic of Paper D.
Paper B is concerned with estimating the lane geometry. The lane markings are
described by a polynomial and the coefficients are the states to estimate. This


problem can be solved with a camera and computer vision, but by fusing the
data obtained from the image processing with information about the ego vehi-
cle’s motion and the other vehicles’ movement on the road, the road geometry
estimate can be improved. The other vehicles are tracked primarily by using
measurements from a radar. The motion of the ego vehicle is estimated by com-
bining measurements from the vehicle’s imu, steering wheel angle sensor and
wheel velocity sensors in a model based filter. The model is in this case the so
called single track model or bicycle model.

Papers C and D both deal with the problem of estimating and representing stationary
objects along the edges of a road. Paper C associates the radar measurements
to extended stationary objects in the form of curved lines and tracks these
lines as extended targets. The lines are modeled as polynomials, and one novelty
in this paper is that the approach leads to a so called errors in variables problem
when noisy point measurements are associated to polynomials. The standard approach
is to assume that the measurements are noisy in only one direction when
associated to a line; here, however, noise is assumed to be present in all directions.
This assumption is of course more realistic and the results are more accurate.

Paper D describes a new type of map, the intensity based map. In this paper the
environment, i.e., the stationary objects, is represented by an intensity function
or probability hypothesis density (phd) as it is also called. The intensity function
is described by a Gaussian mixture. This makes the representation very sparse,
since only a number of sufficient statistics must be tracked, but also very rich
since the extension of the Gaussian components can cover and model larger areas.
It is shown how prior knowledge of the road construction can be used to both
improve the update of the filter and to simplify the map representation.

Papers E and F deal with the tracking of extended moving targets. In Paper E the
target is still modeled as a point target; however, the model allows for obtaining more than
one measurement per target. A modification of the pseudo-likelihood function
used in the update step of the so called phd filter is presented. In the standard
filter only one measurement can be obtained from each target per time step. The
novelty of the paper is the practical implementation of the pseudo-likelihood
function and the partitioning of the measurement data. In a number of simulations
and experiments the advantages of the modifications are shown. A drawback is
that the wrong number of targets is estimated in some rare cases; this is further
discussed in the next section, which outlines some possible directions of future work.

In Paper F, the size and the shape of an extended target are also estimated. It
is still assumed that the sensor obtains a number of point measurements. In or-
der to associate the point measurements to the shaped target it is assumed that
the measurements have been produced by a number of measurement generating
points on the surface of the target. The measurement generating points are repre-
sented by a random finite set and these are only used as a means to estimate the
size and the shape of the target. Only one simple simulation example is shown
and also here future work will cover more situations.

The last paper, Paper G, aims at estimating the wheel radii of a vehicle. This is
made under the assumption that a change in the wheel radii is related to a change
in tire pressure, and that tire pressure losses can be detected using this approach.
The proposed method is only based on measurements from the wheel speed sen-
sors and the gps position information. The wheel radii are represented with noise
parameters in the state space model. The novelty lies in a Bayesian approach to
estimate these time-varying noise parameters on-line using a marginalized parti-
cle filter. Experimental results of the proposed approach show many advantages,
both regarding the accuracy and the reduced computational complexity, when it
is compared with other methods.
The approaches in Papers A–D and G have been evaluated on real data from both
freeways and rural roads in Sweden. In Paper E, data of moving pedestrians has
been used. Of the seven publications in Part II, six are journal papers and one is
a peer-reviewed conference paper. Five of the papers are published and two are
still under review.

6.2 Future Research


The radar and camera data used in this thesis is generally preprocessed. How-
ever, the preprocessing is not covered in this thesis. Specifically, more effort can
be spent on the image processing to increase the information content. For exam-
ple within the area of odometry the estimate could be more accurate if the camera
information is used in addition to the imu measurements. This is called visual
odometry and it would probably improve the estimate of the body side slip angles,
especially during extreme maneuvers where the tire road interaction is strongly
nonlinear. Since only one camera is used, the inverse depth parametrization in-
troduced by Civera et al. (2008) is an interesting approach, see e.g., Schön and
Roll (2009); Nilsson et al. (2011) for automotive examples on visual odometry.
To verify the state estimates, more accurate reference values are needed as well.
Furthermore, the methods presented in Paper G could possibly be used as a ba-
sis to more accurately describe the road-tire interaction and thereby improve the
estimate of the slip of the vehicle.
Different aspects of estimating extended objects have been discussed in this thesis;
however, an efficient solution for estimating the shape of arbitrary objects
has not been found. Paper F is closest to this goal, but more experimental studies
must be performed and the computational time must be reduced by finding more
efficient forms of representation. Also, as concluded in Paper E, the number of
targets is falsely estimated in some rare cases, e.g., in the case of occlusion or not
well separated targets. In that specific case, the so called cardinalized phd filter
(cphd) is assumed to improve the results drastically, see Orguner et al. (2011).
However, more experimental results must be obtained before conclusions can be
drawn.
Currently there is a lot of activity within the computer vision community to en-
able non-planar road models, making use of parametric road models similar to

the ones used in this thesis. A very interesting avenue for future work is to com-
bine the ideas presented in this thesis with information from a camera about the
height differences on the road side within a sensor fusion framework. This would
probably improve the estimates, especially in situations when there are too few
radar measurements available.
Bibliography

A. Abdulle and G. Wanner. 200 years of least squares method. Elemente der
Mathematik, 57:45–60, May 2002.

M. Adams, W. S. Wijesoma, and A. Shacklock. Autonomous navigation: Achieve-


ments in complex environments. IEEE Instrumentation & Measurement Mag-
azine, 10(3):15–21, June 2007.

M. Ahrholdt, F. Bengtsson, L. Danielsson, and C. Lundquist. SEFS – results on sen-


sor data fusion system development. In Proceedings of the World Congress on
Intelligent Transportation Systems and Services, Stockholm, Sweden, Septem-
ber 2009.

B. D. O. Anderson and J. B. Moore. Optimal Filtering. Information and system


science series. Prentice Hall, Englewood Cliffs, NJ, USA, 1979.

D. Angelova and L. Mihaylova. Extended object tracking using Monte Carlo meth-
ods. IEEE Transactions on Signal Processing, 56(2):825–832, February 2008.

T. Ardeshiri, F. Larsson, F. Gustafsson, T. B. Schön, and M. Felsberg. Bicycle track-


ing using ellipse extraction. In Proceedings of the International Conference on
Information Fusion, pages 1–8, Chicago, IL, USA, July 2011.

M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle


filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions
on Signal Processing, 50(2):174–188, February 2002.

T. Bailey and H. Durrant-Whyte. Simultaneous localization and mapping (SLAM):


Part II. IEEE Robotics & Automation Magazine, 13(3):108–117, September
2006.

Y. Bar-Shalom and T. E. Fortmann. Tracking and Data Association. Mathematics


in science and engineering. Academic Press, Orlando, FL, USA, 1988.

Y. Bar-Shalom and E. Tse. Tracking in a cluttered environment with probabilistic


data association. Automatica, 11(5):451–460, September 1975.


Y. Bar-Shalom, X. Rong Li, and T. Kirubarajan. Estimation with Applications to


Tracking and Navigation. John Wiley & Sons, New York, NY, USA, 2001.

M. Baum and U. D. Hanebeck. Random hypersurface models for extended object


tracking. In IEEE International Symposium on Signal Processing and Infor-
mation Technology, pages 178–183, Ajman, United Arab Emirates, December
2009.

M. Baum and U. D. Hanebeck. Shape tracking of extended objects and group


targets with star-convex RHMs. In Proceedings of the International Conference
on Information Fusion, pages 338–345, Chicago, IL, USA, July 2011.

M. Baum, B. Noack, and U. D. Hanebeck. Extended object and group tracking


with elliptic random hypersurface models. In Proceedings of the International
Conference on Information Fusion, pages 1–8, Edinburgh, UK, July 2010.

T. Bayes. An essay towards solving a problem in the doctrine of chances. The


Philosophical Transactions, 53:370–418, 1763.

R. Behringer. Visuelle Erkennung und Interpretation des Fahrspurverlaufes


durch Rechnersehen für ein autonomes Straßenfahrzeug, volume 310 of
Fortschrittsberichte VDI, Reihe 12. VDI Verlag, Düsseldorf, Germany, 1997.
Also as: PhD Thesis, Universität der Bundeswehr, 1996.

F. Bengtsson. Models for tracking in automotive safety systems. Licentiate Thesis


No R012/2008, Department of Signals and Systems, Chalmers University of
Technology, 2008.

F. Bengtsson and L. Danielsson. Designing a real time sensor data fusion system
with application to automotive safety. In Proceedings of the World Congress
on Intelligent Transportation Systems and Services, New York, USA, November
2008.

D. P. Bertsekas. The auction algorithm for assignment and other network flow
problems: a tutorial. Interfaces, 20(4):133–149, July 1990.

S. S. Blackman and R. Popoli. Design and Analysis of Modern Tracking Systems.
Artech House, Norwood, MA, USA, 1999.

Y. Boers, H. Driessen, J. Torstensson, M. Trieb, R. Karlsson, and F. Gustafsson.
Track-before-detect algorithm for tracking extended targets. IEE Proceedings
of Radar, Sonar and Navigation, 153(4):345–351, August 2006.

M. Bühren and B. Yang. Simulation of automotive radar target lists using a novel
approach of object representation. In Proceedings of the IEEE Intelligent Vehi-
cles Symposium, pages 314–319, Tokyo, Japan, 2006.

O. Cappe, S. J. Godsill, and E. Moulines. An overview of existing methods and
recent advances in sequential Monte Carlo. Proceedings of the IEEE, 95(5):
899–924, May 2007.
Bibliography 83

S. Challa, M. R. Morelande, D. Musicki, and R. J. Evans. Fundamentals of Object
Tracking. Cambridge University Press, Cambridge, UK, 2011.
J. Civera, A. J. Davison, and J. Montiel. Inverse depth parametrization for monoc-
ular SLAM. IEEE Transactions on Robotics, 24(5):932–945, October 2008.
D. E. Clark. First-moment multi-object forward-backward smoothing. In Pro-
ceedings of the International Conference on Information Fusion, pages 1–6,
Edinburgh, UK, July 2010.
D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes.
Vol. 1, Elementary Theory and Methods. Springer, New York, NY, USA, 2 edition,
2003.
L. Danielsson. Tracking Theory for Preventive Safety Systems. Licentiate Thesis
No R004/2008, Department of Signals and Systems, Chalmers University of
Technology, 2008.
L. Danielsson. Tracking and radar sensor modelling for automotive safety sys-
tems. PhD thesis No 3064, Department of Signals and Systems, Chalmers Uni-
versity of Technology, 2010.
J. Degerman, J. Wintenby, and D. Svensson. Extended target tracking using prin-
cipal components. In Proceedings of the International Conference on Informa-
tion Fusion, pages 1–8, Chicago, IL, USA, July 2011.
J. C. Dezert. Tracking maneuvering and bending extended target in cluttered
environment. In Proceedings of Signal and Data Processing of Small Targets,
volume 3373, pages 283–294, Orlando, FL, USA, April 1998. SPIE.
E. D. Dickmanns. Dynamic Vision for Perception and Control of Motion.
Springer, London, UK, 2007.
P. M. Djuric, J. H. Kotecha, J. Zhang, Y. Huang, T. Ghirmai, M. F. Bugallo, and
J. Miguez. Particle filtering. IEEE Signal Processing Magazine, 20(5):19–38,
September 2003.
A. Doucet, S. J. Godsill, and C. Andrieu. On sequential Monte Carlo sampling
methods for Bayesian filtering. Statistics and Computing, 10(3):197–208, 2000.
A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Meth-
ods in Practice. Springer Verlag, New York, USA, 2001.
O. E. Drummond, S. S. Blackman, and G. C. Petrisor. Tracking clusters and
extended objects with multiple sensors. In Proceedings of Signal and Data
Processing of Small Targets, volume 1305, pages 362–375, Orlando, FL, USA,
January 1990. SPIE.
H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping (SLAM):
Part I. IEEE Robotics & Automation Magazine, 13(2):99–110, June 2006.
A. Eidehall. Tracking and threat assessment for automotive collision avoidance.
PhD thesis No 1066, Linköping Studies in Science and Technology, Linköping,
Sweden, January 2007.
A. Eidehall and F. Gustafsson. Obtaining reference road geometry parameters
from recorded sensor data. In Proceedings of the IEEE Intelligent Vehicles
Symposium, pages 256–260, Tokyo, Japan, June 2006.
A. Eidehall, J. Pohl, and F. Gustafsson. Joint road geometry estimation and vehicle
tracking. Control Engineering Practice, 15(12):1484–1494, December 2007.
O. Erdinc, P. Willett, and Y. Bar-Shalom. The bin-occupancy filter and its con-
nection to the PHD filters. IEEE Transactions on Signal Processing, 57(11):
4232–4246, November 2009.
M. Feldmann, D. Fränken, and W. Koch. Tracking of extended objects and group
targets using random matrices. IEEE Transactions on Signal Processing, 59(4):
1409–1420, April 2011.
R. A. Fisher. On an absolute criterion for fitting frequency curves. Messenger of
Mathematics, 41:155–160, 1912.
R. A. Fisher. On the mathematical foundations of theoretical statistics. Philo-
sophical Transactions of the Royal Society Series A, 222:309–368, 1922.
T. Fortmann, Y. Bar-Shalom, and M. Scheffe. Sonar tracking of multiple targets
using joint probabilistic data association. IEEE Journal of Oceanic Engineering,
8(3):173–184, July 1983.
A. Gern, U. Franke, and P. Levi. Advanced lane recognition - fusing vision and
radar. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages 45–51,
Dearborn, MI, USA, October 2000.
A. Gern, U. Franke, and P. Levi. Robust vehicle tracking fusing radar and vision.
In Proceedings of the international conference of multisensor fusion and inte-
gration for intelligent systems, pages 323–328, Baden-Baden, Germany, August
2001.
K. Gilholm and D. Salmond. Spatial distribution model for tracking extended
objects. IEE Proceedings of Radar, Sonar and Navigation, 152(5):364–371, Oc-
tober 2005.
K. Gilholm, S. Godsill, S. Maskell, and D. Salmond. Poisson models for extended
target and group tracking. In Proceedings of Signal and Data Processing of
Small Targets, volume 5913, pages 230–241, San Diego, CA, USA, August 2005.
SPIE.
T. D. Gillespie. Fundamentals of Vehicle Dynamics. SAE Society of Automotive
Engineers, Warrendale, PA, USA, 1992.
I. R. Goodman, R. P. S. Mahler, and H. T. Nguyen. Mathematics of data fusion.
Kluwer Academic, Dordrecht, 1997.
N. J. Gordon, D. J. Salmond, and A. F. M. Smith. Novel approach to
nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings on Radar
and Signal Processing, 140(2):107–113, April 1993.
K. Granström, C. Lundquist, and U. Orguner. A Gaussian mixture PHD filter for
extended target tracking. In Proceedings of the International Conference on
Information Fusion, Edinburgh, UK, July 2010.
K. Granström, C. Lundquist, and U. Orguner. Extended target tracking using a
Gaussian-mixture PHD filter. IEEE Transactions on Aerospace and Electronic
Systems, 2011a. Under review.
K. Granström, C. Lundquist, and U. Orguner. Tracking rectangular and elliptical
extended targets using laser measurements. In Proceedings of the International
Conference on Information Fusion, Chicago, IL, USA, July 2011b.
J. Gunnarsson. Models and Algorithms - with applications to vehicle tracking
and frequency estimation. PhD thesis No 2628, Department of Signals and
Systems, Chalmers University of Technology, June 2007.
J. Gunnarsson, L. Svensson, E. Bengtsson, and L. Danielsson. Joint driver inten-
tion classification and tracking of vehicles. In IEEE Nonlinear Statistical Signal
Processing Workshop, pages 95–98, September 2006.
J. Gunnarsson, L. Svensson, L. Danielsson, and F. Bengtsson. Tracking vehicles
using radar detections. In Proceedings of the IEEE Intelligent Vehicles Sympo-
sium, pages 296–302, Istanbul, Turkey, June 2007.
F. Gustafsson. Adaptive Filtering and Change Detection. John Wiley & Sons, New
York, USA, 2000.
F. Gustafsson. Automotive safety systems. IEEE Signal Processing Magazine, 26
(4):32–47, July 2009.
F. Gustafsson. Particle filter theory and practice with positioning applications.
IEEE Aerospace and Electronic Systems Magazine, 25(7):53–82, July 2010.
G. Hendeby. Performance and Implementation Aspects of Nonlinear Filtering.
PhD thesis No 1161, Linköping Studies in Science and Technology, Linköping,
Sweden, February 2008.
J. Jansson. Collision Avoidance Theory with Applications to Automotive Colli-
sion Mitigation. PhD thesis No 950, Linköping Studies in Science and Technol-
ogy, Linköping, Sweden, June 2005.
A. H. Jazwinski. Stochastic processes and filtering theory. Mathematics in science
and engineering. Academic Press, New York, USA, 1970.
K. H. Johansson, M. Törngren, and L. Nielsen. Vehicle applications of controller
area network. In D. Hristu-Varsakelis and W. S. Levine, editors, Handbook of
Networked and Embedded Control Systems, pages 741–765. Birkhäuser, 2005.
S. J. Julier. The scaled unscented transformation. In Proceedings of the American
Control Conference, volume 6, pages 4555–4559, 2002.

S. J. Julier and J. K. Uhlmann. New extension of the Kalman filter to nonlinear sys-
tems. In Signal Processing, Sensor Fusion, and Target Recognition VI, volume
3068, pages 182–193, Orlando, FL, USA, 1997. SPIE.

S. J. Julier and J. K. Uhlmann. Reduced sigma point filters for the propagation of
means and covariances through nonlinear transformations. In Proceedings of
the American Control Conference, volume 2, pages 887–892, 2002.

S. J. Julier and J. K. Uhlmann. Unscented filtering and nonlinear estimation. Pro-
ceedings of the IEEE, 92(3):401–422, March 2004.

S. J. Julier, J. K. Uhlmann, and H. F. Durrant-Whyte. A new approach for filtering
nonlinear systems. In Proceedings of the American Control Conference,
volume 3, pages 1628–1632, June 1995.

T. Kailath. Linear systems. Prentice Hall, Englewood Cliffs, NJ, USA, 1980.

T. Kailath, A. H. Sayed, and B. Hassibi. Linear Estimation. Information and
System Sciences Series. Prentice Hall, Upper Saddle River, NJ, USA, 2000.

R. E. Kalman. A new approach to linear filtering and prediction problems. Trans-
actions of the ASME, Journal of Basic Engineering, 82:35–45, 1960.

R. Karlsson. Particle Filtering for Positioning and Tracking Applications. PhD
thesis No 924, Linköping Studies in Science and Technology, Linköping, Swe-
den, March 2005.

S. M. Kay. Fundamentals of Statistical Signal Processing, Volume I: Estimation
Theory. Prentice Hall Signal Processing. Prentice Hall, Upper Saddle River, NJ,
USA, 1993.

U. Kiencke, S. Dais, and M. Litschel. Automotive serial controller area network.
Technical Report 860391, SAE International Congress, 1986.

J. W. Koch. Bayesian approach to extended object and cluster tracking using
random matrices. IEEE Transactions on Aerospace and Electronic Systems, 44
(3):1042–1059, July 2008.

M. E. Liggins, D. L. Hall, and J. Llinas. Handbook of multisensor data fusion:
theory and practice. CRC Press, Boca Raton, FL, USA, 2 edition, 2009.

L. Lin, Y. Bar-Shalom, and T. Kirubarajan. Data association combined with the
probability hypothesis density filter for multitarget tracking. In Proceedings
of Signal and Data Processing of Small Targets, volume 5428, pages 464–475,
Orlando, FL, USA, 2004. SPIE.

L. Ljung. System identification, Theory for the user. System sciences series. Pren-
tice Hall, Upper Saddle River, NJ, USA, 2 edition, 1999.
C. Lundquist. Automotive Sensor Fusion for Situation Awareness. Licentiate
Thesis No 1422, Department of Electrical Engineering, Linköping University,
Sweden, 2009.

C. Lundquist. Method for stabilizing a vehicle combination. U.S.
Patent US 8010253 August 30, 2011 and German Patent Application
DE 102007008342 August 21, 2008, Priority date February 20, 2007.

C. Lundquist and R. Großheim. Method and device for determining steering an-
gle information. German Patent Application DE 10 2007 000 958 May 14, 2009,
International Patent Application WO 2009 047 020 April 16, 2009 and Euro-
pean Patent Application EP 2205478 April 16, 2009, Priority date October 2,
2007.

C. Lundquist and W. Reinelt. Back driving assistant for passenger cars with
trailer. In Proceedings of the SAE World Congress, SAE paper 2006-01-0940,
Detroit, MI, USA, April 2006a.

C. Lundquist and W. Reinelt. Rückwärtsfahrassistent für PKW mit Aktive Front
Steering. In Proceedings of the AUTOREG (Steuerung und Regelung von
Fahrzeugen und Motoren), VDI Bericht 1931, pages 45–54, Wiesloch, Germany,
March 2006b.

C. Lundquist and W. Reinelt. Electric motor rotor position monitoring method
for electrically aided steering system e.g. steer by wire, for motor vehicle,
involves outputting alarm when difference between measurement value and
estimated value of motor exceeds threshold. German Patent Application
DE 102005016514 October 12, 2006, Priority date April 8, 2006.

C. Lundquist and T. B. Schön. Road geometry estimation and vehicle tracking
using a single track model. In Proceedings of the IEEE Intelligent Vehicles
Symposium, pages 144–149, Eindhoven, The Netherlands, June 2008.

C. Lundquist and T. B. Schön. Estimation of the free space in front of a moving
vehicle. In Proceedings of the SAE World Congress, SAE paper 2009-01-1288,
Detroit, MI, USA, April 2009a.

C. Lundquist and T. B. Schön. Recursive identification of cornering stiffness pa-
rameters for an enhanced single track model. In Proceedings of the IFAC Sym-
posium on System Identification, pages 1726–1731, Saint-Malo, France, July
2009b.

C. Lundquist and T. B. Schön. Joint ego-motion and road geometry estimation.
Information Fusion, 12:253–263, October 2011.

C. Lundquist, U. Orguner, and T. B. Schön. Tracking stationary extended objects
for road mapping using radar measurements. In Proceedings of the IEEE In-
telligent Vehicles Symposium, pages 405–410, Xi’an, China, June 2009.

C. Lundquist, L. Danielsson, and F. Gustafsson. Random set based road mapping
using radar measurements. In Proceedings of the European Signal Processing
Conference, pages 219–223, Aalborg, Denmark, August 2010a.
C. Lundquist, U. Orguner, and F. Gustafsson. Estimating polynomial structures
from radar data. In Proceedings of the International Conference on Informa-
tion Fusion, Edinburgh, UK, July 2010b.
C. Lundquist, K. Granström, and U. Orguner. Estimating the shape of targets with
a PHD filter. In Proceedings of the International Conference on Information
Fusion, Chicago, IL, USA, July 2011a.
C. Lundquist, L. Hammarstrand, and F. Gustafsson. Road intensity based map-
ping using radar measurements with a probability hypothesis density filter.
IEEE Transactions on Signal Processing, 59(4):1397–1408, April 2011b.
C. Lundquist, U. Orguner, and F. Gustafsson. Extended target tracking using
polynomials with applications to road-map estimation. IEEE Transactions on
Signal Processing, 59(1):15–26, January 2011c.
C. Lundquist, E. Özkan, and F. Gustafsson. Tire radii estimation using a marginal-
ized particle filter. IEEE Transactions on Intelligent Transportation Systems,
2011d. Submitted.
C. Lundquist, T. B. Schön, and F. Gustafsson. Situational awareness and road
prediction for trajectory control applications. In A. Eskandarian, editor, Hand-
book of Intelligent Vehicles, chapter 24. Springer, November 2011e.
C. Lundquist, M. Skoglund, K. Granström, and T. Glad. How peer-review affects
student learning. In Utvecklingskonferens för Sveriges ingenjörsutbildningar,
Norrköping, Sweden, November 2011f.
R. P. S. Mahler. Multitarget Bayes filtering via first-order multitarget moments.
IEEE Transactions on Aerospace and Electronic Systems, 39(4):1152–1178, Oc-
tober 2003.
R. P. S. Mahler. Statistics 101 for multisensor, multitarget data fusion. IEEE
Aerospace and Electronic Systems Magazine, 19(1):53–64, January 2004.
R. P. S. Mahler. Statistical Multisource-Multitarget Information Fusion. Artech
House, Boston, MA, USA, 2007a.
R. P. S. Mahler. PHD filters of higher order in target number. IEEE Transactions
on Aerospace and Electronic Systems, 43(4):1523–1543, October 2007b.
R. P. S. Mahler. PHD filters for nonstandard targets, I: Extended targets. In
Proceedings of the International Conference on Information Fusion, pages 915–
921, Seattle, WA, USA, July 2009a.
R. P. S. Mahler. Random set theory for multisource-multitarget information fu-
sion. In M. E. Liggins, D. L. Hall, and J. Llinas, editors, Handbook of multisen-
sor data fusion: theory and practice, chapter 16. CRC Press, Boca Raton, FL, 2
edition, November 2009b.
R. P. S. Mahler, B.-N. Vo, and B.-T. Vo. The forward-backward probability hy-
pothesis density smoother. In Proceedings of the International Conference on
Information Fusion, pages 1–8, Edinburgh, UK, July 2010.
S. Malinen, C. Lundquist, and W. Reinelt. Fault detection of a steering wheel
sensor signal in an active front steering system. In Preprints of the IFAC Sym-
posium on SAFEPROCESS, pages 547–552, Beijing, China, August 2006.
M. Mitschke and H. Wallentowitz. Dynamik der Kraftfahrzeuge. Springer, Berlin,
Heidelberg, 4 edition, 2004.
E. Nilsson, C. Lundquist, T. B. Schön, D. Forslund, and J. Roll. Vehicle motion
estimation using an infrared camera. In Proceedings of the World Congress of
the International Federation of Automatic Control, Milan, Italy, August 2011.
U. Orguner, C. Lundquist, and K. Granström. Extended target tracking with a
cardinalized probability hypothesis density filter. In Proceedings of the In-
ternational Conference on Information Fusion, pages 65–72, Chicago, IL, USA,
July 2011.
E. Özkan, C. Lundquist, and F. Gustafsson. A Bayesian approach to jointly esti-
mate tire radii and vehicle trajectory. In Proceedings of the IEEE Conference
on Intelligent Transportation Systems, Washington DC, USA, October 2011.
K. Panta, B.-N. Vo, S. Singh, and A. Doucet. Probability hypothesis density filter
versus multiple hypothesis tracking. In Signal Processing, Sensor Fusion, and
Target Recognition XIII, volume 5429, pages 284–295, Orlando, FL, USA, 2004.
SPIE.
R. Rajamani. Vehicle Dynamics and Control. Springer, Boston, MA, USA, 2006.
ISBN 978-0-387-28823-9.
G. Reimann and C. Lundquist. Method for operating electronically controlled
servo steering system of motor vehicle, involves determining steering wheel
angle as measure for desired steering handle angle by steering handle for steer-
ing wheels of motor vehicle. German Patent Application DE 102006053029
May 15, 2008, Priority date November 10, 2006.
W. Reinelt and C. Lundquist. Observer based sensor monitoring in an active front
steering system using explicit sensor failure modeling. In Proceedings of the
IFAC World Congress, Prague, Czech Republic, July 2005.
W. Reinelt and C. Lundquist. Mechatronische Lenksysteme: Modellbildung und
Funktionalität des Active Front Steering. In R. Isermann, editor, Fahrdy-
namik Regelung - Modellbildung, Fahrassistenzsysteme, Mechatronik, pages
213–236. Vieweg Verlag, September 2006a.
W. Reinelt and C. Lundquist. Controllability of active steering system hazards:
From standards to driving tests. In Juan R. Pimintel, editor, Safety Critical
Automotive Systems, pages 173–178. SAE International, 400 Commonwealth
Drive, Warrendale, PA, USA, August 2006b.
W. Reinelt and C. Lundquist. Method for assisting the driver of a motor vehicle
with a trailer when reversing. German Patent DE 10 2006 002 294 February 24,
2011, European Patent Application EP 1810913 July 25, 2007 and Japanese
Patent Application JP 2007191143 August 2, 2007, Priority date January 18,
2006.

W. Reinelt, W. Klier, G. Reimann, C. Lundquist, W. Schuster, and R. Großheim.
Active front steering for passenger cars: System modelling and functions.
In Proceedings of the IFAC Symposium on Advances in Automotive Control,
Salerno, Italy, April 2004.

W. Reinelt, C. Lundquist, and H. Johansson. On-line sensor monitoring in an
active front steering system using extended Kalman filtering. In Proceedings
of the SAE World Congress, SAE paper 2005-01-1271, Detroit, MI, USA, April
2005.

W. Reinelt, W. Schuster, R. Großheim, and C. Lundquist. Motor vehicle’s electron-
ically regulated servo steering system operating method, involves comparing
actual value of measured value with stored practical value of corresponding
measured value. German Patent DE 10 2006 040 443 January 27, 2011, Prior-
ity date August 29, 2006.

W. Reinelt, W. Schuster, R. Großheim, and C. Lundquist. Operating method for
electronic servo steering system of vehicle, involves presetting steering wheel
angle by steering mechanism as measure for desired wheel turning angle for
steering wheel of vehicle. German Patent Application DE 102006052092 May 8,
2008, Priority date November 4, 2006.

W. Reinelt, C. Lundquist, and S. Malinen. Automatic generation of a computer
program for monitoring a main program to provide operational safety. German
Patent Application DE 102005049657 April 19, 2007, Priority date October 18,
2005.

W. Reinelt, W. Schuster, R. Großheim, and C. Lundquist. Electronic servo steer-
ing system operating method for motor vehicle, involves recognizing track mis-
alignment of vehicle when forces differentiate around preset value from each
other at preset period of time in magnitude and/or direction. German Patent
DE 102006043069 December 3, 2009, Priority date September 14, 2006.

W. Reinelt, W. Schuster, R. Großheim, and C. Lundquist. Operating method for
electronic power-assisted steering system of vehicle, involves overlapping addi-
tional angle, which is disabled after re-start of utility function. German Patent
Application DE 102006041236 March 6, 2008, Priority date September 2, 2006a.

W. Reinelt, W. Schuster, R. Großheim, and C. Lundquist. Operating method for
electronic power-assisted steering system of vehicle, involves re-starting utility
function, and after re-start of utility function superimposition of additional an-
gle is unlatched. German Patent DE 102006041237 December 3, 2009, Priority
date September 2, 2006b.
B. Ristic and D. J. Salmond. A study of a nonlinear filtering problem for tracking
an extended target. In Proceedings of the International Conference on Infor-
mation Fusion, pages 503–509, Stockholm, Sweden, June 2004.
B. Ristic, S. Arulampalam, and N. Gordon. Beyond the Kalman Filter: Particle
filters for tracking applications. Artech House, London, UK, 2004.
X. Rong Li and V. P. Jilkov. Survey of maneuvering target tracking: Part III. Mea-
surement models. In Proceedings of Signal and Data Processing of Small Tar-
gets, volume 4473, pages 423–446, San Diego, CA, USA, 2001. SPIE.
X. Rong Li and V. P. Jilkov. Survey of maneuvering target tracking: Part I. Dy-
namic models. IEEE Transactions on Aerospace and Electronic Systems, 39(4):
1333–1364, October 2003.
M. Roth and F. Gustafsson. An efficient implementation of the second order ex-
tended Kalman filter. In Proceedings of the International Conference on Infor-
mation Fusion, pages 1–6, July 2011.
W. J. Rugh. Linear System Theory. Information and system sciences series. Pren-
tice Hall, Upper Saddle River, NJ, USA, 2 edition, 1996.
D. J. Salmond and M. C. Parr. Track maintenance using measurements of tar-
get extent. IEE Proceedings of Radar, Sonar and Navigation, 150(6):389–395,
December 2003.
S. F. Schmidt. Application of state-space methods to navigation problems. Ad-
vances in Control Systems, 3:293–340, 1966.
B. Schofield. Model-Based Vehicle Dynamics Control for Active Safety. PhD
thesis, Department of Automatic Control, Lund University, Sweden, September
2008.
T. B. Schön. Estimation of Nonlinear Dynamic Systems – Theory and Applica-
tions. PhD thesis No 998, Linköping Studies in Science and Technology, De-
partment of Electrical Engineering, Linköping University, Sweden, February
2006.
T. B. Schön and J. Roll. Ego-motion and indirect road geometry estimation using
night vision. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages
30–35, Xi’an, China, June 2009.
T. B. Schön, F. Gustafsson, and P.-J. Nordlund. Marginalized particle filters for
mixed linear/nonlinear state-space models. IEEE Transactions on Signal Pro-
cessing, 53(7):2279–2289, July 2005.
T. B. Schön, A. Eidehall, and F. Gustafsson. Lane departure detection for im-
proved road geometry estimation. In Proceedings of the IEEE Intelligent Vehi-
cles Symposium, pages 546–551, Tokyo, Japan, June 2006.
T. B. Schön, D. Törnqvist, and F. Gustafsson. Fast particle filters for multi-rate
sensors. In Proceedings of the 15th European Signal Processing Conference,
Poznań, Poland, September 2007.
H. Sidenbladh. Multi-target particle filtering for the probability hypothesis den-
sity. In Proceedings of the International Conference on Information Fusion,
volume 2, pages 800–806, Cairns, Australia, March 2003.
G. L. Smith, S. F. Schmidt, and L. A. McGee. Application of statistical filter the-
ory to the optimal estimation of position and velocity on board a circumlunar
vehicle. Technical Report TR R-135, NASA, 1962.
J. Sörstedt, L. Svensson, F. Sandblom, and L. Hammarstrand. A new vehicle mo-
tion model for improved predictions and situation assessment. IEEE Trans-
actions on Intelligent Transportation Systems, 2011. ISSN 1524-9050. doi:
10.1109/TITS.2011.2160342.
R. L. Streit. Multisensor multitarget intensity filter. In Proceedings of the In-
ternational Conference on Information Fusion, pages 1–8, Cologne, Germany,
July 2008.
R. L. Streit. Poisson Point Processes. Springer, 1 edition, 2010.
R. L. Streit and L. D. Stone. Bayes derivation of multitarget intensity filters. In
Proceedings of the International Conference on Information Fusion, pages 1–8,
Cologne, Germany, July 2008.
L. Svensson and J. Gunnarsson. A new motion model for tracking of vehicles. In
Proceedings of the 14th IFAC Symposium on System Identification, Newcastle,
Australia, 2006.
A. Swain and D. Clark. Extended object filtering using spatial independent clus-
ter processes. In Proceedings of the International Conference on Information
Fusion, pages 1–8, Edinburgh, UK, July 2010.
S. Thrun. Robotic mapping: A survey. In Exploring Artificial Intelligence in the
New Millenium. Morgan Kaufmann, 2002.
H. L. van Trees. Detection, Estimation, and Modulation Theory. John Wiley &
Sons, New York, NY, USA, 1968.
J. Vermaak, N. Ikoma, and S. J. Godsill. Sequential Monte Carlo framework for
extended object tracking. IEE Proceedings of Radar, Sonar and Navigation, 152
(5):353–363, October 2005.
B.-N. Vo and W.-K. Ma. The Gaussian mixture probability hypothesis density
filter. IEEE Transactions on Signal Processing, 54(11):4091–4104, November
2006.
B.-N. Vo, S. Singh, and A. Doucet. Random finite sets and sequential Monte Carlo
methods in multi-target tracking. In Proceedings of the International Radar
Conference, pages 486–491, Adelaide, Australia, September 2003.
B.-N. Vo, B.-T. Vo, and R. P. S. Mahler. A closed form solution to the probability
hypothesis density smoother. In Proceedings of the International Conference
on Information Fusion, pages 1–8, Edinburgh, UK, July 2010.
B.-N. Vo, B.-T. Vo, and R. P. S. Mahler. Closed form solutions to forward-backward
smoothing. IEEE Transactions on Signal Processing, 2011. doi: 10.1109/TSP.
2011.2168519.
B.-T. Vo, B.-N. Vo, and A. Cantoni. Analytic implementations of the cardinalized
probability hypothesis density filter. IEEE Transactions on Signal Processing,
55(7):3553–3567, July 2007.
N. Wahlström, J. Callmer, and F. Gustafsson. Single target tracking using vector
magnetometers. In IEEE Conference on Acoustics, Speech and Signal Process-
ing, pages 4332–4335, Prague, Czech Republic, May 2011.
A. Wald. Sequential tests of statistical hypotheses. The Annals of Mathematical
Statistics, 16(2):117–186, June 1945.
M. J. Waxman and O. E. Drummond. A bibliography of cluster (group) tracking.
In Proceedings of Signal and Data Processing of Small Targets, volume 5428,
pages 551–560, Orlando, FL, USA, April 2004. SPIE.
M. Wieneke and S. J. Davey. Histogram PMHT with target extent estimates based
on random matrices. In Proceedings of the International Conference on Infor-
mation Fusion, pages 1–8, Chicago, IL, USA, July 2011.
J. Y. Wong. Theory Of Ground Vehicles. John Wiley & Sons, New York, USA, 3
edition, 2001.
T. Zajic and R. P. S. Mahler. Particle-systems implementation of the PHD
multitarget-tracking filter. In Signal Processing, Sensor Fusion, and Target
Recognition XII, volume 5096, pages 291–299, Orlando, FL, USA, April 2003.
SPIE.
Z. Zomotor and U. Franke. Sensor fusion for improved vision based lane recogni-
tion and object tracking with range-finders. In Proceedings of the IEEE Confer-
ence on Intelligent Transportation Systems, pages 595–600, Boston, MA, USA,
November 1997.
Part II

Publications
Paper A
Situational Awareness and Road
Prediction for Trajectory Control
Applications

Authors: Christian Lundquist, Thomas B. Schön and Fredrik Gustafsson

Edited version of the paper:


C. Lundquist, T. B. Schön, and F. Gustafsson. Situational awareness and
road prediction for trajectory control applications. In A. Eskandarian,
editor, Handbook of Intelligent Vehicles, chapter 24. Springer, Novem-
ber 2011e.
Situational Awareness and Road Prediction
for Trajectory Control Applications

Christian Lundquist, Thomas B. Schön and Fredrik Gustafsson

Dept. of Electrical Engineering,
Linköping University,
SE–581 83 Linköping, Sweden
[email protected], [email protected], [email protected]

Abstract
This chapter is concerned with the problem of estimating a map of
the immediate surroundings of a vehicle as it moves. In order to com-
pute these map estimates sensor measurements from radars, lasers
and/or cameras are used together with the standard proprioceptive
sensors present in a car. Four different types of maps are discussed.
Feature based maps are represented by a set of salient features. Road
maps make use of the fact that roads are highly structured, since they
are built according to clearly specified road construction standards,
which allows relatively simple and powerful models of the road to
be employed. Location based maps, where occupancy grid maps belong,
are treated next, and finally intensity based maps, which can be viewed
as a continuous version of the location based maps. The aim is to provide a
self-contained presentation of how these maps can be built from mea-
surements. Real data from Swedish roads are used throughout the
chapter to illustrate the methods introduced.

1 Introduction
Most automotive original equipment manufacturers today offer longitudinal con-
trol systems, such as adaptive cruise control (acc) or collision mitigation systems.
Lateral control systems, such as lane keeping assistance (lka), emergency lane as-
sist (ela) (Eidehall et al., 2007) and curve speed warning, are currently developed
and released. These systems can roughly be split into safety applications, such
as rear end collision mitigation or blind spot detection, which aim to mitigate
vehicular collisions; and comfort applications, such as acc and lka, which aim at reducing the driver’s
work load. The overview article by Caveney (2010) describes the current devel-
opment of trajectory control systems. The requirements on the position accuracy
of the ego vehicle in relation to other vehicles, the road and the surrounding
environment increases with those control applications that are currently under
development and expected to be introduced to the market.

99
100 Paper A Situational Awareness and Road Prediction for Trajectory Control App.

The systems available or in development today are based on two basic tracking
and decision principles: longitudinal systems use a radar, possibly supported
by a camera, to track leading vehicles and they decide on braking warnings or
interventions. On the other hand, lateral systems use a camera to track the lane
markers and they decide on steering warnings or interventions. Future safety and
comfort functions need more sophisticated situational awareness and decision
functionality:
• A combination of lateral and longitudinal awareness will be needed, where
all lanes are monitored, all of their vehicles are tracked, and the road-side
conditions are modeled to allow for emergency maneuvers. The result is a
situational awareness map, which is the topic for this chapter.
• This will allow for more sophisticated decision functionality. First, the pos-
sible evasive driver maneuvers are computed, and only if the driver has no
or very little time for evasive actions, the system will intervene. Second,
more complex automatic evasive maneuvers can be planned using the situ-
ational awareness map, including consecutive lateral and braking actions.
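The intervention principle described above can be sketched as a simple time-budget test: the system acts only once the time the driver has left for an evasive maneuver falls below what a human needs to react and act. The function name and threshold values below are illustrative assumptions for the sake of the sketch, not taken from the text or from any production system.

```python
# Illustrative sketch (assumed values, not a production algorithm): intervene
# only when the remaining time for an evasive maneuver is shorter than the
# driver's assumed reaction time plus a small actuation margin.

DRIVER_REACTION_TIME = 1.2   # seconds, assumed typical driver reaction time
ACTUATION_MARGIN = 0.3       # seconds, assumed margin for the system to act

def should_intervene(time_to_last_evasive_action: float) -> bool:
    """True when the driver has too little time left to perform the
    evasive maneuver, so the system should intervene."""
    return time_to_last_evasive_action < DRIVER_REACTION_TIME + ACTUATION_MARGIN
```

With these assumed thresholds, a situation leaving 2.5 s for an evasive action is left to the driver, while one leaving 0.8 s triggers an intervention.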
It should be remarked that the accuracy of the navigation systems today and in
the near future, see Chapters “In-car Navigation Basics" and “The evolution of
in-car navigation systems", are not of much assistance for situational awareness.
The reason is that satellite based navigation gives an accuracy of 10-20 meters,
which is not sufficient for lateral awareness. Even in future systems, including
reference base stations, enabling meter accuracy, the standard road maps will
limit the performance since they are not of sufficient accuracy. Thus, two leaps in
development are needed before positioning information and standard maps can
be used to improve situational awareness maps. Another technical enabler is car
to car communication (c2c), which may improve tracking of other vehicles and
in the end change the transportation system as has already been done with the
transponder systems for aircraft and commercial surface ships. Still, there will al-
ways be vehicles and obstacles without functioning communication systems. The
need for accurate situation awareness and road prediction to be able to automat-
ically position the car in a lane and derive drivable trajectories will evolve and
remain important.
The different types of situation awareness maps used to represent the environ-
ment are introduced in Section 2. Details of these maps are presented in Sec-
tions 3 to 6. The chapter is concluded in Section 7.

2 Modeling the Environment with a Map


The transportation system may be described and represented by a number of vari-
ables. These variables include state variables describing the position, orientation,
velocity and size of the vehicles. Here one can distinguish between the own vehi-
cle, the so called ego vehicle, and the other vehicles, referred to as the targets.
The state variable of the ego vehicle at time k is denoted xE,k . The trajectory of the

ego vehicle is recorded in xE,1:k = {xE,1 , . . . , xE,k }, and it is assumed to be a priori


known in this work. This is a feasible assumption, since the absolute trajectory
in world coordinates and the relative position in the road network are separable
problems.

The state variable of the targets at time k is denoted xT,k . The road and the en-
vironment may be modeled by a map, which is represented by a set of variables
describing Nm landmarks in the environment according to
Mk = {m(1)k, m(2)k, . . . , m(Nm)k}.    (1)
According to Thrun et al. (2005), there exist primarily two types of indexing for
probabilistic maps. In a feature based map each m(n) specifies the properties and
location of one object, whereas in a location based map the index n corresponds
to a location and m(n) is the property of that specific coordinate. The occupancy
grid map is a classical location based representation of a map, where each cell
of the grid is assigned a binary occupancy value that specifies if the location n is
occupied (m(n) = 1) or not (m(n) = 0), see e.g., Elfes (1987); Moravec (1988).
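As a minimal sketch of this location based indexing, a binary occupancy grid along these lines can be set up in a few lines of Python (the area size, resolution and helper name below are illustrative assumptions, not part of the original formulation):

```python
import numpy as np

# Minimal binary occupancy grid: a 40 x 40 m area at 0.5 m resolution.
# The index n corresponds to a cell (location based indexing); the cell
# value m(n) is 1 if the location is occupied and 0 otherwise.
RESOLUTION = 0.5
GRID = np.zeros((80, 80), dtype=np.uint8)

def mark_occupied(grid, x, y, resolution=RESOLUTION):
    """Set the cell containing the point (x, y) [m] to occupied."""
    i = int(y / resolution)
    j = int(x / resolution)
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] = 1

mark_occupied(GRID, 3.2, 1.0)  # e.g. a detection at (3.2, 1.0) m
```

A full occupancy grid mapper would store log-odds per cell and update them with an inverse sensor model rather than writing hard 0/1 values.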

The ego vehicle perceives information about the other vehicles and the environment through its sensors. The sensors provide a set of noisy measurements
Zk = {z(1)k, z(2)k, . . . , z(Nz,k)k}    (2)
at each discrete time instant k = 1, . . . , K. Common sensors used for automotive
navigation and mapping measure either range and bearing angle, as for exam-
ple radar and laser, or bearing and elevation angles, as for the case of a camera.
Signal preprocessing is always included in automotive radar sensors, and the sensor provides a list of detected features, defined by the range r, range rate ṙ and bearing ψ. The preprocessing, the waveform design and the detection algorithms of the radar are well described by e.g., Rohling and Möller (2008); Rohling and Meinecke (2001). Laser sensors typically obtain one range measurement per beam; there exist both sensors which emit several beams at different angles and sensors which have a rotating beam deflection system. They all have in common that the angles at which they measure range are quantized, thus providing a list of ranges and bearings, of which only those below the maximum range should be considered. Another commonly used automotive sensor is
the camera. The camera measurements are quantized and the data is represented
in a pixel matrix as an image.

Note that the indexing of sensor data is analogous to the representation of maps.
Each range and bearing measurement z(n) from a radar or laser specifies the prop-
erties and location of one observation, i.e., it is a feature based measurement.
However, the indexing of camera measurements is location based, since the index
n corresponds to a pixel in the image and z(n) is the property of that specific
coordinate.

The aim of all stochastic mapping algorithms, independent of indexing, is to esti-



mate the posterior density of the map


p(Mk |Z1:k , xE,1:k ), (3)
given all the measurements Z1:k from time 1 to k and the trajectory of the ego
vehicle xE,1:k . The conditioning on xE,1:k is implicit in this chapter, since it is as-
sumed to be a priori known. To be able to estimate the map, a relation between
the map and the measurements must first be found and modeled. This model h is
referred to as the measurement equation, and for one combination of a measurement z(i)k and a map variable m(j)k it may be written according to

z(i)k = h(m(j)k) + ek,    (4)
where ek is the measurement noise. The primary aim in this chapter is to cre-
ate a momentary map of the environment currently surrounding the ego vehicle.
Hence, it is just the present map data that is recorded in the vehicle. As soon as a
part of the environment is sufficiently far from the ego vehicle the corresponding
map entries are deleted. Environmental models must be compact, so that they
can be transmitted to and used efficiently by other automotive systems, such as
path planners. The maps must be adapted to the type of environment they aim to
model. For this reason four different map representations, which are relevant for
modeling the environment surrounding a vehicle, are described in the following
sections.

Feature based map The map is represented by a number of salient features in


the scene. Feature representation and tracking as part of a map is described
in Section 3.

Road map This is a special case of the feature based map, where the map vari-
ables model the geometry of the road. Roads are highly structured; they
are built according to road construction standards and contain primarily,
straight lines, curves and clothoids. Maps of road lanes and edges are de-
scribed in Section 4.

Location based map One of the most well established location based maps is the
occupancy grid map, which is described in Section 5. The map is defined
over a continuous space, but discretized with a grid approximation.

Intensity based map The intensity density may be interpreted as the probability
that one object is located in an infinitesimal region of the state space. The
intensity based map is a continuous approximation of a location based map
and it is described in Section 6.

The estimated maps can be used to increase the localization accuracy of the ego
vehicle with respect to its local environment. Furthermore, the maps may be used
to derive a trajectory, which enables a collision free path of the vehicle.

3 Feature Based Map


Features corresponding to distinct objects in the physical world, such as tree
trunks, corners of buildings, lampposts and traffic signs are commonly denoted
landmarks. The procedure of extracting features reduces the computational com-
plexity of the system, as the features are on a more compact format than the
original measurement. The form of the measurement equations (4) depends on
the type of sensor used, and the signals measured by the sensor. In this section
we will briefly describe the use of features and the corresponding measurement
equations in both radar and laser sensors (Section 3.1) as well as cameras (Sec-
tion 3.2).

The feature based approach may together with existing road maps be used to sup-
plement the gps-based position of the vehicle. This approach is also commonly
referred to as visual odometry, see e.g., Nistér et al. (2006).

3.1 Radar and Laser


As mentioned in the introduction, radar and laser sensors measure at least range
and bearing of the landmark relative to the vehicle's local coordinate frame, i.e., the measurement vector is composed of

z(i) = [r(i) ψ(i)]T.    (5)
Notice that, for the sake of simplicity, the subscripts k specifying the time stamps of the quantities are dropped throughout this chapter. The assumption will be made that the measurements of the features are independent, i.e., the noise in each individual measurement z(i) is independent of the noise in the other measurements z(j), for i ≠ j. This assumption makes it possible to process one feature
at a time in the algorithms. Assume that the ego vehicle pose is defined by
xE = [xE yE ψE]T,    (6)
where xE , yE denote the horizontal position of the vehicle and ψE denotes the
heading angle of the vehicle. Furthermore, let us assume that one feature j in the
map is defined by its Cartesian coordinate,
m(j) = [x(j)m y(j)m]T.    (7)
The measurement model (4) is then written as
r(i) = sqrt((x(j)m − xE)^2 + (y(j)m − yE)^2) + er,
ψ(i) = arctan((y(j)m − yE)/(x(j)m − xE)) − ψE + eψ,    (8)

where er and eψ are noise terms. The ith measurement feature corresponds to


the jth map feature. The data association problem arises when the correspon-
dence between the measurement feature and the map feature cannot be uniquely
identified. A correspondence variable, which describes the relation between measurements and map features, is introduced. This variable is also estimated at


each time step k and there exists a number of different algorithms to do this, see
e.g., Blackman and Popoli (1999). There are quite a few different possibilities on
how to define features when radars and lasers are used.
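A sketch of the range and bearing measurement model (8) in Python (the function name is an illustrative choice, and atan2 is used in place of the arctan for quadrant safety):

```python
import math

def range_bearing(ego, landmark):
    """Predicted radar/laser measurement h(m) of a point landmark, cf. (8).

    ego      -- (x_E, y_E, psi_E): ego vehicle pose in world coordinates
    landmark -- (x_m, y_m): landmark position in world coordinates
    Returns (r, psi): range and bearing relative to the ego vehicle.
    """
    x_E, y_E, psi_E = ego
    x_m, y_m = landmark
    dx, dy = x_m - x_E, y_m - y_E
    r = math.hypot(dx, dy)
    psi = math.atan2(dy, dx) - psi_E
    return r, psi

# Ego vehicle at the origin heading along x, landmark at (10, 10) m:
r, psi = range_bearing((0.0, 0.0, 0.0), (10.0, 10.0))
```

In a tracking filter this function supplies the predicted measurement; the innovation r − r̂, ψ − ψ̂ then drives the state update.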

3.2 Cameras and Computer Vision


Features form the basis of many computer vision algorithms, especially when it comes to building maps. There exists a myriad of feature detectors, which extract edges, corners or other distinct patterns. Some of the most well known are the Harris corner detector (Harris and Stephens, 1988), SIFT (Lowe, 2004)
and MSER (Matas et al., 2004), see Figure 1 for an example, where the Harris
corner detector is used. For a more complete account of various features used,
see e.g., Szeliski (2010). Using features to build maps from camera images has
been studied for a long time and a good account of this is provided by Davison
et al. (2007).
A key component in building maps using features is a good mathematical de-
scription of how the features detected in the image plane are related to the cor-
responding positions in world coordinates. The distance (commonly referred to
as the depth) to a landmark cannot be determined from a single image and this
fact should be encoded by the mathematical parameterization of the landmark.
The so called inverse depth parameterization by Civera et al. (2008) provides an
elegant uncertainty description of the fact that the depth (i.e., distance) to the
landmark is unknown. Here, the landmark (lm) state vector is given by
m(j) = [c(j); ψ(j); φ(j); ρ(j)]
     = [camera position the first time the lm was seen;
        azimuth angle of the lm seen from c(j);
        elevation angle of the lm seen from c(j);
        inverse distance (depth) from c(j) to the lm],    (9a)

c(j) = [x(j) y(j) z(j)]T,    (9b)

where c(j) is the position of the camera expressed in world coordinates at the
time when landmark j was first seen, ψ (j) is the azimuth angle of the landmark
as seen from c(j) , relative to the world coordinate frame. The elevation angle of
the landmark as seen from c(j) , relative to world coordinate frame directions is
denoted φ(j) , and the inverse depth, which is the inverse of the distance, from c(j)
to the landmark is denoted ρ(j) .

The landmark state m(j) is a parametrization of the Cartesian position p(j) of


landmark j, see Figure 2. The relationship between the position of the landmark
and the inverse depth state representation is given by
p(j) = c(j) + (1/ρ(j)) q(j),   q(j) = [cos φ(j) cos ψ(j);  cos φ(j) sin ψ(j);  sin φ(j)].    (10)
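The mapping (10) from the inverse depth state to the Cartesian landmark position can be sketched as follows (names are illustrative):

```python
import numpy as np

def landmark_position(c, psi, phi, rho):
    """Cartesian landmark position p from the inverse depth state, cf. (10).

    c   -- camera position [x, y, z] when the landmark was first seen
    psi -- azimuth angle of the landmark as seen from c
    phi -- elevation angle of the landmark as seen from c
    rho -- inverse depth (1/distance) from c to the landmark
    """
    q = np.array([np.cos(phi) * np.cos(psi),
                  np.cos(phi) * np.sin(psi),
                  np.sin(phi)])
    return np.asarray(c, dtype=float) + q / rho

# A landmark straight ahead of the camera at depth 1/0.1 = 10 m:
p = landmark_position([0.0, 0.0, 0.0], psi=0.0, phi=0.0, rho=0.1)
```

The benefit of filtering in (c, ψ, φ, ρ) rather than directly in p is that an unknown depth maps to a nearly Gaussian uncertainty on ρ, even for distant features.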

Figure 1: Features detected using the Harris corner detector are shown in Figure (a). Figures (b) and (c) show the estimated positions of the landmarks in the x − y and x − z planes, respectively.

The measurement model (4) for the landmarks is given by


h(m(j)) = Pn(pC,(j)) = (1/xC,(j)p) [yC,(j)p  zC,(j)p]T,    (11)

where Pn (pC,(j) ) is used to denote the normalized pinhole projection and pC,(j) =

Figure 2: The inverse depth parameterization used for the landmarks. The
position of the landmark p is parameterized using the position c of the cam-
era the first time the feature was seen, the direction q(φ, ψ) and the inverse
depth ρ. The position of the camera at time step k is denoted c̄k .

[xC,(j)p yC,(j)p zC,(j)p]T denotes the position of feature j at time k in the camera
coordinate frame C. Note that before an image position can be used as a measurement together with the measurement equation (11), the image position is adjusted according to the camera specific parameters, such as focal length, pixel sizes, etc. The transformation between pixel coordinates [u v]T and normalized camera coordinates [y z]T, which is the kind of coordinates landmark measurements z(i) (see (11)) are given in, is

[y; z] = [(u − uic)/fu;  (v − vic)/fv],    (12)
where [uic vic]T denotes the image center, and fu and fv are the focal lengths
(given in pixels) in the lateral u-direction and the vertical v-direction, respec-
tively. The transformation between world and camera coordinates is given by
" #! ! !
C,(j) CE EW (j) x 1 (j) E
p =R R c − E + (j) q −c , (13a)
yE ρ
RCE = R(ψ C , φC , γ C ), (13b)
EW
R = R(ψE , φE , γE ), (13c)
where cE denotes the position of the camera expressed in the ego vehicle body
coordinate frame. The rotation matrix R(α, β, γ) transforms coordinates from
coordinate frame B to coordinate frame A, where the orientation of B relative to
A is ψ (yaw), φ (pitch) and γ (roll). Furthermore, ψ C , φC and γ C are the constant

yaw, pitch and roll angles of the camera, relative to the vehicle body coordinate
frame.

The landmarks are estimated recursively using e.g., a Kalman filter. An example
of estimated landmarks is shown in Figure 1. The estimated position p(j) for
seven landmarks is shown in the image plane, as well as in the world x − y and
x − z planes, where the ego vehicle is at the origin.
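The pixel normalization (12) is a one-liner; a minimal sketch with illustrative intrinsic parameters:

```python
def normalize_pixel(u, v, u_ic, v_ic, f_u, f_v):
    """Normalized camera coordinates (y, z) from pixel coordinates (u, v),
    following (12); (u_ic, v_ic) is the image center and f_u, f_v are the
    focal lengths in pixels."""
    return (u - u_ic) / f_u, (v - v_ic) / f_v

# A pixel 100 px right of and 50 px below an assumed image center (640, 480),
# with assumed focal lengths of 1000 px:
y, z = normalize_pixel(740, 530, 640, 480, 1000.0, 1000.0)
```

The normalized coordinates produced here are exactly the form in which the landmark measurements z(i) enter the measurement equation (11).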

4 Road Map
A road map describes the shape of the road. Roads are mainly modeled using polynomial functions which describe the lanes and the road edges. The advantage of road models is that they require little memory while still being very accurate, since they do not suffer from discretization problems. General road models
are presented in Section 4.1. Lane estimation using camera measurements is de-
scribed in Section 4.2 and finally road edge estimation based on feature measure-
ments is described in Section 4.3.

4.1 Road Model


The road, as a construction created by humans, possesses no dynamics; it is a
static time invariant object in the world coordinate frame. The building of roads
is subject to road construction standards, hence, the modeling of roads is geared
to these specifications. However, if the road is described in the ego vehicle’s coor-
dinate frame and the vehicle is moving along the road it is possible and indeed
useful to describe the characteristics of the road using time varying state vari-
ables.

A road consists of straight and curved segments with constant radius and of vary-
ing length. The sections are connected through transition curves, so that the
driver can use smooth and constant steering wheel movements instead of step-
wise changes when passing through road segments. More specifically, this means
that a transition curve is formed as a clothoid, whose curvature c changes linearly
with its curve length xc according to
c(xc ) = c0 + c1 · xc . (14)
Note that the curvature c is the inverse of the radius. Now, suppose xc is fixed
to the ego vehicle, i.e., xc = 0 at the position of the ego vehicle. When driving
along the road and passing through different road segments c0 and c1 will not be
constant, but rather time varying state variables
" # " #
c0 curvature at the ego vehicle
m= = . (15)
c1 curvature derivative
In Section 3 the map features were expressed in a fixed world coordinate frame.
However, note that in this section the road map is expressed as seen from the mov-
ing ego vehicle. Using (14), a change in curvature at the position of the vehicle is

given by
dc(xc)/dt |xc=0 = ċ0 = (dc0/dxc) · (dxc/dt) = c1 · v,    (16)
where v is the ego vehicle’s velocity. Furthermore, the process model is given by
" # " #" # " #
ċ0 0 vx c0 0
= + . (17)
ċ1 0 0 c1 wc1
This model is referred to as the simple clothoid model and it is driven by the
process noise wc1 . Note that the road is modeled in a road aligned coordinate
frame, with the components (xc , yc ), and the origin at the position of the ego
vehicle. There are several advantages of using road aligned coordinate frames,
especially when it comes to the process models of the other vehicles on the same
road, these models are greatly simplified in road aligned coordinates. However,
the flexibility of the process model is reduced and basic dynamic relations such
as Newton’s and Euler’s laws cannot be directly applied. The road model (14) is
transformed into Cartesian coordinates (x, y) using
x(xc) = ∫0^xc cos(χ(x)) dx ≈ xc,    (18a)

y(xc) = ∫0^xc sin(χ(x)) dx ≈ (c0/2) xc^2 + (c1/6) xc^3,    (18b)

where the heading angle χ is defined as

χ(x) = ∫0^x c(λ) dλ = c0 x + (c1/2) x^2.    (18c)
The origin of the two frames is fixed to the ego vehicle, hence, integration con-
stants (x0 , y0 ) are omitted.
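The approximation (18) gives a direct way to predict the lateral offset of the road ahead of the vehicle; a minimal sketch (names and numbers are illustrative):

```python
def clothoid_xy(xc, c0, c1):
    """Approximate Cartesian coordinates (18) of a clothoid point at arc
    length xc, given the curvature c0 and curvature derivative c1 at the
    ego vehicle (valid for small heading angles)."""
    x = xc
    y = c0 / 2.0 * xc**2 + c1 / 6.0 * xc**3
    return x, y

# A constant curve with 500 m radius (curvature 1/500 1/m, c1 = 0):
x, y = clothoid_xy(50.0, c0=1.0 / 500.0, c1=0.0)
# lateral offset at 50 m look ahead: 50^2 / (2 * 500) = 2.5 m
```

This is the same small-angle approximation used in (18); for sharp curves or long look ahead distances the integrals should be evaluated numerically instead.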

A problem appears when two or more clothoid segments, with different param-
eters c0 and c1 , are observed in the same camera view. The parameter c0 will
change continuously during driving, whereas c1 will be constant in each segment
and change stepwise at the segment transition. This leads to a dirac impulse in
ċ1 at the transition. The problem can be solved by assuming a high process noise
wc1 in (17), but this leads to less precise estimates of the state variables when no
segment transitions occur in the camera view. To solve this problem an averag-
ing curvature model was proposed by Dickmanns (1988), which is perhaps best
described with an example. Assume that the ego vehicle is driving on a straight
road (i.e., c0 = c1 = 0) and that the look ahead distance of the camera is x̄c . A new
segment begins at the position x0c < x̄c , which means that there is a step in c1 , and
c0 is ramped up, see Figure 3. The penetration into the next segment is lc = x̄c − x0c .
The idea of this model, referred to as averaging or spread-out dynamic curvature
model, with the new state variables c0m and c1m, is that it generates the true


Figure 3: A straight and a curved road segment are modeled with the aver-
aging road model. The two upper plots show the parameters c1 and c0 of
the real road, the bottom plot shows the real and the modeled roads in a
Cartesian coordinate frame.

lateral offset y(x̄c ) at the look ahead distance x̄c , i.e.,


yreal (x̄c ) = ymodel (x̄c ), (19)
but it is continuously spread out in the range (0, x̄c ). The lateral offset of the real
road as a function of the penetration lc , for 0 ≤ lc ≤ x̄c , is
yreal(lc) = (c1/6) lc^3,    (20)
since the first segment is straight. The lateral offset of the averaging model as a
function of the penetration lc is
ymodel(lc) = (c0m(lc)/2) x̄c^2 + (c1m(lc)/6) x̄c^3,    (21)
at the look ahead distance x̄c . The equation
c1 lc^3/x̄c^2 = 3 c0m(lc) + c1m(lc) x̄c,    (22)

is obtained by inserting (20) and (21) into (19). By differentiating (22) with respect to lc and using the relations dc1/dlc = 0, dc0m(lc)/dlc = c1m(lc) and d(·)/dlc = d(·)/dt · dt/dlc, the following equation is obtained

ċ1m = (3v/x̄c) (c1 (lc/x̄c)^2 − c1m),    (23)
for lc < x̄c . Since (lc / x̄c )2 is unknown it is usually set to 1 (Dickmanns, 2007),
which finally yields
ċ1m = (3v/x̄c) (c1 − c1m).    (24)
The state variable vector of the averaging model is defined as
m = [c0m; c1m; c1] = [curvature at the ego vehicle; averaged curvature derivative; curvature derivative of the foremost segment],    (25)
and the process model is given by augmenting the simple clothoid model (17)
with (24) according to
  
[ċ0m; ċ1m; ċ1] = [0 v 0; 0 −3v/x̄c 3v/x̄c; 0 0 0] [c0m; c1m; c1] + [0; 0; wc1].    (26)

The model is driven by the process noise wc1 , which also influences the other
states. The averaging model is well described in the recent textbook Dickmanns
(2007) and some early results using the model are presented by e.g., Dickmanns
and Mysliwetz (1992).
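As an illustration, a simple Euler discretization of the continuous-time averaging model (26) can be sketched as follows (the step length, velocity and look ahead distance are illustrative assumptions; a filter implementation would typically use an exact zero-order-hold discretization instead):

```python
import numpy as np

def averaging_model_step(m, v, x_bar, w_c1, dt):
    """One Euler step of the averaging curvature model (26).

    m     -- state [c0m, c1m, c1]
    v     -- ego vehicle velocity [m/s]
    x_bar -- look ahead distance of the camera [m]
    w_c1  -- process noise sample on c1
    dt    -- step length [s]
    """
    A = np.array([[0.0, v,                0.0],
                  [0.0, -3.0 * v / x_bar, 3.0 * v / x_bar],
                  [0.0, 0.0,              0.0]])
    w = np.array([0.0, 0.0, w_c1])
    return m + dt * (A @ m + w)

# Straight road with a curvature derivative step c1 ahead of the vehicle;
# c1m is pulled toward c1 with time constant x_bar / (3 v):
m = np.array([0.0, 0.0, 1e-4])
m = averaging_model_step(m, v=25.0, x_bar=60.0, w_c1=0.0, dt=0.05)
```

The smoothing effect is visible in the state matrix: c1m relaxes toward c1 instead of jumping, which is exactly the spread-out behavior motivated above.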

4.2 Mapping of the Road Lanes


The problem of mapping road lanes, or lane estimation as it is often called, is a curve estimation problem. The task is to obtain the best possible estimate of the curve describing the lane by exploiting the measurements provided by the onboard sensors. The most important sensors here are exteroceptive sensors, for example cameras and lasers. Currently the camera is the most com-
monly used sensor for the lane estimation problem and in this section we will
show how camera images can be used to obtain lane estimates.
The lane estimation problem is by no means a new problem; it has been studied
for more than 25 years, see e.g., Waxman et al. (1987); Dickmanns and Mysliwetz
(1992) for some early work and Wang et al. (2008) for more recent contributions.
A complete overview of what has been done on this problem is available in the
survey paper McCall and Trivedi (2006). In this section the problem is broken
down into its constituents and one way of solving the lane estimation problem
when a camera is used as a sensor is shown.
The lane estimation problem can be separated into two subproblems, commonly
referred to as the lane detection problem and the lane tracking problem. As the
name reveals, the lane detection problem deals with detecting lanes in an image.

The lane tracking problem then makes use of the detected lanes together with
information about the temporal and the spatial dependencies over time in order
to compute lane estimates. These dependencies are mathematically described us-
ing a road model. Traditionally, lane tracking is done using an extended Kalman
filter, see e.g., Dickmanns and Mysliwetz (1992); Guiducci (1999). There are also
interesting approaches based on the particle filter (pf) by Gordon et al. (1993)
available, see e.g., Zhou et al. (2006); Wang et al. (2008); Kim (2008).

The lane is here modeled in the image plane (cf. Section 3.2) as a linear function
close to the ego vehicle and as a quadratic function far away, i.e.,

lθ(v) = a + b(v − vs),  for v > vs,
lθ(v) = a + b(v − vs) + c(v − vs)^2,  for v ≤ vs,    (27a)

where vs denotes the (known) vertical separation in pixels between the linear and
the quadratic model (illustrated by the horizontal line in Figure 4) and subindex
θ is used to denote the dependence on the parameters
θ = [a b c]T,    (27b)

which are to be estimated. These parameters all have geometrical interpretations
in terms of offset (a), local orientation (b) and curvature (c) of the lane in the im-
age plane. The lane estimates (here, the estimates of the parameters θ in (27))
carry important information about the states in the road model introduced in
Section 4.1, which are expressed in world coordinates (in contrast to pixel coordi-
nates u, v). These road model states are typically what we are interested in and
we will return to this important connection at the end of this section.
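The piecewise model (27) evaluates as follows (a minimal sketch; the parameter values and the horizon position are illustrative):

```python
def lane_model(v, a, b, c, v_s):
    """Evaluate the image plane lane model (27): linear close to the ego
    vehicle (v > v_s) and quadratic far away (v <= v_s)."""
    l = a + b * (v - v_s)
    if v <= v_s:
        l += c * (v - v_s)**2
    return l

# With the linear/quadratic separation at v_s = 240 (an assumed value):
u_near = lane_model(400, a=320.0, b=0.5, c=1e-3, v_s=240)  # linear part
u_far = lane_model(200, a=320.0, b=0.5, c=1e-3, v_s=240)   # quadratic part
```

Note that v grows downward in the image, so small v corresponds to the far field where the curvature term c becomes active.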

Given the fact that the problem of lane detection has been studied for more than
25 years, there are many ideas on how to solve this problem available. Rather than
trying to give a complete account of all the different methods available, we will
here be very specific and explain one way in which lanes can be detected and
show some results on real sequences. The solution presented here is very much
along the lines of Jung and Kelber (2005); Lee (2002).

The initial lane detection is performed using a linear lane model, which is found
from the image using a combination of the edge distribution function (EDF) (Lee,
2002) and the Hough transform (Hough, 1962; Duda and Hart, 1972). The EDF
is defined as the gradient orientation
ϕ(u, v) = arctan(Du/Dv),    (28)
where Du and Dv are approximations of the gradient function
∇I(u, v) = [∂I/∂u  ∂I/∂v]T ≈ [Du  Dv]T    (29)

for the gray scale image I(u, v). The two largest peaks (αl, αr) of ϕ(u, v) provide
the most probable orientations of the lanes in the image. This is used to form an

edge image g(u, v) as



g(u, v) = |∇I(u, v)| ≈ |Du| + |Dv|,  if |ϕ(u, v) − αl| < Tα or |ϕ(u, v) − αr| < Tα,
g(u, v) = 0,  otherwise,    (30)
where Tα is a threshold, here Tα = 2◦ . Applying the Hough transform to the
edge image g(u, v) provides two initial linear models l(v) = a + bv, one for the
left lane markings and one for the right lane markings. These models are used
to form a region which will serve as the search region in the subsequent image.
This region, which we refer to as the lane boundary region of interest (LBROI), is
simply defined by extending the linear model w pixels to the right and w pixels
to the left (here w = 10).
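The EDF computation (28)-(29) can be sketched with standard numerical gradients (a minimal illustration; arctan2 replaces the arctan of (28) to keep the full angle range, and a real implementation would additionally histogram ϕ over the edge pixels to find the peaks αl and αr):

```python
import numpy as np

def edf_orientation(I):
    """Gradient orientation (28) for a gray scale image I, with the image
    gradients (29) approximated by central differences."""
    Dv, Du = np.gradient(I.astype(float))  # derivatives along rows (v) and columns (u)
    return np.arctan2(Du, Dv)

# A diagonal intensity ramp I(v, u) = v + u has a constant gradient
# orientation of 45 degrees everywhere:
I = np.add.outer(np.arange(8.0), np.arange(8.0))
phi = edf_orientation(I)
```

On a real image the histogram of phi, weighted by gradient magnitude, exhibits the two dominant lane orientations as its largest peaks.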

Given that an initial LBROI is found, the task is now to make use of this informa-
tion in order to compute estimates of the parameters in the lane model (27),
θ = [(θl)T (θr)T]T,  where θl = [al bl cl]T,  θr = [ar br cr]T,    (31)
where superscript l and r have been used to indicate the left lane marking and the
right lane marking, respectively. Estimates of these parameters θ are obtained by
solving a constrained weighted least squares problem for each image. The cost
function is given by
V(θ) = Σ_{i=1}^{N} ( Mil (uil − lθl(vil))^2 + Mir (uir − lθr(vir))^2 ),    (32)
where N denotes the number of relevant pixels in the lateral u direction, l denotes
the lane model given in (27) and Mi denotes the magnitude in the thresholded
edge-image g2 (u, v), Mi = g2 (ui , vi ), defined as

g2(u, v) = g(u, v),  if g(u, v) ≥ (1/2) Mmean,
g2(u, v) = 0,  otherwise,    (33)
where Mmean denotes the mean magnitude of g(u, v).

Constraints are introduced in order to account for the fact that the right lane and
the left lane are related to each other. This is modelled according to the following
linear (in θ) inequality constraint
ar − al + (br − bl )(v1 − vm ) + (cr − cl )(v1 − vm )2 ≤ δ, (34)
which states that the left and the right lanes cannot be more than δ pixels apart
furthest away (at v1 ) from the host vehicle. In other words, (34) encodes the fact
that the left and the right lanes must have similar geometry in the sense that the
quadratic parts in (27) are strongly related.

From (32)–(34) it is now clear that lane estimation boils down to a curve esti-
mation problem, which here is quadratic in the unknown parameters θ. More
specifically, inserting the lane model (27) into the cost function (32) and writing the problem in matrix form results in a constrained weighted least squares

Figure 4: Lane estimation results (in red) overlaid onto the camera image, at time 5 s in (a) and at time 19 s in (b). The lane model (27) is clearly illustrated: the model is linear for v > vs and quadratic for v ≤ vs. The method assumes that the road surface is flat, and when this assumption is true the results are good, see Figure (a). However, when this assumption does not hold the estimates are not that good on a longer horizon, see Figure (b).

problem on the form


min_θ (1/2) θT H θ + fT θ
s.t. Lθ ≤ δ.    (35)
This is a quadratic program (QP) implying that a global minimum will be found
and there are very efficient solvers available for this type of problem. Here, we
have made use of a dual active set method1 according to Gill et al. (1991). An
illustration of the lane estimation results is provided in Figure 4. The estimate of
θ is then used to form the LBROI for the new image, simply as the region defined by
lθ (v) ± w for each lane.

1 The QP code was provided by Dr. Adrian Wills at the University of Newcastle, Australia, see http://sigpromu.org/quadprog. This code implements the method described by Goldfarb and Idnani (1983); Powell (1985).
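To make the construction of (32) and (35) concrete, the following sketch fits the lane model (27) to one lane marking by weighted least squares (the data and function name are illustrative; the coupling constraint (34) between the two lanes is omitted here and would be enforced by a QP solver such as the dual active set method mentioned above):

```python
import numpy as np

def fit_lane(u, v, M, v_s):
    """Weighted least squares fit of the lane model (27) to pixel
    measurements (u_i, v_i) with edge magnitude weights M_i, i.e. it
    minimizes sum_i M_i (u_i - l_theta(v_i))^2 over theta = (a, b, c)."""
    d = v - v_s
    # Regressor rows [1, (v - v_s), (v - v_s)^2]; the quadratic term is
    # only active in the far field v <= v_s, cf. (27).
    H = np.column_stack([np.ones_like(d), d, np.where(v <= v_s, d**2, 0.0)])
    w = np.sqrt(M)
    theta, *_ = np.linalg.lstsq(H * w[:, None], u * w, rcond=None)
    return theta

# Noise free pixels generated from a = 300, b = 0.4, c = 0.001 (illustrative):
v = np.array([200.0, 220.0, 300.0, 320.0, 340.0, 360.0])
v_s = 240.0
u = 300.0 + 0.4 * (v - v_s) + 0.001 * np.where(v <= v_s, (v - v_s)**2, 0.0)
theta = fit_lane(u, v, M=np.ones_like(v), v_s=v_s)
```

Stacking the left and right lane regressors and adding the linear inequality (34) turns this into exactly the QP (35).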

The lane estimates that are now obtained as the solution to (35) can be expressed
in world coordinates, seen from the ego vehicle, using geometrical transformations along the lines of what has already been described in Section 3. These transformations are discussed in detail in Guiducci (2000). Once the lane estimates are
available in the world coordinates they can be used as camera measurements in a
sensor fusion framework to make a very important contribution to the estimate of
map variables m (i.e., (15) or (24)) in the road model (perhaps most importantly
the curvature c0 and the curvature derivative c1 ) as it is derived in Section 4.1.

4.3 Mapping of the Road Edges


Feature based measurements of landmarks along the road may be used to map
the road edges. This section describes a method to track line shaped objects, such as guardrails, using point measurements from radar, laser or extracted camera features. Tracking point objects was covered in Section 3 and is not repeated
here. The line shaped and curved guardrails are described using the polynomial
road model (18) and tracked as extended targets in a Cartesian frame. However,
to allow a more general treatment of the problem in this section the extended
targets are modeled using nth order polynomials given as
y = a0 + a1 x + a2 x^2 + . . . + an x^n,    (36)

in the range [xstart, xend], where ma ≜ [a0 a1 · · · an]T are the polynomial coefficients and [x y]T are planar Cartesian coordinates. Note that the coordinate y is
a function of x and that the direction of the coordinate frame is chosen dependent
on the application in mind. The state vector of a map object j is defined as
m(j) ≜ [(mja)T  xjstart  xjend]T.    (37)
The map M is modeled by a set of such polynomial shapes according to (1).
Suppose the 2-dimensional noisy feature based sensor measurements are given
in batches of Cartesian x and y coordinates as follows

zk ≜ { [x(i) y(i)]kT }_{i=1}^{Nz,k},    (38)

for discrete time instants k = 1, . . . , K. In many cases in reality (e.g., radar, laser
and stereo vision, cf. (5)) and in the practical application considered in this section,
the sensor provides range r and azimuth angle ψ given as,

z̄k ≜ { [r(i) ψ(i)]kT }_{i=1}^{Nz,k}.    (39)

In such a case some suitable standard polar to Cartesian conversion algorithm is


used to convert these measurements into the form (38).
The state model considered in this section is described, in general, by the state
space equations
mk+1 = f (mk , uk ) + wk , (40a)

yk = h(mk , uk ) + ek , (40b)
where m, u and y denote the state, the input signal, and the output signal, while
w ∼ N (0, Q) and e ∼ N (0, R) are the process and measurement noise, respec-
tively. The use of an input signal u is important in this framework. For the sake
of simplicity, the tracked objects are assumed stationary, resulting in very simple
motion models (40a).

A polynomial is generally difficult to handle in a filter, since the noisy measurements are distributed arbitrarily along the polynomial. In this respect, the mea-
surement models considered contain parts of the actual measurement vector as
parameters. The methodology takes into account the errors caused by using the
actual noisy measurements as model parameters. This scheme is an example of
the so called “errors in variables” (eiv) framework, see e.g., Söderström (2007);
Diversi et al. (2005); Björck (1996).

The general convention in modeling is to make the definitions

    y ≜ z,    u ≜ ∅,    (41)
where ∅ denotes the empty set, meaning that there is no input. Notice that the
subscript k, specifying the time stamps of the quantities, is omitted for the sake
of simplicity. In this setting, it is extremely difficult, if not impossible, to find
a measurement model connecting the outputs y to the states m_a in the form of
(40b). Therefore, other selections for y and u need to be used. Here, the selection

    y ≜ y,    u ≜ x    (42)
is made. Although being quite a simple selection, this choice results in a rather
convenient linear measurement model in the state partition ma ,
y = Ha (u)ma + e, (43)
where H_a(u) = [1  x  x^2  ···  x^n]. It is the selection in (42) rather than (41)
that allows the use of the standard methods in target tracking with clever modifications. Such a selection as (42) is also in accordance with the eiv representations
where measurement noise is present in both the outputs and inputs, i.e., the
observation z can be partitioned according to

    z = [u  y]^T.    (44)
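As a concrete illustration of the selection (42) and the linear model (43), the following sketch builds the regressor row H_a(u) from the noisy x-coordinate of one measurement; the polynomial order and all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def regressor(x, n):
    """H_a(u) = [1, x, x^2, ..., x^n], built from the noisy input u = x."""
    return np.array([x**i for i in range(n + 1)])

# Second order polynomial y = a0 + a1*x + a2*x^2 (n = 2, assumed for illustration).
m_a = np.array([1.0, 0.1, -0.01])   # state partition: polynomial coefficients
x, y = 20.0, -0.8                   # one Cartesian measurement z = [x, y]

H_a = regressor(x, n=2)
y_pred = H_a @ m_a                  # model output; y - y_pred is the KF innovation
```

Since (43) is linear in m_a, this regressor row can be plugged directly into a standard Kalman filter measurement update.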
The measurement vector given in (38) is expressed in terms of a noise free vari-
able z0 which is corrupted by additive measurement noise z̃ according to
z = z0 + z̃, z̃ ∼ N (0, Σc ), (45)
where the covariance Σc can be decomposed as
    Σ_c = [ Σ_x    Σ_xy
            Σ_xy   Σ_y ].    (46)
Note that, in the case the sensor provides measurements only in polar coordinates
116 Paper A Situational Awareness and Road Prediction for Trajectory Control App.

(39), one has to convert both the measurement z̄ and the measurement noise co-
variance
    Σ_p = diag(σ_r^2, σ_ψ^2)    (47)
into Cartesian coordinates. This is a rather standard procedure. Note that, in
such a case, the resulting Cartesian measurement covariance Σc is, in general,
not necessarily diagonal and hence Σxy of (46) might be non-zero.
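The linearized (Jacobian based) conversion of a polar measurement and its covariance (47) into Cartesian coordinates can be sketched as follows; the numerical values are assumptions for illustration. Note how the resulting Σ_c acquires a non-zero off-diagonal term Σ_xy.

```python
import numpy as np

def polar_to_cartesian(r, psi, var_r, var_psi):
    """Convert a (range, azimuth) measurement and its diagonal polar
    covariance into Cartesian coordinates via the Jacobian of the mapping."""
    z = np.array([r * np.cos(psi), r * np.sin(psi)])
    J = np.array([[np.cos(psi), -r * np.sin(psi)],
                  [np.sin(psi),  r * np.cos(psi)]])
    Sigma_p = np.diag([var_r, var_psi])
    Sigma_c = J @ Sigma_p @ J.T   # in general not diagonal: Sigma_xy != 0
    return z, Sigma_c

z, Sigma_c = polar_to_cartesian(r=40.0, psi=0.3, var_r=0.25, var_psi=1e-4)
```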
Since the model (43) is linear, the Kalman filter measurement update formulas
can be used to incorporate the information in z into the extended source state
m_a. An important question in this regard is what the covariance of the measurement noise term e in (43) should be.
Neglecting the errors in the model parameters Ha (u) can cause overconfidence
in the estimates of recursive filters and can actually make data association dif-
ficult in tracking applications (by causing too small gates). A simple methodol-
ogy is used to take the uncertainties in Ha (u) into account in line with the eiv
framework. Assuming that the elements of the noise free quantity z0 satisfy the
polynomial equation exactly according to
    y − ỹ = H_a(u − ũ) m_a,    (48a)
    y − ỹ = [1  (x − x̃)  (x − x̃)^2  ···  (x − x̃)^n] m_a,    (48b)

which can be approximated using a first order Taylor expansion resulting in

    y ≈ H_a(u) m_a − H̃_a(u) x̃ m_a + ỹ    (49a)
      = H_a(u) m_a + h̃_a(m_a, u) [x̃  ỹ]^T,    (49b)

with

    H_a(u) = [1  x  x^2  ···  x^n],    (49c)
    H̃_a(u) = [0  1  2x  ···  n x^{n−1}],    (49d)
    h̃_a(m_a, u) = [−a_1 − 2a_2 x − ··· − n a_n x^{n−1}    1].    (49e)
Hence, the noise term e of (43) is given by

    e = ỹ − H̃_a x̃ m_a = h̃_a(m_a, u) [x̃  ỹ]^T    (50)

and its covariance is given by

    Σ_a = E(e e^T) = Σ_y + H̃_a m_a Σ_x m_a^T H̃_a^T − 2 H̃_a m_a Σ_xy
        = h̃_a(m_a, u) Σ_c h̃_a^T(m_a, u).    (51)
Note that the eiv covariance Σa depends on the state variable ma , which is sub-
stituted by its last estimate in recursive estimation.
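The eiv noise variance (51) can be computed directly from the Cartesian measurement covariance and the latest coefficient estimate. The sketch below uses assumed numbers; the row h̃ is formed exactly as in (49e).

```python
import numpy as np

def eiv_variance(m_a, x, Sigma_c):
    """Sigma_a = h~ Sigma_c h~^T of (51), with
    h~(m_a, u) = [-a1 - 2*a2*x - ... - n*an*x^(n-1), 1]."""
    n = len(m_a) - 1
    H_tilde = np.array([i * x**(i - 1) if i > 0 else 0.0
                        for i in range(n + 1)])        # H~a(u) of (49d)
    h = np.array([-(H_tilde @ m_a), 1.0])
    return h @ Sigma_c @ h

Sigma_c = np.array([[0.2, 0.05],
                    [0.05, 0.3]])                      # assumed covariance (46)
Sigma_a = eiv_variance(np.array([1.0, 0.1, -0.01]), x=20.0, Sigma_c=Sigma_c)
# Sigma_a exceeds Sigma_y alone here, reflecting the extra input uncertainty
```

Neglecting this inflation (i.e., using only Σ_y) is exactly the overconfidence the eiv treatment avoids.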
Up to this point, only the relation of the observation z to the state component ma
has been considered. It remains to discuss the relation between the observation

and the start point x_start and the end point x_end of the polynomial. The measurement
information must only be used to update these components of the state if the new
observations of the extended source lie outside the range of the polynomial. The
following (measurement dependent) measurement matrix can be defined for this
purpose:
    H_se = [1  0]   if x ≤ x_{start,k|k−1},
           [0  1]   if x ≥ x_{end,k|k−1},    (52)
           [0  0]   otherwise.
The complete measurement model of an extended object can now be summarized by

    z = H m + e,    e ∼ N(0, R(m)),    (53a)

with

    H = [ 0_{1×(n+1)}   H_se
          H_a           0_{1×2} ],    (53b)

    R(m) = blkdiag(Σ_x, Σ_a(m)).    (53c)
Put in words, if the x-component of a new measurement is closer to the sensor
than the start point of the line x_start, it is considered in the measurement equation (52) and can be used to update this state variable. Analogously, if a new measurement is more distant than the end point of the line x_end, it is considered in (52).
Further, if a measurement lies between the start and end point of the line, the
corresponding block of the measurement model (52) is zero and there is no relation between this measurement and the state variables x_start or x_end.
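The complete model (52)-(53) for a single measurement can be assembled as below; the segment end points, coefficients, and noise variances are assumed values for illustration.

```python
import numpy as np

def measurement_model(x, m_a, x_start, x_end, var_x, var_a):
    """Assemble H of (53b) and R of (53c) for one measurement z = [x, y]^T.
    The selector row H_se of (52) only couples to x_start / x_end when the
    measurement falls outside the current extent of the line."""
    n = len(m_a) - 1
    H_a = np.array([x**i for i in range(n + 1)])
    if x <= x_start:
        H_se = np.array([1.0, 0.0])
    elif x >= x_end:
        H_se = np.array([0.0, 1.0])
    else:
        H_se = np.array([0.0, 0.0])
    H = np.vstack([np.concatenate([np.zeros(n + 1), H_se]),
                   np.concatenate([H_a, np.zeros(2)])])
    R = np.diag([var_x, var_a])   # blkdiag(Sigma_x, Sigma_a) for scalar blocks
    return H, R

# A measurement beyond the current end point extends the line.
H, R = measurement_model(x=25.0, m_a=np.array([1.0, 0.1, -0.01]),
                         x_start=5.0, x_end=20.0, var_x=0.2, var_a=0.35)
```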

Any model, e.g., the standard constant velocity or the coordinated turn model,
may be used for the targets. For simplicity it is assumed in this contribution that
the targets are stationary, thus the process model of the form (40a) is linear and
may be written

    m_{k+1} = F m_k + w_k.    (54)
To increase the flexibility of the extended object an assumption about the dynamic behavior of its size is made. The size of the extended object is modeled to
shrink slightly at each time step; a small factor 0 < λ ≪ 1 moves each end point
toward the other, so that the segment length is multiplied by 1 − 2λ, according to

    x_{start,k+1} = x_{start,k} + λ(x_{end,k} − x_{start,k}),    (55a)
    x_{end,k+1} = x_{end,k} − λ(x_{end,k} − x_{start,k}),    (55b)
leading to the following process model for the polynomial

    F = [ I_{(n+1)×(n+1)}   0_{(n+1)×2}
          0_{2×(n+1)}       1−λ    λ
                            λ      1−λ ].    (56)

This shrinking behavior for the polynomials allows for automatic adjustment of
the start and end points of the polynomials according to the incoming measurements.
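The prediction (54)-(56) can be sketched as follows; the order n and λ are assumed values. Applying F leaves the coefficients unchanged and multiplies the segment length by 1 − 2λ.

```python
import numpy as np

def process_matrix(n, lam):
    """F of (56): identity for the n+1 polynomial coefficients and a
    symmetric 2x2 shrink block acting on [x_start, x_end]."""
    F = np.eye(n + 3)
    F[n + 1:, n + 1:] = np.array([[1.0 - lam, lam],
                                  [lam, 1.0 - lam]])
    return F

lam = 0.05                                     # assumed shrink rate
F = process_matrix(n=2, lam=lam)
m = np.array([1.0, 0.1, -0.01, 5.0, 20.0])     # [a0, a1, a2, x_start, x_end]
m_next = F @ m                                 # prediction step of (54), w_k = 0
```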

The association of measurements to state estimates is treated in Lundquist et al.


(2011b), where a generalized nearest neighbor method is applied.

The section is concluded with some results based on the information given by an
ordinary automotive acc radar, for the traffic situation shown in Figure 5a. The
ego vehicle, indicated by a green circle, is situated at the (0, 0)-position in Fig-
ure 5b, and the red dots are the radar reflections, or stationary observations, at
one time sample. The smaller magenta colored dots are former radar reflections,
obtained at earlier time samples. Figure 5c shows the estimated points and lines
for the same scenario using the kf eiv method presented in this contribution.
The mean values of the states are indicated by solid black lines or blue points.
Furthermore, the state variance, by means of the 90% confidence interval, is illus-
trated by gray lines or cyan colored ellipses, respectively. The estimate of the lane
markings (18), illustrated by the gray dashed lines and derived according to the
method presented in Lundquist and Schön (2010), is shown here as a comparison.
Tracked vehicles in front of the ego vehicle are illustrated by blue squares.

5 Occupancy Grid Map

An occupancy grid map is defined over a continuous space and it can be dis-
cretized with, e.g., a grid approximation. The size of the map can be reduced to
a certain area surrounding the ego vehicle. In order to keep a constant map size
while the vehicle is moving, some parts of the map are thrown away and new
parts are initiated. Occupancy grid mapping (ogm) is one method for tackling
the problem of generating consistent maps from noisy and uncertain data under
the assumption that the ego vehicle pose, i.e., position and heading, is known.
These maps are very popular in the robotics community, especially for all sorts of
autonomous vehicles equipped with laser scanners. Indeed several of the darpa
urban challenge vehicles used ogm’s, see Buehler et al. (2008). This is because
they are easy to acquire, and they capture important information for navigation.
The ogm was introduced by Elfes (1987) and an early introduction is given by
Moravec (1988). To the best of the author’s knowledge Borenstein and Koren
(1991) were the first to utilize ogm for collision avoidance. Examples of ogm in
automotive applications are given in Vu et al. (2007). A solid treatment can be
found in the recent textbook by Thrun et al. (2005).

This section begins with a brief introduction to occupancy grid maps, according
to Thrun et al. (2005). Using this theory and a sensor with high resolution usually
gives a nice looking bird eye’s view map. However, since a standard automotive
radar is used, producing only a few range and bearing measurements at every
time sample, some modifications are introduced as described in the following
sections.

[Figure 5: panels (a)-(c); (b) and (c) are bird's eye plots with axes x^E [m] versus y^E [m].]

Figure 5: A traffic situation is shown in Figure (a). Figure (b) shows the
radar measurements, and Figure (c) the resulting tracked points and lines.
The circle in the origin is the ego vehicle, the square is the tracked vehicle in
front and the dashed gray lines illustrate the tracked road curvature.

5.1 Background
The planar map M is defined in the world coordinate frame W and is represented
by a matrix. An occupancy grid map is partitioned into finitely many grid cells
    M = {m^(j)}_{j=1}^{N_m}.    (57)

The probability of a cell being occupied, p(m^(j)), is specified by a number ranging
from 0 for free to 1 for occupied. The notation p(m^(j)) will be used to refer to the
probability that a grid cell is occupied. A disadvantage with this design is that it
does not allow for dependencies between neighboring cells.

The occupancy grid map was originally developed to primarily be used with mea-
surements from a laser scanner. A laser is often mounted on a rotating shaft and
it generates a range measurement for every angular step of the mechanical shaft,
i.e. a bearing angle. This means that the continuously rotating shaft produces
many range and bearing measurements during every cycle. The ogm algorithms
transform the polar coordinates of the measurements into Cartesian coordinates
in a fixed world or map frame. After completing one mechanical measurement
cycle the sensor provides the measurements for use.

The algorithm loops through all cells and increases the occupancy probability
p(m^(j)) if the cell was occupied according to the measurement z_k^(i). Otherwise the
occupancy value either remains unchanged or is decreased, depending on if the
range to the cell is greater or less than the measured range. The latter implies
that the laser beam did pass this cell without observing any obstacles. If the
measured range is too large or the cell size is too small, it might be necessary
to consider the angular spread of the laser beam and increase or decrease the
occupancy probability of several cells with respect to the beam width.

The map is assumed to be static, i.e., it does not change during sensing. In this
section the map estimation problem is solved with a binary Bayes filter, of which
ogm is one example; the state of a cell can either be free, m^(j) = 0, or occupied,
m^(j) = 1. A standard technique to avoid numerical instabilities and truncation
problems for probabilities close to 0 and 1 is to use
the log odds representation of occupancy
    ℓ_{j,k} = log [ p(m^(j) | Z_{1:k}, x_{E,1:k}) / (1 − p(m^(j) | Z_{1:k}, x_{E,1:k})) ],    (58)
or put in words, the odds of a state is defined as the ratio of the probability
of this event, p(m^(j) | Z_{1:k}, x_{E,1:k}), divided by the probability of its complement,
1 − p(m^(j) | Z_{1:k}, x_{E,1:k}). The probabilities are easily recovered using

    p(m^(j) | Z_{1:k}, x_{E,1:k}) = 1 − 1 / (1 + exp ℓ_{j,k}).    (59)
Note that the filter uses the inverse measurement model p(m|z, x). Using Bayes’

rule it can be shown that the binary Bayes filter in log odds form is

    ℓ_{j,k} = ℓ_{j,k−1} + log [ p(m^(j) | Z_k, x_{E,k}) / (1 − p(m^(j) | Z_k, x_{E,k})) ] − log [ p(m^(j)) / (1 − p(m^(j))) ],    (60)

where p(m^(j)) represents the prior probability. The log odds ratio of the prior
before processing any measurements is defined as

    ℓ_{j,0} = log [ p(m^(j)) / (1 − p(m^(j))) ].    (61)

Typically p(m^(j)) = 0.5 is assumed, since before having measurements nothing is
known about the surrounding environment. This value yields ℓ_{j,0} = 0.
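The log odds recursion (58)-(61) is compact enough to sketch in a few lines; the inverse-model probabilities below are assumed values.

```python
import math

def logodds(p):
    """Log odds of a probability, cf. (58)."""
    return math.log(p / (1.0 - p))

def probability(l):
    """Recover the probability from log odds, eq. (59)."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

def update(l_prev, p_inverse_model, p_prior=0.5):
    """One step of the binary Bayes filter in log odds form, eq. (60)."""
    return l_prev + logodds(p_inverse_model) - logodds(p_prior)

l = logodds(0.5)                  # uninformative prior gives l_{j,0} = 0
for p in (0.7, 0.7, 0.3):         # inverse sensor model outputs for one cell
    l = update(l, p)
p_occ = probability(l)            # two "occupied" hits outweigh one "free"
```

Working in log odds turns the repeated Bayes updates into simple additions and keeps the representation numerically stable near 0 and 1.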

5.2 OGM with Radar Measurements


The radar system provides range and bearing measurements for observed targets
at every measurement cycle. The main difference to a laser is that there is not one
range measurement for every angular position of the moving sensor. The num-
ber of observations depends on the environment. In general there are far fewer
observations compared to a laser sensor. There is also a limit (usually around
32–64) on the number of objects transmitted by the radar equipment on the can
bus, and a proprietary selection is performed in the radar. Moving objects, which
are distinguished by measurements of the Doppler shift, are prioritized and more
likely to be transmitted than stationary objects. Furthermore, it is assumed that
the opening angle of the radar beam is small compared to the grid cell size. With
these assumptions the ogm algorithm is changed to loop through the measurements instead
of the cells, in order to decrease the computational load. A radar’s angular uncer-
tainty is usually larger than its range uncertainty. When transforming the polar
coordinates of the radar measurements into the Cartesian coordinates of the map,
the uncertainties can either be transformed in the same manner or it can simply
be assumed that the uncertainty increases with the range.
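Looping over the radar detections rather than over all grid cells can be sketched as below. The log odds increments l_occ and l_free, the grid size, and the sampling along the beam are assumed tuning choices, not values from the paper.

```python
import numpy as np

def ogm_radar_update(lmap, detections, ego, cell=1.0, l_occ=0.85, l_free=-0.4):
    """Update a log odds grid from a batch of (range, bearing) detections:
    cells along the beam are marked free (the beam passed through them),
    and the reflecting cell is marked occupied."""
    x0, y0 = ego
    for r, psi in detections:
        for d in np.arange(0.0, r, cell):       # free space along the beam
            i = int((x0 + d * np.cos(psi)) / cell)
            j = int((y0 + d * np.sin(psi)) / cell)
            lmap[i, j] += l_free
        i = int((x0 + r * np.cos(psi)) / cell)  # the reflecting cell
        j = int((y0 + r * np.sin(psi)) / cell)
        lmap[i, j] += l_occ
    return lmap

grid = np.zeros((40, 40))                       # log odds 0 <=> p = 0.5 prior
grid = ogm_radar_update(grid, detections=[(10.0, 0.0)], ego=(5.0, 20.0))
```

Since only cells touched by a detection are visited, the cost per cycle scales with the handful of radar observations rather than the full grid.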

5.3 Experiments and Results


Figure 6a shows an ogm example of a highway situation. The ego vehicle’s cam-
era view is shown in Figure 6c. The size of the ogm is 401 × 401 m, with the ego
vehicle in the middle cell. Each cell represents a 1×1 m square. The gray-level
in the occupancy map indicates the probability of occupancy p(M|Z1:k , xE,1:k ), the
darker the grid cell, the more likely it is to be occupied. The map shows all major
structural elements as they are visible at the height of the radar. This is a problem
if the road is undulated and especially if the radar observes obstacles over and
behind the guardrail. In this case the occupancy probability of a cell might be
decreased even though it was previously believed to be occupied, since the cell
is between the ego vehicle and the new observation. The impact of this problem
can be reduced by tuning the filter well.
It is clearly visible in Figure 6a that the left border is sharper than the right. The
only obstacle on the left side is the guardrail, which gives rise to the sharp edge,

[Figure 6: panels (a) and (b) are grid maps in cell coordinates; (c) is the camera view.]

Figure 6: The filled circle at position (201, 201) in the occupancy grid map
in Figure (a) is the ego vehicle, the + are the radar observations obtained
at this time sample, the black squares are the two leading vehicles that are
currently tracked. Figure (b) shows a zoom of the ogm in front of the ego
vehicle. The gray-level in the figure indicates the probability of occupancy,
the darker the grid cell, the more likely it is to be occupied. The shape of the
road is given as solid and dashed lines, calculated as described in Section 4.
The camera view from the ego vehicle is shown in Figure (c), the concrete
walls, the guardrail and the pillar of the bridge are interesting landmarks.
Furthermore, the two tracked leading vehicles are clearly visible in the right
lane.

whereas on the right side there are several obstacles behind the guardrail, which
also cause reflections, e.g., noise barrier and vegetation. A closer look in Figure 6b
reveals that there is no black line of occupied cells representing the guardrail as
expected. Instead there is a region with mixed probability of occupancy, and after
about 5 m the gray region of initial-valued cells tells us that nothing is known
about these cells.

6 Intensity Based Map


The bin-occupancy filter, which is described in Erdinc et al. (2009), aims at esti-
mating the probability of a target being in a given point. The approach is derived
via a discretized state-space model of the surveillance region, where each grid
cell (denoted bin in this approach) may or may not contain a target. One of the
important assumptions is that the bins are sufficiently small so that each bin is
occupied by at most one target. In the limiting case, when the volume of the
bins |ν| tends to zero, it is possible to define the bin-occupancy density
    D_{k|k} ≜ lim_{|ν|→0} Pr(m_k^(j) = 1 | Z_{1:k}) / |ν|,    (62)

where Pr(m_k^(j) = 1 | Z_{1:k}) is the probability that bin j is occupied by one target.
The continuous form of the bin-occupancy filter prediction and update equations
are the same as the probability hypothesis density (phd) filter equations (Erdinc
et al., 2009). Furthermore, the phd is the first moment density or intensity den-
sity in point process theory, see e.g., Mahler (2007), and a physical interpretation
is given in Daley and Vere-Jones (2003) as the probability that one target is lo-
cated in the infinitesimal region (x,x + dx) of the state space, divided by dx. The
continuous form of the physical bin model leads us to a continuous location based
map, which we denote the intensity based map and intend to estimate with the phd
filter.
The bin occupancy filter, or the phd filter, was developed for target tracking of
point sources; however, the aim in this section is to create a probabilistic location
based map of the surroundings of a moving vehicle. One of the main differences
between standard target tracking problems and the building of a location based
map is that many objects, such as guardrails or walls, are typically not point targets but extended targets (Mahler, 2007; Gilholm and Salmond, 2005). Further-
more, there is no interest in estimating the number of objects in the map, and
there is also no interest in keeping track of specific objects. Nevertheless, the bin-
occupancy filter attempts to answer the important question: "Is there an object
(target) at a given point?". Erdinc et al. (2009) poses the following assumptions
for the bin occupancy filter:
1. The bins are sufficiently small so that each bin is occupied by at most one
target.
2. One target gives rise to only one measurement.

3. Each target generates measurements independently.


4. False alarms are independent of target originated measurements.
5. False alarms are Poisson distributed.
Here, only point 2 needs some extra treatment if the aim of the algorithm is
mapping and not target tracking. It can be argued that the measurements of
point sources belong to extended objects and that the aim is to create a map of
those point sources. Also for mapping purposes, the assumption that there will
not be two measurements from the same point at the same time is justified. The
relation described is modeled by a likelihood function p(Zk |Mk|k ), which maps
the Cartesian map to polar point measurements.
So far in this section the discussion has been quite general and the phd or the
intensity has only been considered as a surface over the surveillance region. The
first practical algorithms to realize the phd filter prediction and measurement
update equations were based on the particle filter (pf), see e.g., Vo et al. (2003);
Sidenbladh (2003), where the phd is approximated by a large set of random sam-
ples (particles). A Gaussian mixture approximation of the phd (gm-phd) was
proposed by Vo and Ma (2006). The mixture is represented by a sum of weighted
Gaussian components and in particular the mean and covariance of those compo-
nents are propagated by the Kalman filter. In this work we represent the intensity
by a Gaussian mixture, since the parametrization and derivation is simpler than
for a particle filter based solution. The modeling of the intensity through a num-
ber of Gaussian components also makes it simpler to account for structures in the
map. We will return to these structures in the next two sections.
The gm-phd filter estimates the posterior intensity, denoted D_{k|k}, as a mixture of
Gaussian densities,

    D_{k|k} = Σ_{i=1}^{J_{k|k}} w_{k|k}^(i) N(m_{k|k}^(i), P_{k|k}^(i)),    (63)

where J_{k|k} is the number of Gaussian components and w_{k|k}^(i) is the expected number
of point sources covered by the density N(m_{k|k}^(i), P_{k|k}^(i)). In Lundquist et al. (2011a)
it is shown how the intensity is estimated with the gm-phd filter. The Gaussian
components are parametrized by a mean m_{k|k}^(i) and a covariance P_{k|k}^(i), which are
expressed in a planar Cartesian coordinate frame, according to

    m_k^(i) = [x_k^(i)  y_k^(i)]^T.    (64)

The aim of the mapping algorithm is to estimate the posterior density (3). The
considered intensity based map is continuous over the surveillance region, thus,
for the number of elements in (1) it holds that Nm → ∞. Furthermore, the inten-
sity is a summary statistic of the map according to
p(Mk |Z1:k ) ∼ p(Mk ; Dk|k ), (65)

see e.g., Mahler (2003), and the estimated intensity D_{k|k} is parametrized by

    μ_k^(i) ≜ (w_k^(i), m_k^(i), P_k^(i))    (66)

of the Gaussian sum (63). The intensity based map is a multimodal surface with
peaks around areas with many sensor reflections or point sources. It is worth
observing that the map M is described by a location based function (63), with
feature based parametrization (66).
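Given the parametrization (66), evaluating the intensity surface (63) at a point is straightforward; the component weights, means and covariances below are assumed numbers, loosely mimicking two guardrails.

```python
import numpy as np

def intensity(x, components):
    """Evaluate the Gaussian mixture intensity D(x) of (63) at a 2-D point x.
    Each component is a tuple (w, m, P) as in the parametrization (66)."""
    d = 0.0
    for w, m, P in components:
        diff = x - m
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(P)))
        d += w * norm * np.exp(-0.5 * diff @ np.linalg.solve(P, diff))
    return d

components = [(5.0, np.array([10.0, -4.0]), np.eye(2)),   # left road edge
              (4.0, np.array([10.0,  4.0]), np.eye(2))]   # right road edge
D = intensity(np.array([10.0, -4.0]), components)
# integrating D over the plane gives 9.0, the expected number of point sources
```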
Experiments were conducted with a prototype passenger car. One example of the
estimated intensity at a freeway traffic scenario is shown as a bird’s eye view in
Figure 7c. Darker regions illustrate higher concentrations of point sources, which
in this figure stem from the guardrails to the left and the right of the road. As
expected, the path of the ego vehicle, indicated by the black dots, is in between
the two regions of higher object concentration. The driver’s view is shown in
Figure 7a.
A second example is shown in Figures 7d and 7b. Here, the freeway exit is clearly
visible in the intensity map, which shows that the maps produced by the proposed
method conform well to the true road environment.
The Gaussian components are generally removed from the filter once the vehicle
has passed those parts of the map. However, to give a more comprehensive overview,
these components are stored and the resulting intensity based map is shown to-
gether with an occupancy grid map (ogm) and a flight photo in Figure 8. The top
figure is the map produced as described in this section. The ogm, described in
the previous Section 5, is based on the same data set and is used as a comparison.
The gray-level of the ogm indicates the probability of occupancy, the darker the
grid cell, the more likely it is to be occupied. As seen in the figure, the road edges
are not as distinctly modeled by the ogm. The ogm representation of the map is
not very efficient, since huge parts of the map are gray indicating that nothing is
known about these areas. An ogm matrix with often more than 10000 elements
must be updated and communicated to other safety functions of a car at each
time step. The compact representation is an advantage of the intensity based
map. Each Gaussian components is parametrized with 7 scalar values according
to (66). Since most maps are modeled with 10 − 30 components it summarizes to
around 70 − 210 scalar values, which easily can be sent on the vehicles can bus
to other safety functions. Finally, the bottom photo is a very accurate flight photo
(obtained from the Swedish mapping, cadastral and land registration authority),
which can be used as ground truth to visualize the quality of the intensity based
map.

7 Conclusion
The use of radar, laser and camera for situation awareness is gaining popularity
in automotive safety applications. In this chapter it has been shown how sensor
data perceived from the ego vehicle is used to estimate a map describing the local

(a) Camera view 1 (b) Camera view 2

(c) Intensity map 1 (d) Intensity map 2

Figure 7: The image in (a) shows the driver’s view of the intensity map in (c),
and the image in (b) is the driver’s view of the intensity map in (d). The
darker the areas in the intensity map, the higher the concentration of objects.
The driver's path is illustrated with black dots and may be used as a reference.
Note that the snapshots in (b) and (d) are obtained only some meters after the
situation shown in Figure 1.


Figure 8: The top figure shows the intensity based map obtained from radar
measurements collected on a freeway. The ogm in the middle figure serves
as a comparison of an existing algorithm. The bottom figure is a flight photo
used as ground truth, where the driven trajectory is illustrated with a dashed
line (©Lantmäteriet Gävle 2010. Medgivande I 2011/0100. Reprinted with
permission). Note that the driver's view at 295 meters is shown in Figure 7b,
and about the same position is also shown in Figure 1 and Figure 5.

surroundings of a vehicle. The map may be modeled in various different ways,


of which four major approaches have been described. In a feature based map
each element of the map specifies the properties and location of one object. This
can either be a point source in the space; or it can be an extended object such as
the position and shape of the lane or the road edges. Furthermore, in a location
based map the index of each element corresponds to a location and the value
of the map element describes the property in that position. One example is the
occupancy grid map, which is defined over a continuous space but discretized
with a grid approximation. Another example is the intensity based map, which
is a continuous approximation, describing the density of objects in the map. The
four approaches presented in this chapter have all been evaluated on real data
from both freeways and rural roads in Sweden.
The current accuracy of gps receivers is acceptable only for route guidance, where
the provided global position is sufficient. For automotive active safety systems,
the local position of the ego vehicle with respect to its surroundings is more im-
portant. The estimated maps, described in this chapter, can be used to increase
the localization accuracy of the ego vehicle. Furthermore, the maps may be used
to derive a collision free trajectory for the vehicle.

Bibliography
Å. Björck. Numerical methods for least squares problems. SIAM, Philadelphia,
PA, USA, 1996.
S. S. Blackman and R. Popoli. Design and Analysis of Modern Tracking Systems.
Artech House, Norwood, MA, USA, 1999.
J. Borenstein and Y. Koren. The vector field histogram-fast obstacle avoidance for
mobile robots. IEEE Transactions on Robotics and Automation, 7(3):278–288,
June 1991.
M. Buehler, K. Iagnemma, and S. Singh, editors. Special Issue on the 2007 DARPA
Urban Challenge, Part I-III, volume 25 (8–10). Journal of Field Robotics, 2008.
D. Caveney. Cooperative vehicular safety applications. IEEE Control Systems
Magazine, 30(4):38–53, August 2010.
J. Civera, A. J. Davison, and J. Montiel. Inverse depth parametrization for monoc-
ular SLAM. IEEE Transactions on Robotics, 24(5):932–945, October 2008.
D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes.
Vol. 1 , Elementary theory and method. Springer, New York, NY, USA, 2 edition,
2003.
A. J. Davison, I. Reid, N. Molton, and O. Stasse. MonoSLAM: Real-time single
camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):1052–1067, June 2007.
E. D. Dickmanns. Dynamic computer vision for mobile robot control. In Proceed-
ings of the International Symposium on Industrial Robots, Sydney, Australia,
November 1988.
E. D. Dickmanns. Dynamic Vision for Perception and Control of Motion.
Springer, London, UK, 2007.
E. D. Dickmanns and B. D. Mysliwetz. Recursive 3-D road and relative ego-state
recognition. IEEE Transactions on pattern analysis and machine intelligence,
14(2):199–213, February 1992.
R. Diversi, R. Guidorzi, and U. Soverini. Kalman filtering in extended noise envi-
ronments. IEEE Transactions on Automatic Control, 50(9):1396–1402, Septem-
ber 2005.
R. O. Duda and P. E. Hart. Use of the Hough transformation to detect lines and
curves in pictures. Communications of the ACM, 15(1):11–15, January 1972.
A. Eidehall, J. Pohl, F. Gustafsson, and J. Ekmark. Toward autonomous collision
avoidance by steering. IEEE Transactions on Intelligent Transportation Sys-
tems, 8(1):84–94, March 2007.
A. Elfes. Sonar-based real-world mapping and navigation. IEEE Journal of
Robotics and Automation, 3(3):249–265, June 1987.

O. Erdinc, P. Willett, and Y. Bar-Shalom. The bin-occupancy filter and its con-
nection to the PHD filters. IEEE Transactions on Signal Processing, 57(11):
4232–4246, November 2009.
K. Gilholm and D. Salmond. Spatial distribution model for tracking extended
objects. IEE Proceedings of Radar, Sonar and Navigation, 152(5):364–371, Oc-
tober 2005.
P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright. Inertia-controlling
methods for general quadratic programming. SIAM Review, 33(1):1–36, March
1991.
D. Goldfarb and A. Idnani. A numerically stable dual method for solving strictly
convex quadratic programs. Mathematical Programming, 27(1):1–33, 1983.
N. J. Gordon, D. J. Salmond, and A. F. M. Smith. Novel approach to
nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings on Radar
and Signal Processing, 140(2):107–113, April 1993.
A. Guiducci. Parametric model of the perspective projection of a road with ap-
plications to lane keeping and 3D road reconstruction. Computer Vision and
Image Understanding, 73(3):414–427, March 1999.
A. Guiducci. Camera calibration for road applications. Computer Vision and
Image Understanding, 79(2):250–266, August 2000.
C. Harris and M. Stephens. A combined corner and edge detector. In Proceedings
of the Alvey Vision Conference, pages 147–151, Manchester, UK, September
1988.
P. V. C. Hough. A method and means for recognizing complex patterns. U.S.
Patent No. 3069654, December 1962.
C. R. Jung and C. R. Kelber. Lane following and lane departure using a linear-
parabolic model. Image and Vision Computing, 23(13):1192–1202, November
2005.
Z. W. Kim. Robust lane detection and tracking in challenging scenarios. IEEE
Transactions on Intelligent Transportation Systems, 9(1):16–26, March 2008.
J. W. Lee. A machine vision system for lane-departure detection. Computer vision
and image understanding, 86(1):52–78, April 2002.
D. G. Lowe. Distinctive image features from scale-invariant keypoints. Interna-
tional Journal of Computer Vision, 60(2):91–110, November 2004.
C. Lundquist and T. B. Schön. Joint ego-motion and road geometry estimation.
Information Fusion, 2010. DOI: 10.1016/j.inffus.2010.06.007.
C. Lundquist, L. Hammarstrand, and F. Gustafsson. Road intensity based map-
ping using radar measurements with a probability hypothesis density filter.
IEEE Transactions on Signal Processing, 59(4):1397–1408, April 2011a.
C. Lundquist, U. Orguner, and F. Gustafsson. Extended target tracking using
polynomials with applications to road-map estimation. IEEE Transactions on
Signal Processing, 59(1):15–26, January 2011b.
C. Lundquist, T. B. Schön, and F. Gustafsson. Situational awareness and road
prediction for trajectory control applications. In A. Eskandarian, editor, Hand-
book of Intelligent Vehicles, chapter 24. Springer, November 2011c.
R. P. S. Mahler. Multitarget Bayes filtering via first-order multitarget moments.
IEEE Transactions on Aerospace and Electronic Systems, 39(4):1152–1178, Oc-
tober 2003.
R. P. S. Mahler. Statistical Multisource-Multitarget Information Fusion. Artech
House, Boston, MA, USA, 2007.
J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide baseline stereo from
maximally stable extremal regions. Image and Vision Computing, 22(10):761–
767, September 2004.
J. C. McCall and M. M. Trivedi. Video-based lane estimation and tracking for
driver assistance: Survey, system, and evaluation. IEEE Transactions on Intel-
ligent Transportation Systems, 7(1):20–37, March 2006.
H. Moravec. Sensor fusion in certainty grids for mobile robots. AI Magazine, 9
(2):61–74, 1988.
D. Nistér, O. Naroditsky, and J. Bergen. Visual odometry for ground vehicle ap-
plications. Journal of Field Robotics, 23(1):3–20, January 2006.
M. J. D. Powell. On the quadratic programming algorithm of Goldfarb and Idnani.
Mathematical Programming Study, 25(1):46–61, October 1985.
H. Rohling and M.-M. Meinecke. Waveform design principles for automotive
radar systems. In Proceedings of the CIE International Conference on Radar,
pages 1–4, Beijing, China, October 2001.
H. Rohling and C. Möller. Radar waveform for automotive radar systems and
applications. In IEEE Radar Conference, pages 1–4, Rome, Italy, May 2008.
H. Sidenbladh. Multi-target particle filtering for the probability hypothesis den-
sity. In Proceedings of the International Conference on Information Fusion,
volume 2, pages 800–806, Cairns, Australia, July 2003.
T. Söderström. Survey paper: Errors-in-variables methods in system identifica-
tion. Automatica, 43(6):939–958, June 2007.
R. Szeliski. Computer Vision: Algorithms and Applications. Springer, New York,
NY, USA, 2010.
S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. The MIT Press, Cam-
bridge, MA, USA, 2005.
B.-N. Vo and W.-K. Ma. The Gaussian mixture probability hypothesis density
filter. IEEE Transactions on Signal Processing, 54(11):4091–4104, November
2006.
B.-N. Vo, S. Singh, and A. Doucet. Random finite sets and sequential Monte Carlo
methods in multi-target tracking. In Proceedings of the International Radar
Conference, pages 486–491, Adelaide, Australia, September 2003.
T. D. Vu, O. Aycard, and N. Appenrodt. Online localization and mapping with
moving object tracking in dynamic outdoor environments. In Proceedings
of the IEEE Intelligent Vehicles Symposium, pages 190–195, Istanbul, Turkey,
June 2007.
Y. Wang, L. Bai, and M. Fairhurst. Robust road modeling and tracking using
condensation. IEEE Transactions on Intelligent Transportation Systems, 9(4):
570–579, December 2008.
A. Waxman, J. LeMoigne, L. Davis, B. Srinivasan, T. Kushner, E. Liang, and T. Sid-
dalingaiah. A visual navigation system for autonomous land vehicles. IEEE
Journal of Robotics and Automation, 3(2):124–141, April 1987.
Y. Zhou, R. Xu, X. Hu, and Q. Ye. A robust lane detection and tracking method
based on computer vision. Measurement Science and Technology, 17(4):736–
745, April 2006.
Paper B
Joint Ego-Motion and Road Geometry
Estimation

Authors: Christian Lundquist and Thomas B. Schön

Edited version of the paper:


C. Lundquist and T. B. Schön. Joint ego-motion and road geometry
estimation. Information Fusion, 12:253–263, October 2011.

The paper presents data that was previously published in:


C. Lundquist and T. B. Schön. Road geometry estimation and vehicle
tracking using a single track model. In Proceedings of the IEEE In-
telligent Vehicles Symposium, pages 144–149, Eindhoven, The Nether-
lands, June 2008.

Preliminary version:
Technical Report LiTH-ISY-R-2844, Dept. of Electrical Engineering, Lin-
köping University, SE-581 83 Linköping, Sweden.
Joint Ego-Motion and Road Geometry
Estimation

Christian Lundquist and Thomas B. Schön

Dept. of Electrical Engineering,


Linköping University,
SE–581 83 Linköping, Sweden
[email protected], [email protected]

Abstract
We provide a sensor fusion framework for solving the problem of joint
ego-motion and road geometry estimation. More specifically we em-
ploy a sensor fusion framework to make systematic use of the mea-
surements from a forward looking radar and camera, steering wheel
angle sensor, wheel speed sensors and inertial sensors to compute
good estimates of the road geometry and the motion of the ego vehicle
on this road. In order to solve this problem we derive dynamical mod-
els for the ego vehicle, the road and the leading vehicles. The main
difference to existing approaches is that we make use of a new dy-
namic model for the road. An extended Kalman filter is used to fuse
data and to filter measurements from the camera in order to improve
the road geometry estimate. The proposed solution has been tested
and compared to existing algorithms for this problem, using measure-
ments from authentic traffic environments on public roads in Sweden.
The results clearly indicate that the proposed method provides better
estimates.

1 Introduction
We are in this paper concerned with the problem of integrated ego-motion and
road geometry estimation using information from several sensors. The sensors
used to this end are a forward looking camera and radar, together with inertial
sensors, a steering wheel sensor and wheel speed sensors. The solution is ob-
tained by casting the problem within an existing sensor fusion framework. An im-
portant part of this solution is the nonlinear state-space model. The state-space
model contains the dynamics of the ego vehicle, the road geometry, the leading
vehicles and the measurement relations. It can then be written in the form
xk+1 = f (xk , uk ) + wk , (1a)
yk = h(xk , uk ) + ek , (1b)


where xk ∈ Rnx denotes the state vector, uk ∈ Rnu denotes the input signals,
yk ∈ Rny denotes the measurements, wk ∈ Rnw and ek ∈ Rne denote the process
and measurement noise, respectively. The process model equations, describing
the evolution of the state over time are denoted by f : Rnx × Rnu → Rnx . Fur-
thermore, the measurement model describing how the measurements from the
vision system, the radar and the inertial sensors relate to the state is given by
h : Rnx × Rnu → Rny . When we have a model in the form (1) we have transformed
the problem into a standard nonlinear state estimation problem, where the task is
to compute estimates of the state based on the information in the measurements.
There are many different ways of solving this problem and we will in this work
make use of the popular Extended Kalman Filter (ekf), described in e.g., Smith
et al. (1962); Schmidt (1966); Anderson and Moore (1979).

The problem studied in this paper is by no means new; it is the proposed solution
that is new. For some early, still very interesting and relevant work on this prob-
lem we refer to Dickmanns and Zapp (1986); Dickmanns and Mysliwetz (1992).
From the camera we can produce estimates of the road geometry based on mea-
surements of the lane markings. This problem is by now rather mature, see e.g.,
the survey McCall and Trivedi (2006) and the recent book Dickmanns (2007) for
solid accounts. The next step in the development was to make use of the radar
information as well. Using radar measurements we can track the leading vehicles,
that is, we can estimate the position and velocity of the leading vehicles. Under
the assumption that the leading vehicles drive on the same road as the ego vehicle,
their positions contain valuable information about the road geometry. This idea
was introduced by Zomotor and Franke (1997); Gern et al. (2000, 2001) and has
been further refined in Eidehall et al. (2007); Eidehall (2007). The combination of
radar and vision as well as the advantages and disadvantages of these sensors are
discussed in Hofmann et al. (2000, 2003). Furthermore, the ego vehicle model
in Hofmann et al. (2000, 2003) is comparable with the one used in the present
work. The four wheel speeds are used to estimate the path of the ego vehicle,
which unlike the present work is separated from the leading vehicles dynamics
and the lane estimate.

The leading vehicles are used to improve the road geometry in the present work;
however the opposite is also possible as the recent work Schubert et al. (2009);
Weigel et al. (2009) shows, where the vehicle detection algorithm benefits from
the lane information. Vision and radar are used in Schubert et al. (2009), whereas
vision and lidar are used in Weigel et al. (2009). In Muller et al. (2009) lidar is
used to detect the leading vehicle, and the movement of the leading vehicle is
then used to estimate the lane and the driven path, which in turn is used to
autonomously follow this vehicle. This works well even for curved and narrow
roads. Unmarked and winding rural roads may be hard to detect, recent research
in this area is presented in Loose et al. (2009), where stereo vision and image
radar are used within a marginalized particle filter to obtain 3D information and
improve the task of lane recognition. Information obtained from road-side struc-
tures may be used to improve the estimate of the lane shape and the position
of the vehicle within the lane, as shown in Watanabe et al. (2009), where only a
monocular camera is used. Furthermore, at construction sites it is hard to identify
the temporary lanes, a method for this using color images and beacon extraction
is presented in Gumpp et al. (2009). In Wedel et al. (2008) the authors present an
algorithm for free space estimation, capable of handling non-planar roads, using
a stereo camera system.

Lane tracking has also been tackled using radar sensors, see e.g., Kaliyaperu-
mal et al. (2001); Lakshmanan et al. (1997); Nikolova and Hero (2000); Ma et al.
(2000) and laser sensors, see e.g. Wijesoma et al. (2004). There have been sev-
eral approaches making use of reflections from the road boundary, such as crash
barriers and reflection posts, to compute information about the free space, see
e.g. Kirchner and Heinrich (1998); Kirchner and Ameling (2000); Sparbert et al.
(2001) for some examples using laser scanners and Lundquist et al. (2009), where
radar is used.

To summarize, our approach is able to improve the performance by making use
of a dynamic model of the ego vehicle and a new dynamic model of the road at
the same time as we make use of the motion of the leading vehicles. The new
road process model describes the curvature of the ego vehicle’s currently driven
path. This should be compared with existing road models, used in most of the
publications mentioned above, where the road’s curvature is modeled according
to road construction standards. The advantage of our new road model is that
we are able to directly include information of the ego vehicles motion into the
estimate of the road geometry.

In the subsequent section we provide a brief introduction to the sensor fusion
framework we work with and explain how the present problem fits into this
framework. An essential part of this framework is the dynamical model (1a),
which is derived in Section 3. Furthermore, the corresponding measurement
model (1b) is introduced in Section 4. In Section 5 the proposed solution is
evaluated using measurements from real and relevant traffic environments from
public roads in Sweden. Finally, the conclusions are given in Section 6. For con-
venience we provide a list of the relevant notation in the appendix.

2 Sensor Fusion
In order to successfully solve the problem under study in this work it is impera-
tive to have a good understanding of sensor fusion. Sensor fusion is defined as
the process of using information from several different sensors to compute an
estimate of the state of a dynamical system.

We need a dynamic model and a measurement model in the form (1) in order to
be able to produce an estimate of the state. These models are derived in detail
in Section 3 and Section 4. However, for the sake of the present discussion we
will briefly discuss the model here. The state vector xk consists of three parts
according to

xk = [xE,k , xR,k , xT,k ]T , (2)

where xE,k denotes the state of the ego vehicle, xR,k denotes the state of the road
and xT,k denotes the state of one leading vehicle (also referred to as a target). In
deriving the evolution of these states over time we will end up with continuous-
time differential equations in the form
ẋ(t) = g(x(t), u(t)). (3)
However, according to (1) we required the model to be in discrete time. The
simplest way of obtaining a difference equation from (3) is to make use of the
standard forward Euler method, which approximates (3) at time t according to
x(t + T ) = x(t) + T g(x(t), u(t)) ≜ f (xt , ut ), (4)
where T denotes the sample time. The measurement model is of course already
in discrete time.
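As a concrete illustration (our own sketch, not code from the paper), the forward Euler step (4) can be written as a function that turns a continuous-time model g into a discrete-time model f:

```python
def euler_discretize(g, T):
    """Forward Euler step (4): return f(x, u) = x + T * g(x, u),
    a discrete-time approximation of the continuous-time model (3)."""
    def f(x, u):
        return x + T * g(x, u)
    return f

# Toy example: a pure integrator x' = u, sampled at T = 0.01 s.
g = lambda x, u: u
f = euler_discretize(g, 0.01)
x = 0.0
for _ in range(100):  # simulate one second with constant input u = 1
    x = f(x, 1.0)
# x is now approximately 1.0
```

The same construction applies unchanged to the vector-valued models derived in Section 3, with x held as an array.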

The estimate of the state is computed by a state estimator of some kind. This state
estimator makes use of the measurements from the different sensors to produce
an estimate of the so called filtering probability density function (pdf) p(xk |y1:k ),
where y1:k ≜ {y1 , . . . , yk } denotes all the measurements from time 1 to time k. This
density function contains all there is to know about the state xk , given the in-
formation in the measurements y1:k . Once an approximation of p(xk |y1:k ) is avail-
able it can be used to form many different estimates and the most commonly used
estimate is the conditional mean estimate
x̂k|k = E(xk |y1:k ). (5)
This estimate will be used in the present work as well.

Since we are looking for an algorithm capable of working in real-time it is important to understand how the filtering pdf evolves over time. Now, it is well-known
(see e.g., Jazwinski (1970)) that a sequential solution can be obtained according
to
p(xk |y1:k ) = p(yk |xk ) p(xk |y1:k−1 ) / ∫ p(yk |xk ) p(xk |y1:k−1 ) dxk , (6a)
p(xk+1 |y1:k ) = ∫ p(xk+1 |xk ) p(xk |y1:k ) dxk . (6b)

Here, it is also worth mentioning that since we have assumed additive noise in
the model (1), we have explicit expressions for p(xk+1 |xk ) and p(yk |xk ) according
to
p(xk+1 |xk ) = pwk (xk+1 − f (xk , uk )), (7a)
p(yk |xk ) = pek (yk − h(xk , uk )), (7b)
where pwk ( · ) and pek ( · ) denote the pdf’s for the process and the measurement
noise, respectively.

In the special case, where the equations in the model (1) are linear and the noise is
Gaussian, the multidimensional integrals in (6) allow for an analytical solution,
the Kalman filter Kalman (1960). For a derivation of this kind, see e.g., Schön
(2006). However, the problem is that for the general nonlinear, non-Gaussian
case that we are facing, there does not exist any closed form solution to (6). Hence,
we are forced to make approximations of some kind. The most commonly used
approximation is provided by the extended Kalman filter (ekf). The idea underly-
ing the ekf is very simple: approximate the nonlinear model with a linear model
subject to Gaussian noise and apply the Kalman filter to this approximation. For
a solid account of the ekf we refer to Anderson and Moore (1979); Kailath et al.
(2000). Lately the so called particle filter, introduced in Gordon et al. (1993), has
become increasingly popular. This filter often provides a better solution, but it
typically requires much more computational effort. For the present application
the ekf provides an approximation that is good enough. For a more thorough
account of the framework for nonlinear estimation briefly introduced above we
refer to Schön (2006).
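To make the ekf approximation of (6) concrete, the following minimal sketch (an illustration of the standard algorithm, not the implementation used in the paper) performs one time update and one measurement update for a model in the form (1), given Jacobian functions F and H:

```python
import numpy as np

def ekf_step(x, P, u, y, f, h, F, H, Q, R):
    """One ekf cycle: a time update approximating (6b) followed by a
    measurement update approximating (6a), with the model (1)
    linearized around the current estimate.
    F(x, u) and H(x, u) return the Jacobians of f and h."""
    # Time update: propagate mean and covariance through f.
    x_pred = f(x, u)
    Fk = F(x, u)
    P_pred = Fk @ P @ Fk.T + Q
    # Measurement update: correct with the innovation y - h(x_pred).
    Hk = H(x_pred, u)
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred, u))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new

# Toy linear example: constant-velocity state, position measured.
T = 0.1
A = np.array([[1.0, T], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
f = lambda x, u: A @ x
h = lambda x, u: C @ x
F = lambda x, u: A
H = lambda x, u: C
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, None, np.array([1.0]), f, h, F, H,
                0.01 * np.eye(2), 0.1 * np.eye(1))
```

In the linear-Gaussian toy model above the step reduces exactly to the Kalman filter; for the nonlinear models of Section 3 the Jacobians vary with the current estimate.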

Before we end our brief overview on the sensor fusion problem it is important
to stress that a successful sensor fusion framework will, besides the modeling
and filtering parts mentioned above, rely on a certain surrounding infrastruc-
ture. This surrounding infrastructure deals with issues such as time synchro-
nization between the various sensors, calibration, sensor-near signal processing,
track handling, etc. This part of the framework should not be overlooked and
a solid treatment of the provided infrastructure is accounted for in Bengtsson
and Danielsson (2008) for the problem at hand. Despite this it is worth mention-
ing that the leading vehicles are incorporated into the estimation problem using
rather standard techniques from target tracking, such as nearest neighbor data
association and track counters in order to decide when to stop tracking a certain
vehicle, etc. These are all important parts of the system we have implemented,
but it falls outside the scope of this paper and since the techniques are rather
standard we simply refer to the general treatments given in e.g., Blackman and
Popoli (1999); Bar-Shalom et al. (2001).

3 Dynamic Models
As mentioned in the introduction our sensor fusion framework needs a state-
space model describing the dynamics of the ego vehicle, the road and the leading
vehicles. In this section we will derive the differential equations describing the
motion of the ego vehicle (Section 3.2), the road (Section 3.3) and the leading vehi-
cles (Section 3.4), also referred to as targets. Finally, in Section 3.5 we summarize
these equations and form the process model of the state-space model. However,
before we embark on deriving these equations we introduce the overall geometry
and some necessary notation in Section 3.1.
Figure 1: Coordinate frames describing the ego vehicle, with center of grav-
ity in OE and the radar and camera sensors mounted in OV . One leading
vehicle is positioned in OTn .

3.1 Geometry and Notation


The coordinate frames describing the ego vehicle and one leading vehicle are
defined in Figure 1. The inertial world reference frame is denoted by W and
its origin is OW . The ego vehicle’s coordinate frame E is located in the center
of gravity (CoG). Furthermore, Vn is associated with the observed leading vehicle
n, with OV at the vision and radar sensor of the ego vehicle. Finally, Tn is also
associated with the observed and tracked leading vehicle n, but its origin OTn
is located at the leading vehicle. In this work we will use the planar coordinate
transformation matrix
R^WE = [cos ψE , − sin ψE ; sin ψE , cos ψE ] (8)
to transform a vector, represented in E, into a vector, represented in W , where
the yaw angle of the ego vehicle ψE is the angle of rotation from W to E. The
geometric displacement vector d^W_EW is the direct straight line from OW to OE
represented with respect to the frame W . Velocities are defined as the movement
of a frame E relative to the inertial reference frame W , but typically resolved in
the frame E, for example vxE is the velocity of the E frame in its x-direction. The
same convention holds for the acceleration axE . In order to simplify the notation
we leave out E when referring to the ego vehicle’s velocity and acceleration. This
notation will be used when referring to the various coordinate frames. However,
certain frequently used quantities will be renamed, in the interest of readability.
The measurements are denoted using superscript m. Furthermore, the notation
used for the rigid body dynamics is in accordance with Hahn (2002).
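As a small illustration of the convention in (8) (our own sketch), rotating an E-frame vector into the world frame W amounts to a multiplication with R^WE:

```python
import math

def rot_WE(psi_E):
    """Planar rotation matrix R^WE of (8): transforms a vector
    represented in the ego frame E into the world frame W."""
    c, s = math.cos(psi_E), math.sin(psi_E)
    return [[c, -s], [s, c]]

def to_world(psi_E, v_E):
    """Rotate the E-frame vector v_E into the world frame W."""
    R = rot_WE(psi_E)
    return [R[0][0] * v_E[0] + R[0][1] * v_E[1],
            R[1][0] * v_E[0] + R[1][1] * v_E[1]]

# A vector along the vehicle's longitudinal axis, with the vehicle
# heading rotated 90 degrees: it ends up along the world y-axis.
v_W = to_world(math.pi / 2, [1.0, 0.0])
# v_W is approximately [0.0, 1.0]
```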

3.2 Ego Vehicle


We will only be concerned with the ego vehicle motion during normal driving
situations and not at the adhesion limit. This implies that the single track model
is sufficient for the present purposes. This model is also referred to as the bicy-
cle model, see e.g., Mitschke and Wallentowitz (2004); Wong (2001) for a solid
treatment. The geometry of the single track model with slip angles is shown in
Figure 2. It is here worth to point out that the velocity vector of the ego vehicle
is typically not in the same direction as the longitudinal axis of the ego vehicle.
Instead the vehicle will move along a path at an angle β with the longitudinal
direction of the vehicle. Hence, the angle β is defined as,
tan β = vy /vx , (9)
where vx and vy are the ego vehicle’s longitudinal and lateral velocity components,
respectively. This angle β is referred to as the float angle Robert Bosch GmbH
(2004) or the vehicle body side slip angle Kiencke and Nielsen (2005).
The slip angle αi is defined as the angle between the central axis of the wheel
and the path along which the wheel moves. The phenomenon of side slip is
mainly due to the lateral elasticity of the tire. For reasonably small slip angles,
at maximum 3 deg, it is a good approximation to assume that the lateral friction
force of the tire Fi is proportional to the slip angle,
Fi = Cαi αi . (10)
The parameter Cαi is called cornering stiffness and describes the cornering be-
havior of the tire. The load transfer to the front axle when braking or to the outer
wheels when driving trough a curve influences the parameter value. A model
considering these influences is given in Lundquist and Schön (2009).
Following this brief introduction to the ego vehicle geometry, we are now ready
to give an expression describing the evolution of yaw angle ψE and the float angle
β over time

ψ̈E = β (−Cαf lf cos δf + Cαr lr )/Izz − ψ̇E (Cαf lf² cos δf + Cαr lr² )/(Izz vx ) + Cαf lf tan δf /Izz , (11a)
β̇ = −β (Cαf cos δf + Cαr + v̇x m)/(m vx ) − ψ̇E (1 + (Cαf lf cos δf − Cαr lr )/(vx² m)) + Cαf sin δf /(m vx ),
(11b)
where m denotes the mass of the vehicle and Izz denotes the moment of inertia
of the vehicle about its vertical axis in the center of gravity. These single track
Figure 2: In the single track model the wheels on each axle are modeled as
single units. The velocity vector v, with the float angle β to the longitudinal
axis of the vehicle, is attached at the center of gravity. Furthermore, the
wheel slip angles are referred to as αf and αr . The front wheel angle is
denoted by δf and the current radius is denoted by ρ.

model equations are well-known in the literature, see e.g., Kiencke and Nielsen
(2005).
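The right-hand sides of (11a) and (11b) are straightforward to evaluate numerically; the sketch below (with made-up vehicle parameters, purely for illustration) computes ψ̈E and β̇:

```python
import math

def single_track(psiE_dot, beta, delta_f, vx, vx_dot, p):
    """Yaw acceleration and float-angle derivative of the single
    track model, equations (11a)-(11b). The dict p holds the
    cornering stiffnesses Caf, Car, the axle distances lf, lr, the
    mass m and the yaw inertia Izz (the values used below are
    invented for illustration, not taken from the paper)."""
    Caf, Car = p["Caf"], p["Car"]
    lf, lr, m, Izz = p["lf"], p["lr"], p["m"], p["Izz"]
    cd = math.cos(delta_f)
    psiE_ddot = (beta * (-Caf * lf * cd + Car * lr) / Izz
                 - psiE_dot * (Caf * lf**2 * cd + Car * lr**2) / (Izz * vx)
                 + Caf * lf * math.tan(delta_f) / Izz)              # (11a)
    beta_dot = (-beta * (Caf * cd + Car + vx_dot * m) / (m * vx)
                - psiE_dot * (1 + (Caf * lf * cd - Car * lr) / (vx**2 * m))
                + Caf * math.sin(delta_f) / (m * vx))               # (11b)
    return psiE_ddot, beta_dot

# Illustrative, made-up parameters for a mid-size passenger car.
p = dict(Caf=1.0e5, Car=1.2e5, lf=1.3, lr=1.5, m=1500.0, Izz=2500.0)
dd, bd = single_track(psiE_dot=0.0, beta=0.0, delta_f=0.02,
                      vx=20.0, vx_dot=0.0, p=p)
```

A small positive steering input with all other states at zero gives, as expected, positive yaw acceleration and a growing float angle.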

3.3 Road Geometry


We start this section by defining the road variables and expressing a typical way
to parameterize a road. The section is continued with a derivation of a new model
for the road that makes use of the dynamic motion of the ego vehicle.
Background

The most essential component in describing the road geometry is the curvature
c, which we will define as the curvature of the white lane marking to the left of
the ego vehicle. An overall description of the road geometry is given in Figure 3.
The heading angle ψR is defined as the tangent of the road at the level of the ego
vehicle in the world reference frame W , see Figure 4. The angle δr is the angle
between the tangent of the road curvature and the longitudinal axis of the ego
vehicle. Note that this angle can be measured by sensors mounted on the ego
vehicle. Furthermore, we define δR as
δR , δr − β, (12)
i.e., the angle between the ego vehicles direction of motion (velocity vector) and
the road curvature tangent.
The road curvature c is typically parameterized according to
c(xc ) = c0 + c1 xc , (13)

Figure 3: Relations between one leading vehicle in OTn , the ego vehicle and
the road. The distance between the ego vehicle’s longitudinal x-axis and the
white lane to its left is lR (t). The leading vehicle’s distance to the lane mark-
ing is lTn and its heading angle in the road frame R is δTn . The lane width is
w.

where xc is the position along the road in a road aligned coordinate frame and
xc = 0 at the vehicle’s center of gravity. Furthermore, c0 describes the local cur-
vature at the ego vehicle position and c1 is the distance derivative (hence, the
rate of change) of c0 . It is common to make use of a road aligned coordinate
frame when deriving an estimator for the road geometry, a good overview of this
approach is given in Eidehall (2007). There are several advantages using road
aligned coordinate frames, particularly the motion models of the other vehicles
on the same road can be greatly simplified. However, the flexibility of the motion
models is reduced and basic dynamic relations such as Newton’s and Euler’s laws
cannot be directly applied. Since we are using a single track model of the ego
vehicle, we will make use of a Cartesian coordinate frame. A good polynomial
approximation of the shape of the road curvature is given by
yE = lR + xE tan δr + (c0 /2)(xE )² + (c1 /6)(xE )³ , (14)

Figure 4: Infinitesimal segments of the road curvature duR and the driven
path du are shown together with the angles δR = ψR − (ψE + β).

where lR (t) is defined as the time dependent distance between the ego vehicle and
the lane marking to the left, see e.g., Dickmanns and Mysliwetz (1992); Eidehall
(2007).
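Evaluating (14) for a given look-ahead distance is a one-liner; the following sketch (our own, with illustrative numbers) predicts the lateral offset of the left lane marking:

```python
import math

def lane_marking_offset(xE, lR, delta_r, c0, c1):
    """Lateral position y^E of the left lane marking at look-ahead
    distance x^E, the polynomial road-shape approximation (14)."""
    return (lR + xE * math.tan(delta_r)
            + 0.5 * c0 * xE**2 + (c1 / 6.0) * xE**3)

# On a straight road (c0 = c1 = 0, delta_r = 0) the marking stays at
# the constant lateral offset lR at every look-ahead distance.
y = lane_marking_offset(xE=50.0, lR=1.8, delta_r=0.0, c0=0.0, c1=0.0)
# y == 1.8
```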
The following dynamic model is often used for the road
ċ0 = vc1 , (15a)
ċ1 = 0, (15b)
which can be interpreted as a velocity dependent integration. It is interesting
to note that (15) reflects the way in which roads are commonly built Dickmanns
and Mysliwetz (1992). However, we will now derive a new dynamic model for
the road, that makes use of the road geometry introduced above.
A New Dynamic Road Model
Assume that duR is an infinitesimal part of the road curvature or an arc of the
road circle with the angle dψR , see Figure 4. A segment of the road circle can be
described as
1
duR = dψR , (16)
c0
which after division with the infinitesimal change in time dt is given by
duR /dt = (1/c0 ) dψR /dt. (17)
Assuming that the left hand side can be reformulated according to
duR /dt = vx cos (ψR − ψE ) ≈ vx , (18)
this yields
vx = (1/c0 ) ψ̇R . (19)
The angle ψR can be expressed as
ψR = ψE + β + δR , (20)
by rewriting (12). Re-ordering equation (19) and using the derivative of (20) to
substitute ψ̇R yields
δ̇R = c0 vx − (ψ̇E + β̇), (21)
which by substituting β̇ with (11b) according to
δ̇R = c0 vx − β (−Cαf cos δf − Cαr − v̇x m)/(m vx ) + ψ̇E (Cαf lf cos δf − Cαr lr )/(vx² m) − Cαf sin δf /(m vx ) (22)
results in a differential equation of the road angle δR . A similar relation has been
used in Dickmanns and Mysliwetz (1992); Litkouhi et al. (1993).

We also need a differential equation for the road curvature, which can be found
by differentiating (21) w.r.t. time,
δ̈R = ċ0 vx + c0 v̇x − ψ̈E − β̈. (23)
From the above equation we have
ċ0 = (δ̈R + ψ̈E + β̈ − c0 v̇x )/vx . (24)
Let us assume that δ̈R = 0. Furthermore, differentiating β̇, from (11b), w.r.t. time
and inserting this together with ψ̈E , given in (11a), into the above expression
yields the differential equation
ċ0 = 1/(Izz m² vx⁴ ) ( Cαr² (Izz + lr² m)(−ψ̇E lr + βvx ) + Cαf² (Izz + lf² m)(ψ̇E lf + (β − δf )vx )
+ Cαr Izz m(−3ψ̇E v̇x lr + 3β v̇x vx + ψ̇E vx² ) + v̇x Izz m² vx (2β v̇x + vx (ψ̇E − c0 vx ))
+ Cαf (Cαr (Izz − lf lr m)(ψ̇E lb − 2ψ̇E lr + 2βvx − δf vx )
+ Izz m(3ψ̇E v̇x lf + (3β − 2δf )v̇x vx + (δ̇f + ψ̇E )vx² )) ) (25)

for the road curvature.

In this model c0 is defined at the ego vehicle and thus describes the currently
driven curvature, whereas for the curvature described by the state-space model
(15) and by the polynomial (13) it is not entirely obvious where c0 is defined.

Finally, we need a differential equation describing how the distance lR (t) between
the ego vehicle and the lane markings changes over time. Assume again an in-
finitesimal arc du of the circumference describing the ego vehicle’s curvature. By
contemplating Figure 4 we have
dlR = du sin δR , (26)
where δR is the angle between the ego vehicle’s velocity vector and the road. Di-
viding this equation with an infinitesimal change in time dt and using du/dt = v
yield the differential equation
l˙R = vx sin (δR + β), (27)
which concludes the derivation of the road geometry model.
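In compact form, the new road model propagates δR via (21) and lR via (27); a minimal sketch (ours, using the compact form (21) with β̇ supplied externally rather than the expanded expression (22)):

```python
import math

def road_state_derivatives(c0, delta_R, beta, vx, psiE_dot, beta_dot):
    """Time derivatives of the road angle delta_R, via the compact
    form (21), and of the lane offset lR, via (27). beta_dot is the
    float-angle derivative supplied by the ego-vehicle model (11b)."""
    delta_R_dot = c0 * vx - (psiE_dot + beta_dot)  # (21)
    lR_dot = vx * math.sin(delta_R + beta)         # (27)
    return delta_R_dot, lR_dot

# Driving exactly along a straight road: both derivatives are zero.
dR_dot, lR_dot = road_state_derivatives(
    c0=0.0, delta_R=0.0, beta=0.0, vx=25.0, psiE_dot=0.0, beta_dot=0.0)
```

With a nonzero curvature and no yaw motion, δR starts to grow at the rate c0 vx, matching the interpretation of (21).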

3.4 Leading Vehicles

The leading vehicles are also referred to as targets Tn . The coordinate frame
Tn moving with target n has its origin located in OTn , as we previously saw in
Figure 3. It is assumed that the leading vehicles are driving on the road, quite
possibly in a different lane. More specifically, it is assumed that they are follow-
ing the road curvature and thus that their heading is in the same direction as the
tangent of the road.

For each target Tn , there exists a coordinate frame Vn , with its origin OV at the
position of the sensor. Hence, the origin is the same for all targets, but the coordi-
nate frames have different heading angles ψVn . This angle, as well as the distance
lVn , depend on the targets position in space. From Figure 3 it is obvious that,
d^W_EW + d^W_VnE + d^W_TnVn − d^W_TnW = 0, (28)
or more explicitly,
x^W_EW + ls cos ψE + lVn cos ψVn − x^W_TnW = 0, (29a)
y^W_EW + ls sin ψE + lVn sin ψVn − y^W_TnW = 0. (29b)
Let us now define the relative angle to the leading vehicle as
δVn ≜ ψVn − ψE . (30)
It is worth noticing that this angle can be measured by a sensor mounted on the
vehicle.

The target Tn is assumed to have zero lateral velocity in the Vn frame, i.e., ẏ Vn = 0,
since it is always fixed to the xVn -axis. If we transform this relation to the world
frame W , using the geometry of Figure 1 we have
" #
R^VnW · ḋ^W_TnW = [ · , 0 ]T , (31)
where the top equation of the vector equality is non-descriptive and the bottom
equation can be rewritten as
−ẋ^W_TnW sin ψVn + ẏ^W_TnW cos ψVn = 0. (32)
The velocity vector of the ego vehicle is applied in the center of gravity OE . The
derivative of (29) is used together with the velocity components of the ego vehicle
and (32) to get an expression for the derivative of the relative angle to the leading
vehicle w.r.t. time according to
(δ̇Vn + ψ̇E )lVn + ψ̇E ls cos δVn + vx sin(β − δVn ) = 0. (33)
This equation is rewritten, forming the differential equation
δ̇Vn = −(ψ̇E ls cos δVn + vx sin(β − δVn ))/lVn − ψ̇E (34)
of the relative angle δVn to the leading vehicle.
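A direct transcription of (34) (our own sketch, with illustrative numbers):

```python
import math

def azimuth_rate(psiE_dot, ls, delta_Vn, vx, beta, lVn):
    """Relative-angle derivative (34) for a leading vehicle at range
    lVn; ls is the distance from the center of gravity to the sensor."""
    return (-(psiE_dot * ls * math.cos(delta_Vn)
              + vx * math.sin(beta - delta_Vn)) / lVn
            - psiE_dot)

# Driving straight (no yaw rate, no float angle) directly behind the
# target: the relative angle stays constant.
rate = azimuth_rate(psiE_dot=0.0, ls=1.5, delta_Vn=0.0,
                    vx=20.0, beta=0.0, lVn=40.0)
# rate == 0.0
```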

3.5 Summarizing the Dynamic Model


The state-space models derived in the previous sections are nonlinear and they
are given in continuous time. Hence, in order to make use of these equations in
the ekf we will first linearize them and then make use of (4) in order to obtain
a state-space model in discrete time according to (1). This is a rather standard
procedure, see e.g., Gustafsson (2000); Rugh (1996). At each time step k, the non-
linear state-space model is linearized by evaluating the Jacobian (i.e., the partial
derivatives) of the f (xk , uk )-matrix introduced in (4) at the current estimate x̂k|k .
It is worth noting that this Jacobian is straightforwardly computed off-line using
symbolic or numerical software, such as Mathematica. Hence, we will not go
through the details here. However, for future reference we will briefly summarize
the continuous-time dynamic model here.
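A symbolically derived Jacobian can be cross-checked with finite differences; the sketch below (ours, not the off-line symbolic derivation used by the authors) approximates the Jacobian of any f(x, u) by central differences:

```python
import numpy as np

def jacobian(f, x, u, eps=1e-6):
    """Numerical Jacobian of f(x, u) with respect to x, evaluated by
    central differences; useful as a sanity check on a Jacobian that
    was derived off-line with symbolic software."""
    x = np.asarray(x, dtype=float)
    n = x.size
    fx = np.asarray(f(x, u))
    J = np.zeros((fx.size, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx, u))
                   - np.asarray(f(x - dx, u))) / (2 * eps)
    return J

# Check on a toy model with a known linearization.
f = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1] + u])
J = jacobian(f, np.array([1.0, 2.0]), 0.5)
# J is approximately [[1.0, 0.1], [0.0, 1.0]]
```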
In the final state-space model the three parts (ego vehicle, road and leading vehicles) of the dynamic model are augmented, resulting in a state vector of dimension 6 + 4 · (number of leading vehicles). Hence, the size of the state vector varies with time, depending on the number of leading vehicles that are tracked at a specific instant of time.
The ego vehicle model is described by the following states,
x_E = [ψ̇_E  β  l_R]^T,   (35)
i.e., the yaw rate, the float angle and the distance to the left lane marking. The
front wheel angle δf , which is calculated from the measured steering wheel angle,
and the ego vehicle longitudinal velocity vx and acceleration v̇x are modeled as
input signals,
u_k = [δ_f  v_x  v̇_x]^T.   (36)
The nonlinear state-space model ẋ_E = g_E(x, u) is given by

g_E(x, u) = \begin{bmatrix}
\beta \frac{-C_{\alpha f} l_f \cos\delta_f + C_{\alpha r} l_r}{I_{zz}} - \dot{\psi}_E \frac{C_{\alpha f} l_f^2 \cos\delta_f + C_{\alpha r} l_r^2}{I_{zz} v_x} + \frac{C_{\alpha f} l_f \tan\delta_f}{I_{zz}} \\
-\beta \frac{C_{\alpha f} \cos\delta_f + C_{\alpha r} + \dot{v}_x m}{m v_x} - \dot{\psi}_E \left(1 + \frac{C_{\alpha f} l_f \cos\delta_f - C_{\alpha r} l_r}{v_x^2 m}\right) + \frac{C_{\alpha f} \sin\delta_f}{m v_x} \\
v_x \sin(\delta_R + \beta)
\end{bmatrix}.   (37)
The corresponding differential equations were previously given in (11a), (11b)
and (27), respectively.
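As an illustration of how (37) can be transcribed for simulation or filtering, the sketch below evaluates g_E numerically; the function signature, the parameter dictionary and the treatment of δ_R as a known quantity are assumptions made for this sketch only.

```python
import math

def g_E(x, u, p):
    """Ego-vehicle dynamics (37); x = [yaw rate, float angle, left-lane offset]."""
    psi_dot, beta, l_R = x
    delta_f, vx, vx_dot = u                # front wheel angle, speed, acceleration
    m, Izz = p['m'], p['Izz']
    lf, lr = p['lf'], p['lr']
    Caf, Car = p['Caf'], p['Car']
    cd = math.cos(delta_f)
    yaw_acc = (beta * (-Caf * lf * cd + Car * lr) / Izz
               - psi_dot * (Caf * lf ** 2 * cd + Car * lr ** 2) / (Izz * vx)
               + Caf * lf * math.tan(delta_f) / Izz)
    beta_dot = (-beta * (Caf * cd + Car + vx_dot * m) / (m * vx)
                - psi_dot * (1.0 + (Caf * lf * cd - Car * lr) / (vx ** 2 * m))
                + Caf * math.sin(delta_f) / (m * vx))
    lR_dot = vx * math.sin(p['delta_R'] + beta)   # road angle passed in as known
    return [yaw_acc, beta_dot, lR_dot]
```

The numerical values one would put into `p` (mass, inertia, axle distances, cornering stiffnesses) are vehicle-specific; those of the test vehicle are not reproduced here.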
148 Paper B Joint Ego-Motion and Road Geometry Estimation

The states describing the road x_R are the road curvature c_0 at the ego vehicle position, the angle δ_R between the ego vehicle's direction of motion and the road tangent, and the width of the lane w, i.e.,
x_R = [c_0  δ_R  w]^T.   (38)
The differential equations for c0 and δR were given in (25) and (22), respectively.
When it comes to the width of the current lane w, we have
ẇ = 0, (39)
motivated by the fact that w does not change as fast as the other variables, i.e.,
the nonlinear state-space model ẋ_R = g_R(x, u) is given by

g_R(x, u) = \begin{bmatrix}
\dot{c}_0 \\
c_0 v_x + \beta \frac{C_{\alpha f} \cos\delta_f + C_{\alpha r} + \dot{v}_x m}{m v_x} + \dot{\psi}_E \frac{C_{\alpha f} l_f \cos\delta_f - C_{\alpha r} l_r}{v_x^2 m} - \frac{C_{\alpha f} \sin\delta_f}{m v_x} \\
0
\end{bmatrix}.   (40)
A target is described by the following states, azimuth angle δVn , lateral position
lTn of the target, distance between the target and the ego vehicle lVn and relative
velocity between the target and the ego vehicle l˙Vn . Hence, the state vector is
given by
x_T = [δ_{V_n}  l_{T_n}  \dot{l}_{V_n}  l_{V_n}]^T.   (41)
The derivative of the azimuth angle was given in (34). It is assumed that the lead-
ing vehicle’s lateral velocity is small, implying that l˙Tn = 0 is a good assumption
(compare with Figure 3). Furthermore, it can be assumed that the leading vehicle accelerates similarly to the ego vehicle, thus \ddot{l}_{V_n} = 0 (compare with, e.g., Eidehall (2007)). The state-space model ẋ_T = g_T(x, u) of a leading vehicle (target) is

g_T(x, u) = \begin{bmatrix}
-\frac{\dot{\psi}_E l_s \cos\delta_{V_n} + v_x \sin(\beta - \delta_{V_n})}{l_{V_n}} - \dot{\psi}_E \\
0 \\
0 \\
\dot{l}_{V_n}
\end{bmatrix}.   (42)
Note that the dynamic models given in this section are nonlinear in u.
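The varying state dimension can be handled by simply concatenating the per-target states onto the ego and road states; the sketch below shows the target dynamics (42) together with that bookkeeping (function names are illustrative, not from the paper).

```python
import math

def g_T(xT, psi_dot_E, beta, vx, ls):
    """Target dynamics (42); xT = [delta_Vn, l_Tn, l_Vn_dot, l_Vn]."""
    d, l_T, l_dot, l_V = xT
    d_dot = (-(psi_dot_E * ls * math.cos(d) + vx * math.sin(beta - d)) / l_V
             - psi_dot_E)
    # Lateral velocity and relative acceleration of the target assumed zero.
    return [d_dot, 0.0, 0.0, l_dot]

def augmented_dim(n_targets):
    """3 ego states + 3 road states + 4 states per tracked leading vehicle."""
    return 6 + 4 * n_targets
```

In a filter implementation the Jacobian of the augmented model is block-diagonal apart from the coupling of the target dynamics to the ego states, which keeps the linearization cheap even for several targets.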

4 Measurement Model

The measurement model (1b) describes how the measurements y_k relate to the
state variables x_k. In other words, it describes how the measurements enter the
estimator. We will make use of superscript m to denote measurements. Let us
start by introducing the measurements relating directly to the ego vehicle motion,
by defining
y_1 = [ψ̇^m_E  a^m_y]^T,   (43)

where ψ̇Em and aym are the measured yaw rate and the measured lateral accelera-
tion, respectively. They are both measured with the ego vehicle’s inertial sensor
in the center of gravity (CoG). The ego vehicle lateral acceleration in the CoG is
ay = vx (ψ̇E + β̇) + v̇x β. (44)
By replacing β̇ with the expression given in (11b) and at the same time assuming
that v̇x β ≈ 0 we obtain
a_y = v_x(\dot{\psi}_E + \dot{\beta}) = -\beta \frac{C_{\alpha f} \cos\delta_f + C_{\alpha r} + m\dot{v}_x}{m} + \dot{\psi}_E \frac{-C_{\alpha f} l_f \cos\delta_f + C_{\alpha r} l_r}{m v_x} + \frac{C_{\alpha f}}{m}\sin\delta_f.   (45)
From this it is clear that the measurement of the lateral acceleration contains
information about the ego vehicle states. Hence, the measurement equation corresponding to (43) is given by

h_1 = \begin{bmatrix}
\dot{\psi}_E \\
-\beta \frac{C_{\alpha f} \cos\delta_f + C_{\alpha r} + m\dot{v}_x}{m} + \dot{\psi}_E \frac{-C_{\alpha f} l_f \cos\delta_f + C_{\alpha r} l_r}{m v_x} + \frac{C_{\alpha f}}{m}\sin\delta_f
\end{bmatrix}.   (46)

The vision system provides measurements of the road geometry and the ego vehicle's position on the road according to

y_2 = [c^m_0  δ^m_r  w^m  l^m_R]^T   (47)
and the corresponding measurement equations are given by
h_2 = [c_0  (δ_R + β)  w  l_R]^T.   (48)
An obvious choice would have been to use a single state δ_r instead of the sum δ_R + β; however, we have chosen to split these since we are interested in estimating both quantities.
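The measurement equations (46) and (48) are likewise straightforward to transcribe; in this sketch the function names and the parameter dictionary are illustrative only.

```python
import math

def h1(x, u, p):
    """Inertial measurement model (46): predicted yaw rate and lateral accel."""
    psi_dot, beta = x
    delta_f, vx, vx_dot = u
    m, lf, lr = p['m'], p['lf'], p['lr']
    Caf, Car = p['Caf'], p['Car']
    ay = (-beta * (Caf * math.cos(delta_f) + Car + m * vx_dot) / m
          + psi_dot * (-Caf * lf * math.cos(delta_f) + Car * lr) / (m * vx)
          + Caf * math.sin(delta_f) / m)
    return [psi_dot, ay]

def h2(c0, d_R, beta, w, l_R):
    """Vision measurement model (48): curvature, relative angle, width, offset."""
    return [c0, d_R + beta, w, l_R]
```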

In order to include measurements of a leading vehicle we require that it is detected both by the radar and the vision system. The range l_{V_n} and the range rate
l˙Vn are measured by the radar. The azimuth angle is also measured by the radar,
but not used directly in this framework. Instead, the accuracy of the angle esti-
mate is improved by using the camera information. We will not describe these
details here, since it falls outside the scope of this work. The corresponding mea-
surement vector is
y_3 = [δ^m_{V_n}  \dot{l}^m_{V_n}  l^m_{V_n}]^T.   (49)
(49)
Since these are state variables, the measurement equation is obviously
h_3 = [δ_{V_n}  \dot{l}_{V_n}  l_{V_n}]^T.   (50)
The fact that the motion of the leading vehicles reveals information about the
road geometry allows us to make use of their motion in order to improve the
road geometry estimate. This will be accomplished by introducing a nontrivial

artificial measurement equation according to

h_4 = l_R + (δ_R + β) l_{V_n} \cos δ_{V_n} + \frac{c_0}{2} (l_{V_n} \cos δ_{V_n})^2 + \frac{l_{T_n}}{\cos δ_{T_n}},   (51)
which is derived from Figure 3 and describes the predicted lateral distance of a leading vehicle in the ego vehicle's coordinate frame E. In order to model the road curvature we introduce the road coordinate frame R, with its origin O_R on the white lane marking to the left of the ego vehicle. This implies that the frame R is moving with the frame E of the ego vehicle. The angle δ_{T_n} ≜ ψ_{T_n} − ψ_R is derived by considering the road's slope at the position of the leading vehicle, i.e.,

δ_{T_n} = \arctan\frac{dy_R}{dx_R} = \arctan c_0 x_R,   (52)
where x_R = x^R_{T_nR}, see Figure 3. The Cartesian x-coordinate of the leading vehicle T_n in the R-frame is

x^R_{T_nR} = x^E_{T_nE} - l_s \approx l_{V_n} \frac{\cos δ_{V_n}}{\cos δ_r}.   (53)
The sensors only provide the range l^m_{V_n} and the azimuth angle δ^m_{V_n}. Hence, the corresponding quasi-measurement is

y_4 = l^m_{V_n} \sin(δ^m_{V_n}),   (54)
describing the measured lateral distance to a leading vehicle in the ego vehicle’s
coordinate frame. This might seem a bit ad hoc at first. However, the validity of
the approach has recently been justified in the literature, see e.g., Teixeira et al.
(2007).
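The predicted lateral distance (51) and the quasi-measurement (54) translate directly into code; the sketch below is a plain transcription with hypothetical function names.

```python
import math

def h4(l_R, d_R, beta, c0, l_T, d_V, l_V, d_T):
    """Predicted lateral position (51) of a leading vehicle in the E frame."""
    x = l_V * math.cos(d_V)                    # longitudinal distance component
    return l_R + (d_R + beta) * x + 0.5 * c0 * x ** 2 + l_T / math.cos(d_T)

def y4(l_V_m, d_V_m):
    """Quasi-measurement (54): lateral distance from measured range and azimuth."""
    return l_V_m * math.sin(d_V_m)
```

In the filter, y4 is treated as a measurement of h4; the residual between the two is what lets the motion of the leading vehicles correct the road-geometry states.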

5 Experiments and Results

The experiments presented in this section are based on measurements acquired


on public roads in Sweden during normal traffic conditions. The test vehicle
is a Volvo S80 equipped with a forward looking 77 GHz mechanically scanning
fmcw radar and a forward looking vision sensor (camera), measuring the distances and angles to the targets. The image sensor includes object and lane detection and provides, for example, the lane curvature. Information about the ego vehicle motion, such as the steering wheel angle, yaw rate, etc., was acquired directly from the can bus.

Before stating the main results in this section we outline how to estimate the parameters of the ego vehicle and how the filter is tuned. Subsequently we state the results of the ego vehicle validation. We compare our road curvature estimates with two other sensor fusion approaches as well as one road model.

Figure 5: Comparing the simulated result of the nonlinear state-space model (black) with measured data (gray) of a validation data set. The upper plot shows the yaw rate [rad/s] and the lower shows the lateral acceleration [m/s²], both over 0–80 s.

5.1 Parameter Estimation and Filter Tuning


Most of the ego vehicle's parameters, such as the dimensions, the mass and the moment of inertia, were provided by the vehicle manufacturer. Since the cornering stiffness is a parameter which describes the properties between road and tire, it has to be estimated for the given set of measurements. An on-line method to estimate the cornering stiffness parameters using recursive least squares is presented in Lundquist and Schön (2009). However, in the present work an exhaustive search was performed off-line, using a batch of measurements to estimate Cαf and Cαr. A state-space model with the differential equations given in (11a) and (11b) and with the yaw rate ψ̇_E and the float angle β in the state vector was used for this purpose. Furthermore, the front wheel angle δ_f and the ego vehicle longitudinal velocity v_x were modeled as input signals. The measurements were provided by the yaw rate ψ̇^m_E and the lateral acceleration a^m_y. The corresponding measurement equation was given in (46). The data used to identify the cornering stiffness parameters was split into two parts, one estimation part and one validation part. This facilitates cross-validation, where the parameters are estimated using the estimation data and the quality of the estimates can then be assessed using the validation data (Ljung, 1999).
The approach is further described in Lundquist and Schön (2008b). The resulting state-space model with the estimated parameters was validated using the validation data, and the result is given in Figure 5.
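The exhaustive search with cross-validation described above can be sketched generically as follows; `sim_error` is a hypothetical callback returning the simulation error of the single track model for a given (Cαf, Cαr) pair on a data batch.

```python
def fit_cornering_stiffness(candidates, sim_error, est_data, val_data):
    """Exhaustive search over (Caf, Car) pairs with cross-validation:
    pick the pair with the smallest simulation error on the estimation
    data, then report its error on the held-out validation data."""
    best = min(candidates, key=lambda c: sim_error(c, est_data))
    return best, sim_error(best, val_data)
```

The point of the held-out validation error is exactly the cross-validation idea from Ljung (1999): a pair that merely overfits the estimation batch will show a large validation error.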
The process and measurement noise covariances are the design parameters in the extended Kalman filter (ekf). It is assumed that the covariances are diagonal and that there are no cross correlations between the measurement noise and the process noise. The present filter has ten states and ten measurement signals, which implies that 20 parameters have to be tuned. The tuning was started using physical intuition about the errors in the process equations and the measurement signals. In a second step, the covariance parameters were tuned by minimizing the root mean square error (rmse) between the estimated curvature ĉ_0 and the reference curvature c_0. The estimated curvature was obtained by running the filter on the estimation data set. The calculation of the reference value is described in Eidehall and Gustafsson (2006). The chosen design parameters were validated on a different data set and the results are discussed in the subsequent sections.
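The second tuning step, minimizing the curvature rmse over a grid of candidate covariance values, might look as follows; `run_filter` is a hypothetical callback that runs the ekf with a given process-noise variance and returns the curvature estimates.

```python
import math

def rmse(x_hat, x_ref):
    """Root mean square error between estimate and reference."""
    n = len(x_hat)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x_hat, x_ref)) / n)

def tune_variance(run_filter, grid, c0_ref):
    """Pick the noise variance whose filter run gives the smallest
    curvature RMSE against the reference curvature."""
    return min(grid, key=lambda q: rmse(run_filter(q), c0_ref))
```

With 20 parameters, one would in practice tune a few at a time around the physically motivated starting point rather than grid over all of them jointly.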

5.2 Validation Using Ego Vehicle Signals


The state variables of the ego vehicle are, according to (35), the yaw rate, the float angle and the distance to the left lane marking. The estimated and the measured yaw rate signals are, as expected, very similar. As described in Section 5.1, the parameters of the vehicle model were optimized with respect to the yaw rate, hence it is no surprise that the fusion method decreases the residual further. A measurement sequence acquired on a rural road is shown in Figure 6a. Note that the same measurement sequence is used in Figures 5 to 7, which makes it easier to compare the estimated states.
The float angle β is estimated, but there is no reference or measurement sig-
nal to compare it to. An example is shown in Figure 6b. For velocities above
30–40 km/h, the float angle appears more or less like the mirror image of the
yaw rate, and by comparing with Figure 6a, we can conclude that the sequence is
consistent.
The measurement signal of the distance to the left white lane marking, l^m_R, is produced by the vision system's olr (optical lane recognition). Bad lane markings or certain weather conditions can cause errors in the measurement signal. The estimated state l_R of the fusion approach is very similar to the pure olr signal.

5.3 Road Curvature Estimation


An essential idea of the sensor fusion approach introduced in this paper is to make use of the single track ego vehicle model in order to produce better estimates of the road curvature. In this section we will compare this approach to approaches based on other models of the ego vehicle and the road geometry.
Fusion 1 is the sensor fusion approach shown in this paper.
Fusion 2 is a similar approach, thoroughly described in Eidehall (2007). An
important difference to fusion 1 is that the ego vehicle is modeled with a
constant velocity model, which is less complex. The float angle β is not
estimated. Furthermore, the road is modeled according to (15) and a road
aligned coordinate frame is used. This method is similar to the approaches
used in e.g., Zomotor and Franke (1997); Gern et al. (2000, 2001).
Fusion 3 combines the ego vehicle model of fusion 1 with the road model of
fusion 2, i.e., substituting (25) by (15) and introducing the seventh state
c1. Furthermore, a Cartesian coordinate frame is used. This method, but

Figure 6: A comparison between the ego vehicle's measured (gray) and estimated (black dashed) yaw rate using the sensor fusion approach in this paper is shown in (a). The estimated float angle β for the same data sequence is shown in (b).

without considering the leading vehicles is similar to the ones described in


e.g., Dickmanns and Mysliwetz (1992) and Behringer (1997).
Model is the ego vehicle and road state-space model given in this paper, de-
scribed by the motion models (37) and (40) and the measurement models
(46) and (48), without the extended Kalman filter.
The curvature estimate ĉ_0 from the sensor fusion approaches, the model and the raw measurement from the optical lane recognition are compared to a reference value. The reference value is computed off-line using a geometric method described in Eidehall and Gustafsson (2006), which applies a least squares curve fit to a sliding window. The entire data set, i.e., also future values of the ego vehicle movement, is used to derive the reference value. The accuracy of the method was validated on a test track, where the ground truth is well defined, and the results are good, as reported in Eidehall and Gustafsson (2006).
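A minimal flavour of such a sliding-window least-squares fit is sketched below; this is not the actual method of Eidehall and Gustafsson (2006), only an illustration of fitting y = a + bx + (c/2)x² to a window of positions and reading off the curvature c.

```python
def quad_ls_curvature(xs, ys):
    """Fit y = a + b*x + (c/2)*x^2 by least squares; return the curvature c.
    Normal equations S p = t with regressor phi = (1, x, x^2)."""
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for x, y in zip(xs, ys):
        phi = (1.0, x, x * x)
        for i in range(3):
            t[i] += phi[i] * y
            for j in range(3):
                S[i][j] += phi[i] * phi[j]
    # Solve the 3x3 system by Gauss-Jordan elimination (no pivoting;
    # adequate for a well-conditioned window).
    for i in range(3):
        piv = S[i][i]
        for j in range(i, 3):
            S[i][j] /= piv
        t[i] /= piv
        for k in range(3):
            if k != i:
                f = S[k][i]
                for j in range(i, 3):
                    S[k][j] -= f * S[i][j]
                t[k] -= f * t[i]
    return 2.0 * t[2]  # p[2] = c/2, so curvature c = 2*p[2]
```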
A typical result of a comparison is shown in Figure 7. The data stems from a rural
road, which explains the curvature values. It can be seen that the estimates from
the sensor fusion approaches give better results than using the olr alone, as was
expected. The olr estimate is rather noisy compared to the fused estimates. This
is not surprising, since the raw olr has less information. A camera view from
the curve at time 32 s is shown in Figure 8a.

Figure 7: Results from the three fusion approaches (fusion 1 solid black line, fusion 2 gray line and fusion 3 dotted line) and the olr (dashed gray line), showing the curvature estimate ĉ_0. As can be seen, the curvature estimate can be improved by taking the other vehicles (gray line) and the ego vehicle's driven curvature into account (solid black line). The model (dash-dotted) estimates the derivative of the curvature and the absolute position is not measured, which leads to the illustrated bias. The dashed line is the reference curvature.

The curvature estimate from the state-space model described in this paper is denoted by model and is shown as a dash-dotted black line. The absolute position is not measured, which leads to a clearly visible bias in the estimate of c_0. The bias is apparent in Figure 7, and it also leads to a large rmse value in Table 1. Fusion 3 also delivers a decent result, but it is interesting to notice that the estimate seems to follow the incorrect olr at time 35 s. The same behavior holds for fusion 2 in Figure 7.

To get a more aggregate view of the performance, we provide the root mean square error (rmse) for longer measurement sequences in Table 1. The fusion approaches improve the road curvature estimate by making use of the information about the leading vehicles that is available from the radar and the vision systems. However, since we are interested in the curvature estimate also when there are no leading vehicles in front of the ego vehicle, this case will be studied as well. It is straightforward to study this case; it is just a matter of not providing the measurements of the leading vehicles to the algorithms. The rmse values found without information about the leading vehicles are given in the columns marked no in Table 1.

These results should ideally be compared to data where information about the leading vehicles is considered, but during the 78 min drive there was not always another car in front of us. Only for about 50 % of the time did other vehicles exist which we could track. Hence, for the sake of comparability we give the rmse values for those sequences where at least one leading vehicle was tracked, bearing in mind that these are based on only about 50 % of the data. The corresponding columns in Table 1 are marked only. Finally, we also give the rmse

Figure 8: Two different camera views are shown. In (a) the lane markings are excellent and the leading vehicles are close and clearly visible. This is the traffic situation at 32 s in Figures 5 to 7. Although the circumstances seem perfect, the olr, fusion 2 and fusion 3 have problems estimating the curvature, as seen in Figure 7. The traffic situation shown in (b) is more demanding, mainly due to the weather conditions and the large distance to the leading vehicle.

values for the complete data set, where other vehicles were considered whenever possible.

It is interesting to see that the advantage of fusion 1, which uses more accurate ego vehicle and road models, over fusion 2 is particularly noticeable when driving alone on a rural road: the rmse for fusion 1 is then 1.18 · 10⁻³, whereas the rmse for fusion 2 is 2.91 · 10⁻³. The reason for this is first of all that we are driving on a rather curvy road, which implies that any additional information will help improve the curvature estimate. Here, the additional information is the improved ego vehicle and road models used in fusion 1. Furthermore, the fact that there are no leading vehicles that could aid the fusion algorithm when driving alone creates a greater disadvantage for fusion 2, since it is its main additional

Table 1: Comparison of the root mean square error (rmse) of the road curvature c_0 in [1/m] for the three fusion approaches and the pure measurement signal olr, for two longer measurement sequences acquired on public roads in Sweden. Three cases were considered: using only those measurements where a leading vehicle could be tracked (only), using the knowledge of the leading vehicles' positions whenever possible (possible), or not at all, thereby simulating the lonely driver (no). Note that all rmse values should be multiplied by 10⁻³.

                         Highway                     Rural road
Time                     44 min                      34 min
olr [10⁻³/m]             0.385                       3.60
Model [10⁻³/m]           0.356                       2.10
Leading vehicles used?   only    possible   no      only    possible   no
Fusion 1 [10⁻³/m]        0.176   0.184      0.189   1.48    1.13       1.18
Fusion 2 [10⁻³/m]        0.231   0.228      0.230   1.53    2.84       2.91
Fusion 3 [10⁻³/m]        0.203   0.210      0.205   1.32    2.01       1.94

information. Fusion 3, which uses the single track vehicle model of fusion 1, but
the road model of fusion 2, seems to position itself between those two.
Comparing the rural road results based only on those measurements where other vehicles were tracked, we see an interesting pattern. The curvature estimates of fusion 2 and fusion 3 are improved by the additional information, but the estimate of fusion 1 is degraded. The error values of the three fusion approaches are also in the same range. The explanation of this behavior can be found by analyzing the measurement sequences. If the leading vehicle is close by, as for example in Figure 8a, it helps improve the results. However, if the leading vehicle is more distant, the curvature at its position might not be the same as at the ego vehicle's position, which leads to a degraded result. In Lundquist and Schön (2008a) the authors presented preliminary results based on much shorter measurement sequences, where the leading vehicles were closer and the estimate of fusion 1 was improved by the existence of leading vehicles. The problem could be solved by letting the measurement noise e of the measurement equation (51) depend on the distance to the leading vehicle.
The highway is rather straight and, as expected, not much accuracy could be gained by using an improved dynamic vehicle model. It is worth noticing that the olr's rural road rmse value is about ten times higher than its highway value, but the model's rmse increases only about six times when comparing the rural road values with the highway values. Comparing the rmse values in the columns marked possible, the rmse for fusion 1 also increases about six times, but that of fusion 2 increases as much as twelve times when comparing the highway measurements with the rural road.
A common problem with these road estimation methods is that it is hard to distinguish between the case when the leading vehicle is entering a curve and the case when the leading vehicle is performing a lane change.
this paper the information about the ego vehicle motion, the olr and the leading
vehicles is weighted together in order to form an estimate of the road curvature.
The fusion approach in this paper produces an estimate of the lateral position
lTn of the leading vehicle which seems reasonable. The results are thoroughly
described in Lundquist and Schön (2008a).

6 Conclusions
In this contribution we have derived a method for joint ego-motion and road geometry estimation. The presented sensor fusion approach combines the information from sensors present in modern premium cars, such as radar, camera and imu, with a dynamic model. This model, which includes a new dynamic motion model of the road, is the core of this contribution. The road geometry is estimated by considering the information from the optical lane recognition of the camera, the position of the leading vehicles, obtained by the radar and the camera, and by making use of a dynamic ego vehicle motion model, which takes imu data and the steering wheel angle as input. If one of these three parts fails, for example because there are no leading vehicles or the lane markings are bad, as in Figure 8b, then the sensor fusion framework will still deliver an estimate.

The presented sensor fusion framework has been evaluated together with two other fusion approaches on real and relevant data from both highway and rural roads in Sweden. The data consists of 78 min of driving under various road conditions, also including snow-covered pavement. The approach presented in this paper obtained the best results in all situations when compared to the other approaches, and this is most prominent when driving alone on a rural road. If there are no leading vehicles that can be used, the improved road and ego vehicle models still support the road geometry estimation and deliver a more accurate result.

Bibliography
B. D. O. Anderson and J. B. Moore. Optimal Filtering. Information and system
science series. Prentice Hall, Englewood Cliffs, NJ, USA, 1979.
Y. Bar-Shalom, X. Rong Li, and T. Kirubarajan. Estimation with Applications to
Tracking and Navigation. John Wiley & Sons, New York, NY, USA, 2001.
R. Behringer. Visuelle Erkennung und Interpretation des Fahrspurverlaufes
durch Rechnersehen für ein autonomes Straßenfahrzeug, volume 310 of
Fortschrittsberichte VDI, Reihe 12. VDI Verlag, Düsseldorf, Germany, 1997.
Also as: PhD Thesis, Universität der Bundeswehr, 1996.
F. Bengtsson and L. Danielsson. Designing a real time sensor data fusion system
with application to automotive safety. In Proceedings of the World Congress
on Intelligent Transportation Systems and Services, New York, USA, November
2008.
S. S. Blackman and R. Popoli. Design and Analysis of Modern Tracking Systems.
Artech House, Norwood, MA, USA, 1999.
E. D. Dickmanns. Dynamic Vision for Perception and Control of Motion.
Springer, London, UK, 2007.
E. D. Dickmanns and B. D. Mysliwetz. Recursive 3-D road and relative ego-state
recognition. IEEE Transactions on pattern analysis and machine intelligence,
14(2):199–213, February 1992.
E. D. Dickmanns and A. Zapp. A curvature-based scheme for improving road
vehicle guidance by computer vision. In Proceedings of the SPIE Conference
on Mobile Robots, volume 727, pages 161–198, Cambridge, MA, USA, 1986.
A. Eidehall. Tracking and threat assessment for automotive collision avoidance.
PhD thesis No 1066, Linköping Studies in Science and Technology, Linköping,
Sweden, January 2007.
A. Eidehall and F. Gustafsson. Obtaining reference road geometry parameters
from recorded sensor data. In Proceedings of the IEEE Intelligent Vehicles
Symposium, pages 256–260, Tokyo, Japan, June 2006.
A. Eidehall, J. Pohl, and F. Gustafsson. Joint road geometry estimation and vehicle
tracking. Control Engineering Practice, 15(12):1484–1494, December 2007.
A. Gern, U. Franke, and P. Levi. Advanced lane recognition - fusing vision and
radar. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages 45–51,
Dearborn, MI, USA, October 2000.
A. Gern, U. Franke, and P. Levi. Robust vehicle tracking fusing radar and vision.
In Proceedings of the international conference of multisensor fusion and inte-
gration for intelligent systems, pages 323–328, Baden-Baden, Germany, August
2001.

N. J. Gordon, D. J. Salmond, and A. F. M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F, Radar and Signal Processing, 140(2):107–113, April 1993.
T. Gumpp, D. Nienhuser, R. Liebig, and J.M. Zollner. Recognition and tracking
of temporary lanes in motorway construction sites. In Proceedings of the IEEE
Intelligent Vehicles Symposium, pages 305–310, Xi’an, China, June 2009.
F. Gustafsson. Adaptive Filtering and Change Detection. John Wiley & Sons, New
York, USA, 2000.
H. Hahn. Rigid Body Dynamics of Mechanisms 1: Theoretical Basis, volume 1. Springer, Berlin, Germany, 2002.
U. Hofmann, A. Rieder, and E.D. Dickmanns. Ems-vision: application to hybrid
adaptive cruise control. In Proceedings of the IEEE Intelligent Vehicles Sym-
posium, pages 468–473, Dearborn, MI, USA, 2000.
U. Hofmann, A. Rieder, and E.D. Dickmanns. Radar and vision data fusion for
hybrid adaptive cruise control on highways. Machine Vision and Applications,
14(1):42–49, 2003.
A. H. Jazwinski. Stochastic processes and filtering theory. Mathematics in science
and engineering. Academic Press, New York, USA, 1970.
T. Kailath, A. H. Sayed, and B. Hassibi. Linear Estimation. Information and
System Sciences Series. Prentice Hall, Upper Saddle River, NJ, USA, 2000.
K. Kaliyaperumal, S. Lakshmanan, and K. Kluge. An algorithm for detecting
roads and obstacles in radar images. IEEE Transactions on Vehicular Technol-
ogy, 50(1):170–182, January 2001.
R. E. Kalman. A new approach to linear filtering and prediction problems. Trans-
actions of the ASME, Journal of Basic Engineering, 82:35–45, 1960.
U. Kiencke and L. Nielsen. Automotive Control Systems. Springer, Berlin, Hei-
delberg, Germany, 2 edition, 2005.
A. Kirchner and C. Ameling. Integrated obstacle and road tracking using a laser
scanner. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages
675–681, Dearborn, MI, USA, October 2000.
A. Kirchner and T. Heinrich. Model based detection of road boundaries with
a laser scanner. In Proceedings of the IEEE Intelligent Vehicles Symposium,
pages 93–98, Stuttgart, Germany, October 1998.
S. Lakshmanan, K. Kaliyaperumal, and K. Kluge. Lexluther: an algorithm for
detecting roads and obstacles in radar images. In Proceedings of the IEEE
Conference on Intelligent Transportation Systems, pages 415–420, Boston, MA,
USA, November 1997.
B.B. Litkouhi, A.Y. Lee, and D.B. Craig. Estimator and controller design for lan-
etrak, a vision-based automatic vehicle steering system. In Proceedings of the
IEEE Conference on Decision and Control, volume 2, pages 1868–1873, San


Antonio, Texas, December 1993.
L. Ljung. System identification, Theory for the user. System sciences series. Pren-
tice Hall, Upper Saddle River, NJ, USA, 2 edition, 1999.
H. Loose, U. Franke, and C. Stiller. Kalman particle filter for lane recognition on
rural roads. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages
60–65, Xi’an, China, June 2009.
C. Lundquist and T. B. Schön. Road geometry estimation and vehicle tracking
using a single track model. In Proceedings of the IEEE Intelligent Vehicles
Symposium, pages 144–149, Eindhoven, The Netherlands, June 2008a.
C. Lundquist and T. B. Schön. Road geometry estimation and vehicle tracking
using a single track model. Technical Report LiTH-ISY-R-2844, Department
of Electrical Engineering, Linköping University, Sweden, Linköping, Sweden,
March 2008b.
C. Lundquist and T. B. Schön. Recursive identification of cornering stiffness pa-
rameters for an enhanced single track model. In Proceedings of the IFAC Sym-
posium on System Identification, pages 1726–1731, Saint-Malo, France, July
2009.
C. Lundquist and T. B. Schön. Joint ego-motion and road geometry estimation.
Information Fusion, 12:253–263, October 2011.
C. Lundquist, U. Orguner, and T. B. Schön. Tracking stationary extended objects
for road mapping using radar measurements. In Proceedings of the IEEE In-
telligent Vehicles Symposium, pages 405–410, Xi’an, China, June 2009.
B. Ma, S. Lakshmanan, and A. O. Hero. Simultaneous detection of lane and pave-
ment boundaries using model-based multisensor fusion. IEEE Transactions on
Intelligent Transportation Systems, 1(3):135–147, September 2000.
J. C. McCall and M. M. Trivedi. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation. IEEE Transactions on Intelligent Transportation Systems, 7(1):20–37, March 2006.
M. Mitschke and H. Wallentowitz. Dynamik der Kraftfahrzeuge. Springer, Berlin,
Heidelberg, 4 edition, 2004.
A. Muller, M. Manz, M. Himmelsbach, and H.J. Wunsche. A model-based object
following system. In Proceedings of the IEEE Intelligent Vehicles Symposium,
pages 242–249, Xi’an, China, June 2009.
M. Nikolova and A. Hero. Segmentation of a road from a vehicle-mounted radar and accuracy of the estimation. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages 284–289, Dearborn, MI, USA, October 2000.
Robert Bosch GmbH, editor. Automotive Handbook. SAE Society of Automotive
Engineers, 6 edition, 2004.

W. J. Rugh. Linear System Theory. Information and system sciences series. Pren-
tice Hall, Upper Saddle River, NJ, USA, 2 edition, 1996.
S. F. Schmidt. Application of state-space methods to navigation problems. Ad-
vances in Control Systems, 3:293–340, 1966.
T. B. Schön. Estimation of Nonlinear Dynamic Systems – Theory and Applica-
tions. PhD thesis No 998, Linköping Studies in Science and Technology, De-
partment of Electrical Engineering, Linköping University, Sweden, February
2006.
R. Schubert, G. Wanielik, and K. Schulze. An analysis of synergy effects in an
omnidirectional modular perception system. In Proceedings of the IEEE Intel-
ligent Vehicles Symposium, pages 54–59, Xi’an, China, June 2009.
G. L. Smith, S. F. Schmidt, and L. A. McGee. Application of statistical filter the-
ory to the optimal estimation of position and velocity on board a circumlunar
vehicle. Technical Report TR R-135, NASA, 1962.
J. Sparbert, K. Dietmayer, and D. Streller. Lane detection and street type clas-
sification using laser range images. In Proceedings of the IEEE Conference on
Intelligent Transportation Systems, pages 454–459, Oakland, CA, USA, August
2001.
B. O. S. Teixeira, J. Chandrasekar, L. A. B. Torres, L. A. Aguirre, and D. S. Bern-
stein. State estimation for equality-constrained linear systems. In Proceedings
of the IEEE Conference on Decision and Control, pages 6220–6225, New Or-
leans, LA, USA, December 2007.
A. Watanabe, T. Naito, and Y. Ninomiya. Lane detection with roadside structure
using on-board monocular camera. In Proceedings of the IEEE Intelligent Ve-
hicles Symposium, pages 191–196, Xi’an, China, June 2009.
A. Wedel, U. Franke, H. Badino, and D. Cremers. B-spline modeling of road sur-
faces for freespace estimation. In Proceedings of the IEEE Intelligent Vehicles
Symposium, pages 828–833, Eindhoven, The Netherlands, June 2008.
H. Weigel, P. Lindner, and G. Wanielik. Vehicle tracking with lane assignment by
camera and Lidar sensor fusion. In Proceedings of the IEEE Intelligent Vehicles
Symposium, pages 513–520, Xi’an, China, June 2009.
W. S. Wijesoma, K. R. S. Kodagoda, and A. P. Balasuriya. Road-boundary de-
tection and tracking using ladar sensing. IEEE Transactions on Robotics and
Automation, 20(3):456–464, June 2004.
J. Y. Wong. Theory Of Ground Vehicles. John Wiley & Sons, New York, USA, 3
edition, 2001.
Z. Zomotor and U. Franke. Sensor fusion for improved vision based lane recogni-
tion and object tracking with range-finders. In Proceedings of the IEEE Confer-
ence on Intelligent Transportation Systems, pages 595–600, Boston, MA, USA,
November 1997.
Paper C
Extended Target Tracking Using
Polynomials With Applications to
Road-Map Estimation
Authors: Christian Lundquist, Umut Orguner and Fredrik Gustafsson

Edited version of the paper:

C. Lundquist, U. Orguner, and F. Gustafsson. Extended target tracking using polynomials with applications to road-map estimation. IEEE Transactions on Signal Processing, 59(1):15–26, January 2011c.

The paper presents data that was previously published in:

C. Lundquist, U. Orguner, and F. Gustafsson. Estimating polynomial structures from radar data. In Proceedings of the International Conference on Information Fusion, Edinburgh, UK, July 2010b.

C. Lundquist, U. Orguner, and T. B. Schön. Tracking stationary extended objects for road mapping using radar measurements. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages 405–410, Xi’an, China, June 2009.
Extended Target Tracking Using
Polynomials With Applications to Road-Map
Estimation
Christian Lundquist, Umut Orguner and Fredrik Gustafsson

Dept. of Electrical Engineering,
Linköping University,
SE–581 83 Linköping, Sweden
[email protected], [email protected], [email protected]
Abstract
This paper presents an extended target tracking framework which
uses polynomials in order to model extended objects in the scene of in-
terest from imagery sensor data. State space models are proposed for
the extended objects, which enable the use of Kalman filters in track-
ing. Different methodologies of designing measurement equations are
investigated. A general target tracking algorithm that utilizes a spe-
cific data association method for the extended targets is presented.
The overall algorithm must always use some form of prior informa-
tion in order to detect and initialize extended tracks from the point
tracks in the scene. This aspect of the problem is illustrated on a real
life example of road-map estimation from automotive radar reports
along with the results of the study.

1 Introduction
This contribution relates to tracking applications where a sensor platform ob-
serves objects in local coordinates (x,y,z). We focus on the case of a moving sen-
sor platform and stationary objects, but the framework can be generalized to any
target tracking scenario. Some of these detected objects are points, while others
are extended objects. Tracking point-shaped objects has been the main focus in
the tracking literature for a long time Blackman and Popoli (1999); Bar-Shalom
and Li (1995). In particular in the computer vision literature, tracking extended
objects has become popular during the last decades. The literature models
extended objects as closed contours: simple geometrical shapes such as rectangles
and ellipses, or arbitrary shapes such as splines Bartels et al. (1987) and active
contours Blake and Isard (1998). We propose to complement these models of
extended objects with detection and data association methods for sensors providing
point measurements. Letting x be the longitudinal, y the lateral and z the vertical

direction, possible applications include the following three automotive and two
naval examples:
• Road curvature models y = f (x; θ) based on lane markers detected by a
vision sensor. Such models are needed in all future collision avoidance sys-
tems, in particular lane assist systems Gustafsson (2009).
• Similar extended objects along the road side y = f (x; θ) based on radar
mainly. Such a model is a useful complement to the vision based road
model above, or for autonomous vehicles driving in unstructured environ-
ments. This is our application example in Section 6.
• Road inclination models z = f (x; θ) based on a combination of vision and
radar with good azimuth resolution. This can be used in look-ahead control
Hellström (2010) for fuel efficient cruise controllers.
• Shore models x = f (y; θ) based on a radar to describe the shape of islands
and parts of the coastline. This can be used in robust navigation systems
based on map matching, see for instance the extended approach in Karlsson
and Gustafsson (2006); Svensson (2009).
• Vertical shape models z = f (y; θ) based on a camera describing the altitude
variations, for instance, of an island as seen from the sensor platform. Ap-
plications are similar to the previous point.
The common theme in these examples is that the model is parameterized in a
given model class (we will study polynomials), and that both the input and out-
put are observed variables, leading to an errors in variables (eiv) Söderström
(2007) problem. This is, to the best of the authors' knowledge, an original ap-
proach to describe extended objects. We derive a modified measurement up-
date for the eiv model, and demonstrate its increased accuracy compared to the
straightforward errors in output (eio) approximation for filtering applications.
The detection procedure requires some form of prior information on the shape
of the extended object because many different models can suit the same set of
measurements.
This paper is outlined as follows. We begin with an overview of the related liter-
ature in Section 2. Connections to several different fields of research are pointed
out and this makes this section fairly long. However, the subsequent sections are
organized such that the uninterested reader can skip this section without causing
a loss of understanding for the rest of the paper. A basic problem formula-
tion of point and extended target tracking using polynomials as extended objects
is given in Section 3. The state space representations that we are going to use
in extended target tracking are introduced in Section 4. Our general algorithm
for tracking point and extended targets is summarized in Section 5. Section 6
introduces the practical application. We apply the ideas presented in earlier sec-
tions to the road map estimation problem utilizing the automotive radar reports.
The experiments done and the results obtained are presented in Section 6.3. The
paper is concluded in Section 7.
2 Related Literature
Extended target tracking has been studied in the literature for some years and
we give a brief overview in Section 2.1. Section 2.2 summarizes a couple of esti-
mation methods proposed earlier for curves and contours. One of our proposed
solutions is related to an estimation framework known as the errors in
variables problem, and some references for this are given in Section 2.3. Finally,
different solutions related to our motivating application example of road edge
estimation are discussed in Section 2.4.

2.1 Extended Target Tracking


A target is denoted extended whenever the target extent is larger than the sensor
resolution, i.e., it is large enough to give rise to more than one measurement per
time step. Common methods used to track extended objects are very similar to
the ones used for tracking a group of targets moving in formation, see e.g., Ristic
et al. (2004). The bibliography Waxman and Drummond (2004) provides a com-
prehensive overview of existing literature in the area of group and cluster track-
ing. One conventional method is to model the target as a set of point features in
a target reference frame, each of which may contribute at most one sensor mea-
surement Mahler (2007). The exact location of a feature in the target reference
frame is often assumed uncertain. The motion of an extended target is modeled
through the process model in terms of the translation and rotation of the target
reference frame relative to a world coordinate frame, see e.g., Dezert (1998). As is
the case most of the time, not all measurements arise from features belonging to
targets and some are due to false detections (clutter). The association hypotheses
are derived through some data association algorithm. In Vermaak et al. (2005)
a method is proposed where the association hypotheses are included in the state
vector and the output of the tracking filter is a joint posterior density function of
the state vector and the association hypotheses.
Instead of modeling the target as a number of point features, the target may be
represented by a spatial probability distribution. It is more likely that a measure-
ment comes from a region of high spatial density than from a sparse region. In
Gilholm and Salmond (2005); Gilholm et al. (2005) it is assumed that the num-
ber of received target and clutter measurements are Poisson distributed, hence
several measurements may originate from the same target. Each target related
measurement is an independent sample from the spatial distribution. The spatial
distribution is preferable where the point source models are poor representations
of reality, that is, in the cases where the measurement generation is diffuse. In
Boers et al. (2006) a similar approach is presented, but since raw data is consid-
ered, no data association hypotheses are needed.
In many papers dealing with the shape of a target it is assumed that the sensor
is also able to measure one or more dimensions of the target’s extent. A high-
resolution radar sensor may provide measurements of a target's down-range ex-
tent, i.e., the extension of the objects along the line-of-sight. The information of
the target’s extent is incorporated in the tracking filter aiding the tracking pro-
cess to maintain track on the target when it is close to other objects. An elliptical
target model, to represent an extended target or a group of targets, is proposed
in Drummond et al. (1990). The idea is improved in Salmond and Parr (2003)
with a solution based on ekf, in Ristic and Salmond (2004) based on ukf and
in Angelova and Mihaylova (2008) with a solution based on a Monte Carlo (mc)
algorithm.

2.2 Contours and Curves


Modeling of extended targets in this work is very similar to active contours Blake
and Isard (1998) and snakes Kass et al. (1988), which model the outline of an
object based on 2 dimensional image data, studied extensively in computer vi-
sion. The active contour models are also polynomials represented by B-splines.
Moreover, the probabilistic processing of active contours from image data pio-
neered mainly by Blake and Isard (1998) uses a similar estimation
framework, such as Kalman or particle filters. The underlying low-level processing
involved, on the other hand, assumes reasonably the existence of image data from
which features (like Harris corners) or feature maps (like Harris measures etc.)
can be extracted. The so-called Condensation algorithm Isard and Blake (1998),
for example, searches for suitable sets of features in the image data (or in the
feature map) iteratively for each of its different hypotheses (particles) in the cal-
culation of the likelihoods. In this respect, the active contour framework would
be an important candidate for doing tracking with the raw sensor data without
any thresholding, which is known as “track-before-detect” in the target tracking
literature. The approach presented in this work carries the distinction of working
only with a single set of features provided most of the time by thresholding of the
raw sensor data (like conventional target tracking) and hence is mainly aimed at
applications where the users (of the sensors) either are unable to reach or do not
have the computation power to work with the raw sensor data.
The mapping of the boundaries of complex objects is also achieved in Lazarus
et al. (2010) using splinegon techniques and a range sensor, e.g., a laser,
mounted on moving vehicles.

2.3 Errors In Variables


The use of “noisy” measurements as model parameters in this work makes this
paper directly related to the errors in variables framework, where some of the
independent variables are contaminated by noise. Such a case is common in the
field of system identification Ljung (1999) when not only the system outputs, but
also the inputs are measured imperfectly. Examples of such eiv representations
can be found in Söderström (2007) and a representative solution is proposed
in Guidorzi et al. (2003); Diversi et al. (2003). The Kalman filter cannot be di-
rectly applied to such eiv processes as discussed in Guidorzi et al. (2003),Roorda
and Heij (1995). Nevertheless, an extension of the Kalman filter, where the state
and the output are optimally estimated in the presence of state and output noise
is proposed in Diversi et al. (2005). The eiv problem is also closely related to the
total least squares methodology, which is well described in the papers van Huf-
fel and Zha (1993); Boley and Sutherland (1993); De Moor (1993), in the survey
paper Markovsky and van Huffel (2007) and in the textbook Björck (1996).

2.4 Application
An important part of this contribution is the application of the ideas to the real
world example of road map estimation using automotive radar sensor reports.
In this context, the point and extended targets existing in the scene would repre-
sent e.g., lamp posts and guardrails along the road, respectively. Our earlier work
Lundquist et al. (2009) contains the findings from similar tracking scenarios. This
also connects our work to the approaches presented in the literature for the prob-
lem of road edge estimation. The third order approximation of the two sided
(left and right) “clothoid model” has been used in connection with Kalman fil-
ters in Kirchner and Heinrich (1998) and Polychronopoulos et al. (2004) for laser
scanner measurements and radar measurements, respectively. In Lundquist and
Schön (2009) two road edge models are proposed, one of which is very similar
to the model proposed in Polychronopoulos et al. (2004), and used a constrained
quadratic program to solve for the parameters. A linear model represented by its
midpoint and orientation (one for each side of the road) is utilized by Wijesoma
et al. (2004) with lidar sensing for tracking road-curbs. Later, the results of Wi-
jesoma et al. (2004) were enhanced in Kodagoda et al. (2006) with the addition
of image sensors. A similar extended Kalman filtering based solution is given
in Fardi et al. (2003), where a circular road edge modeling framework is used.
Recently, the particle filters (also referred to as condensation in image and video
processing) have been applied to the road edge estimation problem in Wang et al.
(2008) with a hyperbolic road model.

3 Problem Formulation
The examples mentioned in Section 1 all involve 2-dimensional extended objects,
modeled by any configuration of two local coordinates (x, y or z). In the following
parts of the paper, we describe all models in the x-y-plane without any loss of
generality, i.e., the coordinates are easily exchanged to describe objects in any
other space.
Suppose we are given the sensor measurements in batches of Cartesian x and y
coordinates as follows:

\[ z_k \triangleq \left\{ \begin{bmatrix} x^{(i)} & y^{(i)} \end{bmatrix}^T \right\}_{i=1}^{N_{z_k}} \tag{1} \]

for discrete time instants k = 1, . . . , K. In many cases in reality (e.g., radar, laser
and stereo vision) and in the practical application considered in this work, the
sensor provides range r and azimuth angle ψ given as

\[ \bar{z}_k \triangleq \left\{ \begin{bmatrix} r^{(i)} & \psi^{(i)} \end{bmatrix}^T \right\}_{i=1}^{N_{z_k}}. \tag{2} \]

In such a case we assume that some suitable standard polar to Cartesian conver-
sion algorithm is used to convert these measurements into the form (1).
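One standard way to carry out this conversion, which also propagates the polar covariance (13) to a Cartesian covariance via the Jacobian of the transformation, can be sketched as follows (a minimal illustration with our own function name, not the exact routine used here):

```python
import numpy as np

def polar_to_cartesian(r, psi, sigma_r, sigma_psi):
    """Convert a polar measurement (r, psi) with covariance
    Sigma_p = diag(sigma_r^2, sigma_psi^2), as in (13), to a Cartesian
    measurement z = [x, y]^T with covariance Sigma_c, as in (1) and (12)."""
    z = np.array([r * np.cos(psi), r * np.sin(psi)])
    # Jacobian of (r, psi) -> (r cos psi, r sin psi)
    J = np.array([[np.cos(psi), -r * np.sin(psi)],
                  [np.sin(psi),  r * np.cos(psi)]])
    Sigma_p = np.diag([sigma_r**2, sigma_psi**2])
    Sigma_c = J @ Sigma_p @ J.T  # in general not diagonal, cf. (12)
    return z, Sigma_c
```

Note that for a target straight ahead (ψ = 0) the resulting Cartesian covariance is diagonal with variances σr² and (r σψ)², which is why sensors with very different range and bearing accuracies produce elongated, non-symmetric uncertainty ellipses.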

The measurements z are noisy point measurements originating from one of the
following sources

• Point sources with state vector
\[ x_P \triangleq \begin{bmatrix} x & y \end{bmatrix}^T, \tag{3} \]
which represents the true Cartesian position of the source.

• Extended sources which are represented by an nth order polynomial given as
\[ y = a_0 + a_1 x + a_2 x^2 + \ldots + a_n x^n \tag{4} \]
in the range [x_start, x_end], where x_a ≜ [a_0 a_1 ··· a_n]^T are the polynomial coefficients and [x y]^T are planar Cartesian coordinates. Note that the coordinate y is a function of x and that the direction of the coordinate frame is chosen dependent on the application in mind. The state vector is defined as
\[ x_L \triangleq \begin{bmatrix} x_a^T & x_{\mathrm{start}} & x_{\mathrm{end}} \end{bmatrix}^T. \tag{5} \]

• False detections

The state models considered in this contribution are described, in general, by the
state space equations
\[ x_{k+1} = f(x_k, u_k) + w_k, \tag{6a} \]
\[ y_k = h(x_k, u_k) + e_k, \tag{6b} \]
where x, u and y denote the state, the input signal, and the output signal, while
w ∼ N (0, Q) and e ∼ N (0, R) are the process and measurement noise, respec-
tively. The use of an input signal is explained in Section 4.1. The time index is
denoted with k.

One of the main contributions of our work is the specific state space represen-
tation we propose for extended targets which is presented in Section 4. A poly-
nomial is generally difficult to handle in a filter, since the noisy measurements
are distributed arbitrarily along the polynomial. In this respect, the measure-
ment model (6b) contains parts of the actual measurement vector as parameters.
For the sake of simplicity, the tracked objects are assumed stationary, resulting
in very simple motion models (6a). However, the motion or process model may
easily be substituted and chosen arbitrarily to best fit its purpose.
The aim is to obtain posterior estimates of the point sources $\{x_{P,k|k}^{(i)}\}_{i=1}^{N_P}$ and the extended sources $\{x_{L,k|k}^{(i)}\}_{i=1}^{N_L}$ given all the measurements $\{\{z_\ell^{(i)}\}_{i=1}^{N_{z_\ell}}\}_{\ell=1}^{k}$ recursively in time.
4 State Space Representation for an Extended Object

This section first investigates the measurement model to be used with extended
sources in Section 4.1 and then makes short remarks about the state dynamics
equation for polynomials in Section 4.2.

4.1 Measurement Model

This section describes how a point measurement z relates to the state xL of an ex-
tended object. For this purpose, we derive a measurement model in the form (6b),
which describes the relation between the state variables xL , defined in (5), and
output signals y and input signals u. Notice that, for the sake of simplicity, we
also drop the subscripts k specifying the time stamps of the quantities.

The general convention in modeling is to make the definitions


\[ y \triangleq z, \qquad u \triangleq \emptyset, \tag{7} \]
where ∅ denotes the empty set meaning that there is no input. In this setting, it
is extremely difficult, if not impossible, to find a measurement model connecting
the outputs y to the states xL in the form of (6b). Therefore, we are forced to use
other selections for y and u. Here, we make the selection
\[ y \triangleq y, \qquad u \triangleq x. \tag{8} \]
Although being quite a simple selection, this choice results in a rather convenient
linear measurement model in the state partition xa ,
\[ y = H_a(u) x_a + e_a, \tag{9} \]
where $H_a(u) = \begin{bmatrix} 1 & x & x^2 & \cdots & x^n \end{bmatrix}$. It is the selection in (8) rather than (7) that
allows us to use the standard methods in target tracking with clever modifica-
tions. Such a selection as (8) is also in accordance with the errors in variables
representations, where measurement noise is present in both the outputs and
inputs, i.e., the observation z can be partitioned according to
\[ z = \begin{bmatrix} u \\ y \end{bmatrix}. \tag{10} \]
We express the measurement vector given in (1) in terms of a noise free variable
z0 which is corrupted by additive measurement noise z̃ according to
\[ z = z_0 + \tilde{z}, \qquad \tilde{z} \sim \mathcal{N}(0, \Sigma_c), \tag{11} \]
where the covariance Σc can be decomposed as
\[ \Sigma_c = \begin{bmatrix} \Sigma_x & \Sigma_{xy} \\ \Sigma_{xy} & \Sigma_y \end{bmatrix}. \tag{12} \]
Note that, in the case the sensor provides measurements only in polar coordi-
nates (2), one has to convert both the measurement z̄ and the measurement noise
covariance
\[ \Sigma_p = \operatorname{diag}(\sigma_r^2, \sigma_\psi^2) \tag{13} \]
into Cartesian coordinates, which is a rather standard procedure. Note that, in
such a case, the resulting Cartesian measurement covariance Σc is, in general, not
necessarily diagonal and hence Σxy of (12) might be non-zero.

Since the model (9) is conditionally linear given the measurements, the Kalman
filter measurement update formulas can be used to incorporate the information
in z into the extended source state xL . An important question in this regard is
what would be the measurement covariance of the measurement noise term ea in
(9). This problem can be tackled in two ways.

Errors in Output (EIO) scheme

Although the input terms defined in (8) are measured quantities (and hence af-
fected by the measurement noise), so that the model parameters Ha (u) are
uncertain, such errors are neglected in a range of practical applications where
parameters are obtained from measurements. Thus, the first scheme we present
here neglects all the errors in Ha (u). In this case, it can easily be seen that
\[ e_a = \tilde{y} \tag{14} \]
and therefore the covariance Σa of ea is
\[ \Sigma_a = \Sigma_y. \tag{15} \]
This type of approach was also used in our previous work Lundquist et al. (2009)
which presented earlier versions of the findings in this paper.

Errors in Variables (EIV) scheme

Neglecting the errors in the model parameters Ha (u) can cause overconfidence in
the estimates of recursive filters and can actually make data association difficult
in tracking applications (by causing too small gates). We, in this second scheme,
use a simple methodology to take the uncertainties in Ha (u) into account in line
with eiv framework. Assuming that the elements of the noise free quantity z0
satisfy the polynomial equation exactly, we get
\[ y - \tilde{y} = H_a(u - \tilde{u}) x_a, \tag{16a} \]
\[ y - \tilde{y} = \begin{bmatrix} 1 & x - \tilde{x} & (x - \tilde{x})^2 & \cdots & (x - \tilde{x})^n \end{bmatrix} x_a, \tag{16b} \]
which can be approximated using a first order Taylor expansion resulting in
\[ y \approx H_a(u) x_a - \widetilde{H}_a(u)\, \tilde{x}\, x_a + \tilde{y} \tag{17a} \]
\[ \;\;\, = H_a(u) x_a + \tilde{h}_a(x_a, u) \begin{bmatrix} \tilde{x} \\ \tilde{y} \end{bmatrix}, \tag{17b} \]
with
\[ H_a(u) = \begin{bmatrix} 1 & x & x^2 & \cdots & x^n \end{bmatrix}, \tag{17c} \]
\[ \widetilde{H}_a(u) = \begin{bmatrix} 0 & 1 & 2x & \cdots & n x^{n-1} \end{bmatrix}, \tag{17d} \]
\[ \tilde{h}_a(x_a, u) = \begin{bmatrix} -a_1 - 2 a_2 x - \cdots - n a_n x^{n-1} & 1 \end{bmatrix}. \tag{17e} \]
Hence, the noise term e_a of (9) is given by
\[ e_a = \tilde{y} - \widetilde{H}_a(u)\, \tilde{x}\, x_a = \tilde{h}_a(x_a, u) \begin{bmatrix} \tilde{x} \\ \tilde{y} \end{bmatrix} \tag{18} \]
and its covariance is given by
\[ \Sigma_a = \mathrm{E}(e_a e_a^T) = \Sigma_y + \widetilde{H}_a x_a \Sigma_x x_a^T \widetilde{H}_a^T - 2 \widetilde{H}_a x_a \Sigma_{xy} = \tilde{h}_a(x_a, u)\, \Sigma_c\, \tilde{h}_a^T(x_a, u). \tag{19} \]
Note that the eiv covariance Σa depends on the state variable xa .
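Since y is scalar, (19) is just a scalar quadratic form in the 1 × 2 row vector (17e); a numerical sketch (our naming), with Σc as in (12):

```python
import numpy as np

def eiv_covariance(x_a, x, Sigma_c):
    """Evaluate Sigma_a = h_tilde(x_a, u) Sigma_c h_tilde(x_a, u)^T of (19)
    at u = x, with h_tilde = [-a_1 - 2 a_2 x - ... - n a_n x^(n-1), 1]."""
    a = np.asarray(x_a, dtype=float)
    n = a.size - 1
    # derivative of the polynomial at x
    dpoly = sum(i * a[i] * x ** (i - 1) for i in range(1, n + 1))
    h_tilde = np.array([-dpoly, 1.0])
    return float(h_tilde @ Sigma_c @ h_tilde)
```

When the local slope of the polynomial is zero, the expression collapses to the eio covariance Σy of (15); the difference between the two schemes grows with the slope and with Σx.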

Example

An example is used to compare the performances of the eio and the eiv schemes.
A second order polynomial with the true states
\[ x_a = \begin{bmatrix} a_0 & a_1 & a_2 \end{bmatrix}^T = \begin{bmatrix} -20 & -0.5 & 0.008 \end{bmatrix}^T \tag{20} \]
is used and 100 uniformly distributed measurements are extracted in the range
x = [0, 200]. The measurements are given on polar form as in (2) and zero mean
Gaussian measurement noise with covariance as in (13) is added using the param-
eters
Sensor 1: σr1 = 0.5, σψ1 = 0.05, (21a)
Sensor 2: σr2 = 10, σψ2 = 0.05, (21b)
Sensor 3: σr3 = 10, σψ3 = 0.005, (21c)
to simulate different type of sensors. Sensor 1 represents a sensor with better
range than bearing accuracy, whereas the opposite holds for sensor 3. Sensor
2 has about the same range and bearing accuracies. The true polynomial and
the measurements are shown in Figure 1. The measurements are transformed
into Cartesian coordinates. The following batch methods (Nzk = 100, K = 1) are
applied to estimate the states

• Least squares (ls eio) estimator,

• Weighted least squares (wls eio) with eio covariance,

• Weighted least squares (wls eiv) with eiv covariance. The state xa used
in (19) is estimated through a least squares solution in advance.

Furthermore, the states are estimated recursively (Nzk = 1, K = 100) using

• Kalman filter (kf eio) with eio covariance,

• Kalman filter (kf eiv) with eiv covariance. The predicted state estimate
x̂a,k|k−1 is used in (19) to derive Ra,k .
[Figure 1 appears here: a plot of the true second order polynomial together with the noisy measurements from the three sensors.]

Figure 1: The true polynomial and the extracted measurements are shown.
Covariance ellipses are drawn for a few measurements to show the direction
of the uncertainties.

• Unscented Kalman filter (ukf eiv) with sigma points derived by augment-
ing the state vector with the noise terms
\[ \tilde{z} = \begin{bmatrix} \tilde{u} & \tilde{y} \end{bmatrix}^T, \qquad \tilde{z} \sim \mathcal{N}(0, \Sigma_c), \tag{22} \]
to consider the error in all directions (as in the eiv case), and to deal with
the nonlinear transformations of the noise terms.
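A single kf eiv measurement update can be sketched as follows: a standard Kalman update for the coefficient states in which the scalar measurement variance is recomputed from (19) at the predicted estimate (an illustration under our own naming, not the exact implementation evaluated here):

```python
import numpy as np

def kf_eiv_update(x_hat, P, z, Sigma_c):
    """One Kalman measurement update of the polynomial coefficients x_a,
    with the eiv measurement variance of (19) evaluated at x_hat."""
    u, y = z                              # observation z = [u, y]^T, cf. (10)
    n = x_hat.size - 1
    H = u ** np.arange(n + 1)             # H_a(u) of (17c)
    dH = np.arange(n + 1) * np.r_[0.0, u ** np.arange(n)]  # (17d)
    h_tilde = np.array([-float(dH @ x_hat), 1.0])
    R = float(h_tilde @ Sigma_c @ h_tilde)  # state-dependent variance, (19)
    S = float(H @ P @ H) + R              # innovation covariance
    K = (P @ H) / S                       # Kalman gain
    x_new = x_hat + K * (y - float(H @ x_hat))
    P_new = P - np.outer(K, H @ P)
    return x_new, P_new
```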
The covariance of the process noise is set to zero, i.e., Q = 0, since the target is
not moving. The initial conditions of the state estimate are selected as
\[ \hat{x}_{a,0} = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T, \tag{23a} \]
\[ P_{a,0} = \frac{\sigma_a^2}{\kappa} \left( \operatorname{diag}(x_a) \right)^2, \tag{23b} \]
where κ = 3 and σa2 = 8. Note that every estimate’s initial uncertainty is set to be
a scaled version of its true value in (23b).
The rmse values for 1000 Monte Carlo (mc) simulations are given in Table 1.
The rmse of the eiv schemes is clearly lower than the other methods, especially
for sensors 1 and 3 with non-symmetric measurement noise. This result justifies
the extra computations required for calculating the eiv covariance. There is only
a small difference in the performance between the kf eiv and the ukf eiv for
the second order polynomial. This is a clear indication that the simple Taylor
series approximation used in deriving the eiv covariance is accurate enough.
Table 1: rmse values for the extended object in Figure 1. The sensor configurations are defined in (21).

Sensor              ls eio   wls eio  wls eiv  kf eio   kf eiv   ukf eiv
1   a0               5.10     0.55     0.45     0.55     0.48     0.49
    a1               0.18     0.034    0.024    0.034    0.029    0.029
    a2 · 10^-3       1.06     0.29     0.24     0.29     0.31     0.32
2   a0               4.90     3.54     3.35     2.90     2.44     2.36
    a1               0.20     0.11     0.099    0.10     0.11     0.10
    a2 · 10^-3       1.21     0.77     0.66     0.78     0.83     0.80
3   a0               2.53     30.51    3.51     30.47    4.81     4.35
    a1               0.068    0.45     0.072    0.45     0.12     0.12
    a2 · 10^-3       0.39     1.27     0.40     1.26     0.62     0.60

Remark 1 (Robustness). We use only Gaussian noise representations in this paper, which
would directly connect our work to the (implicit) minimization of ls-type cost functions
which are known to be non-robust against outliers. The algorithms we are going to pro-
pose for estimation are going to be protected, to some extent, against this by classical
techniques like gating Blackman and Popoli (1999); Bar-Shalom and Li (1995). Extra ro-
bustness features can be accommodated by the use of Huber norms Huber and Ronchetti (2009)
and ℓ1 norms etc. in the estimation, which were considered to be outside the scope of this
paper.

Complete Measurement Model for an Extended Object


Up to this point, we have only considered how the observation z relates to the
state component xa . It remains to discuss the relation between the observation
and the start point xstart and the end point xend of the polynomial. The measurement
information must only be used to update these components of the state if the new
observations of the extended source lie outside the range of the polynomial. We
can define the following (measurement dependent) measurement matrix for this
purpose:
\[ H_{se} = \begin{cases} \begin{bmatrix} 1 & 0 \end{bmatrix} & \text{if } x \le x_{\mathrm{start},k|k-1} \\[2pt] \begin{bmatrix} 0 & 1 \end{bmatrix} & \text{if } x \ge x_{\mathrm{end},k|k-1} \\[2pt] \begin{bmatrix} 0 & 0 \end{bmatrix} & \text{otherwise.} \end{cases} \tag{24} \]

The complete measurement model of an extended object can now be summarized


by
\[ z = H_L x_L + e, \qquad e \sim \mathcal{N}(0, R_L(x_L)), \tag{25a} \]
with
\[ H_L = \begin{bmatrix} 0_{1 \times (n+1)} & H_{se} \\ H_a & 0_{1 \times 2} \end{bmatrix}, \tag{25b} \]
\[ R_L(x_L) = \operatorname{blkdiag}(\Sigma_x, \Sigma_a(x_L)). \tag{25c} \]
Put in words, if the x-component of a new measurement is closer to the sensor
than the start point of the line xstart, it is considered in the measurement equa-
tion (24) and can be used to update this state variable. Analogously, if a new mea-
surement is more distant than the end point of the line xend, it is considered in (24).
Further, if a measurement is in between the start and end points of the line, the
measurement model is zero in (24) and there is no relation between this measure-
ment and the state variables xstart or xend.
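For a single measurement, assembling (24) and (25) can be sketched as follows (our own helper; the scalars Σx and Σa are assumed to be computed beforehand, the latter from (15) or (19)):

```python
import numpy as np

def extended_measurement_model(z, x_L_pred, Sigma_x, Sigma_a):
    """Build H_L and R_L of (25) for one measurement z = [x, y]^T, given
    the predicted extended-object state x_L_pred = [a; x_start; x_end]."""
    x = z[0]
    a = x_L_pred[:-2]
    x_start, x_end = x_L_pred[-2], x_L_pred[-1]
    H_a = x ** np.arange(a.size)          # regression row, as in (9)
    # eq. (24): start/end are only updated if z lies outside [x_start, x_end]
    if x <= x_start:
        H_se = np.array([1.0, 0.0])
    elif x >= x_end:
        H_se = np.array([0.0, 1.0])
    else:
        H_se = np.array([0.0, 0.0])
    H_L = np.vstack([np.concatenate([np.zeros(a.size), H_se]),
                     np.concatenate([H_a, np.zeros(2)])])
    R_L = np.diag([Sigma_x, Sigma_a])     # blkdiag of the two scalars, (25c)
    return H_L, R_L
```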

4.2 Process Model


Any model, e.g., the standard constant velocity or coordinated turn model, may
be used for the targets. For simplicity it is assumed that the targets are stationary
in this contribution, thus the process model on the form (6a) is linear and may be
written
\[ x_{k+1} = F x_k + w_k. \tag{26} \]
To increase the flexibility of the extended object an assumption about the dynamic
behavior of its size is made. The size of the extended object is modeled to shrink
with a factor 0.9 < λ < 1 according to
\[ x_{\mathrm{start},k+1} = x_{\mathrm{start},k} + (1 - \lambda)(x_{\mathrm{end},k} - x_{\mathrm{start},k}), \tag{27a} \]
\[ x_{\mathrm{end},k+1} = x_{\mathrm{end},k} - (1 - \lambda)(x_{\mathrm{end},k} - x_{\mathrm{start},k}), \tag{27b} \]
so that the length xend − xstart contracts by the factor 2λ − 1 at each time step, leading to the following process model for the polynomial
\[ F_L = \operatorname{blkdiag}\!\left( I_{(n+1) \times (n+1)},\ \begin{bmatrix} \lambda & 1-\lambda \\ 1-\lambda & \lambda \end{bmatrix} \right). \tag{28} \]
This shrinking behavior for the polynomials allows for automatic adjustment of
the start and end points of the polynomials according to the incoming measure-
ments.
In this way, some dynamics is included in the process model of the polynomials for the
start and end point components of the state, xstart and xend.
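A sketch of building F_L (our naming). Note that for the start and end points to move toward each other, i.e., for the interval to actually shrink when 0.9 < λ < 1, λ must appear on the diagonal of the 2 × 2 endpoint block; the length x_end − x_start then scales by 2λ − 1 at every prediction:

```python
import numpy as np

def shrink_process_matrix(n, lam):
    """F_L of (28): identity for the n+1 polynomial coefficients and a
    symmetric shrinking block for the endpoints [x_start, x_end]."""
    F = np.eye(n + 3)
    F[-2:, -2:] = np.array([[lam, 1.0 - lam],
                            [1.0 - lam, lam]])
    return F
```

For example, with n = 2 and λ = 0.95 an object spanning [0, 100] is predicted to span [5, 95], a 10 percent shrink; incoming measurements outside the shrunken interval then push the endpoints back out through (24).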

5 Multi-Target Tracking Algorithm


In this section, we describe the target tracking algorithm we use to track poly-
nomial shaped extended objects and point objects at the same time. Suppose
at time k − 1, we have $N_{P,k-1}$ estimated point objects whose states and covariances are given by $\{\hat{x}_{P,k-1|k-1}^{(i)}, P_{P,k-1|k-1}^{(i)}\}_{i=1}^{N_{P,k-1}}$, and $N_{L,k-1}$ estimated extended objects whose states and covariances are given by $\{\hat{x}_{L,k-1|k-1}^{(j)}, P_{L,k-1|k-1}^{(j)}\}_{j=1}^{N_{L,k-1}}$. Basically, the task of the tracking algorithm is to obtain the updated states and covariances $\{\hat{x}_{P,k|k}^{(i)}, P_{P,k|k}^{(i)}\}_{i=1}^{N_{P,k}}$, $\{\hat{x}_{L,k|k}^{(j)}, P_{L,k|k}^{(j)}\}_{j=1}^{N_{L,k}}$ when the set of $N_{z_k}$ measurements given by $\{z_k^{(m)}\}_{m=1}^{N_{z_k}}$ is obtained. Notice that the number of these updated summary
statistics kept in the algorithm can change with appearance or disappearance of
the objects in the environment. Hence, the number of each type of objects must
also be estimated by the tracking algorithm.

We, in this work, consider a conventional multi-target tracking algorithm that
uses a specific track initiation and deletion logic for handling the variable number
of objects to track. Each cycle of estimation starts with the classical (association)
hypotheses reduction technique “gating” which is followed by a data association
process which determines the measurement to (point or extended) track associa-
tions. When the measurement to track associations are known, the current tracks
are updated with their associated measurement. In this measurement update
step, the results of Section 4.1 are used for updating the extended object states
and covariances. After the updates are completed one has to do track handling.
This operation deals with

• Extended object generations from established point targets. This part of
the tracking algorithm must depend heavily upon the problem under in-
vestigation and hence the prior information that one has about the possible
extended objects because the mapping from a number of points to a number
of polynomials is not unique.

• Track deletion. This operation deletes the old tracks that have not been
associated to any measurements for a significant amount of time.

Below, in separate subsections, we investigate the steps of the tracking algorithm


we propose in more detail.
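As a rough illustration of one such estimation cycle (gating, nearest-neighbor association, update, initiation and deletion), the following Python sketch shows the control flow for point tracks only; all names (`PointTrack`, `tracker_cycle`) and the simple miss-counter logic are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch of one estimation cycle of the tracker described above:
# gate, associate (nearest neighbor), update, initiate, and delete. Only point
# tracks are handled; extended tracks and the likelihood-ratio test are omitted.

class PointTrack:
    def __init__(self, x, P):
        self.x = np.asarray(x, dtype=float)  # state estimate
        self.P = np.asarray(P, dtype=float)  # state covariance
        self.misses = 0                      # consecutive scans without a hit

def tracker_cycle(point_tracks, measurements, gate, update, max_misses=5):
    """Run one cycle; gate(trk, z) -> bool, update(trk, z) -> None."""
    unassigned = list(range(len(measurements)))
    for trk in point_tracks:
        cands = [m for m in unassigned if gate(trk, measurements[m])]
        if cands:  # nearest-neighbor association among gated measurements
            m = min(cands, key=lambda m: np.linalg.norm(trk.x - measurements[m]))
            update(trk, measurements[m])
            unassigned.remove(m)
            trk.misses = 0
        else:
            trk.misses += 1
    for m in unassigned:  # initiate new point tracks
        point_tracks.append(PointTrack(measurements[m], np.eye(2)))
    # delete tracks that have been missing for too long
    return [t for t in point_tracks if t.misses <= max_misses]
```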
5.1 Gating and Data Association
Each of the $N_{z_k}$ observations $z_k^{(m)}$, $m = 1, \ldots, N_{z_k}$, from the sensor measurements can be associated to one of the existing point tracks $x_P^{(i)}$ or extended tracks $x_L^{(j)}$, or a new point track is initiated. The number of association events (hypotheses) is extremely large.
hypotheses is called gating, see e.g., Bar-Shalom and Fortmann (1988). In this
section we show how to apply gating and to make a nearest-neighbor type data
association based on likelihood ratio tests. Other more complicated data associ-
ation methods like multiple hypothesis tracking, according to e.g., Reid (1979),
or joint probabilistic data association, as described by e.g., Bar-Shalom and Fort-
mann (1988), can also be used in this framework with appropriate modifications
for the extended sources. However, these are quite complicated and computation-
ally costly approaches and the nearest neighbor type algorithm has been found to
give sufficiently good performance for our application. The gating and the data
association are performed according to the following calculations. The likelihood $\ell_{m,i}$ that the observation $z_k^{(m)}$ corresponds to the $i$th point track $P^{(i)}$ is given by

$$
\ell_{m,i} = \begin{cases} \mathcal{N}\big(z_k^{(m)};\, \hat{z}_{P,k|k-1}^{(i)},\, S_{P,k|k-1}^{(i)}\big), & \text{if } z_k^{(m)} \in \mathcal{G}_P^{(i)}, \\ 0, & \text{otherwise,} \end{cases} \tag{29}
$$

where $\hat{z}_{P,k|k-1}^{(i)}$ is the predicted measurement of the point $P_i$ according to the model (39) and $S_{P,k|k-1}^{(i)}$ is its covariance (innovation covariance) in the Kalman filter. The gate $\mathcal{G}_P^{(i)}$ is defined as the region

$$
\mathcal{G}_P^{(i)} \triangleq \left\{ z \;\middle|\; \big(z - \hat{z}_{P,k|k-1}^{(i)}\big)^T \big(S_{P,k|k-1}^{(i)}\big)^{-1} \big(z - \hat{z}_{P,k|k-1}^{(i)}\big) \le \delta_P \right\} \tag{30}
$$

where $\delta_P$ is the gating threshold.
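As a sketch, the gated point likelihood (29)–(30) amounts to evaluating a Gaussian density only when the Mahalanobis distance of the innovation falls below the threshold; the function name and interface below are illustrative.

```python
import numpy as np

# Sketch of (29)-(30): the Gaussian density N(z; z_pred, S) is evaluated only
# if the squared Mahalanobis distance is below the gating threshold delta_P.

def point_likelihood(z, z_pred, S, delta_P):
    v = np.asarray(z, dtype=float) - np.asarray(z_pred, dtype=float)  # innovation
    S = np.asarray(S, dtype=float)
    d2 = float(v @ np.linalg.solve(S, v))       # squared Mahalanobis distance
    if d2 > delta_P:                            # outside gate (30)
        return 0.0
    norm = np.sqrt(np.linalg.det(2.0 * np.pi * S))
    return float(np.exp(-0.5 * d2) / norm)      # Gaussian density (29)
```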
The likelihood $\ell_{m,j}$ that the observation $m$ belongs to the $j$th line is given by

$$
\ell_{m,j} = \begin{cases} \mathcal{N}\big(y_k^{L,(m)};\, \hat{y}_{L,k|k-1}^{(j)},\, S_{L,k|k-1}^{(j)}\big), & \text{if } z_k^{L,(m)} \in \mathcal{G}_L^{(j)}, \\ 0, & \text{otherwise,} \end{cases} \tag{31}
$$

where $\hat{y}_{L,k|k-1}^{(j)}$ is the predicted output of the measurement model (9) and $S_{L,k|k-1}^{(j)}$ is its covariance, both for the state estimate $\hat{x}_L^{(j)}$. The quantity $z^{L,(m)}$ represents the original measurement transformed into the line's corresponding coordinate frame, and $y_k^{L,(m)}$ is its $y$ component. The gate $\mathcal{G}_L^{(j)}$ is defined as

$$
\mathcal{G}_L^{(j)} \triangleq \left\{ z = \begin{bmatrix} u \\ y \end{bmatrix} \;\middle|\; \frac{\big(y - \hat{y}_{L,k|k-1}^{(j)}\big)^2}{S_{L,k|k-1}^{(j)}} \le \delta_L, \;\; x_{\mathrm{start},k|k-1}^{(j)} - \delta_s < u < x_{\mathrm{end},k|k-1}^{(j)} + \delta_e \right\} \tag{32}
$$

and the innovation covariance is given by

$$
S_{L,k|k-1}^{(j)} = H_a(u)\, P_{a,k|k-1}^{(j)}\, H_a(u)^T + \Sigma_{a,k|k-1}, \tag{33}
$$

where $P_{a,k|k-1}^{(j)}$ is the state covariance of $x_{a,k|k-1}^{(j)}$.
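A minimal sketch of how (31)–(33) can be evaluated for a second-order polynomial, assuming a regressor $H(u) = [1,\, u,\, u^2]$; the function name and argument list are illustrative.

```python
import numpy as np

# Sketch of the extended-object likelihood (31)-(33): for a measurement
# transformed to the line's frame as (u, y), a scalar Gaussian on the
# y-residual is evaluated, with innovation variance S = H(u) P H(u)^T + sigma2.

def line_likelihood(u, y, a, P_a, sigma2, x_start, x_end,
                    delta_L, delta_s, delta_e):
    if not (x_start - delta_s < u < x_end + delta_e):      # extent gate (32)
        return 0.0
    H = np.array([1.0, u, u * u])                          # polynomial regressor
    P_a = np.asarray(P_a, dtype=float)
    y_pred = float(H @ np.asarray(a, dtype=float))         # predicted output
    S = float(H @ P_a @ H + sigma2)                        # innovation var (33)
    if (y - y_pred) ** 2 / S > delta_L:                    # residual gate (32)
        return 0.0
    return float(np.exp(-0.5 * (y - y_pred) ** 2 / S)
                 / np.sqrt(2.0 * np.pi * S))               # density (31)
```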
Having calculated the likelihood values, two matrices of likelihood values are formed: one matrix $\Gamma_P \in \mathbb{R}^{N_z \times N_P}$ with the combinations of observations and points, according to (29), and one matrix $\Gamma_L \in \mathbb{R}^{N_z \times N_L}$ with the combinations of observations and lines, according to (31).
The association procedure using $\Gamma_P$ and $\Gamma_L$ must be quite unconventional, because in conventional multi-target tracking one of the well-known assumptions is that at most a single measurement can originate from a target. In our problem, however, polynomials can obviously result in multiple measurements. On the other hand, we still assume that no two measurements originate from the same point source and that no two sources give rise to the same measurement.
Our association mechanism, which uses nearest neighbor type ideas with likelihood ratio tests, is summarized as follows. First, find the maximum value of $\Gamma_P$ and call the corresponding point state $i_{\max}$ and measurement $m_{\max}$. Then, find the maximum value of the row of $\Gamma_L$ corresponding to measurement $m_{\max}$ and call the corresponding polynomial state $j_{\max}$. The likelihood ratio denoted by $\Lambda(z^{(m)})$ is now given by

$$
\Lambda(z^{(m)}) \triangleq \frac{\sqrt{\ell_{m,i_{\max}}}}{\ell_{m,j_{\max}}}, \tag{34}
$$

where we take the square root of the point likelihood for unit matching, since the unit of (29) is (distance)$^{-2}$ and the unit of (31) is (distance)$^{-1}$. The corresponding likelihood ratio test is

$$
\Lambda(z^{(m)}) \;\underset{H_1}{\overset{H_0}{\gtrless}}\; \eta \tag{35}
$$
where $H_0$ and $H_1$ correspond to the hypotheses that the measurement $z^{(m)}$ is associated to the point $x_P^{(i_{\max})}$ and to the line $x_L^{(j_{\max})}$, respectively. The threshold $\eta$ is to
be selected experimentally. More theory about likelihood ratio tests is given by
e.g., van Trees (1968). When this test is performed, if the measurement $z^{(m)}$ is associated to a point source $P_i$, then the values in the $m$th row of the two matrices, as well as the $i$th column of the point likelihood matrix, must be set to zero to exclude the measurement and the point source from further association. However, if $z^{(m)}$ is associated to line $L_j$, then only the values in the $m$th row of the two matrices are set to zero, because the line $L_j$ can still be associated to other measurements. The procedure is repeated until all measurements with non-zero likelihood have been associated to either a point or a line. A new point track is initiated if an observation could not be associated to any existing state, i.e., when the measurement is not in the gate of any non-associated point or extended source.
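The association loop described above can be sketched as follows, with `Gamma_P` and `Gamma_L` holding the gated likelihoods (29) and (31); the final handling of measurements that gate only with lines is an assumption added for completeness, and the names are illustrative.

```python
import numpy as np

# Sketch of the nearest-neighbor association with likelihood ratio test
# (34)-(35). The sqrt() on the point likelihood matches units, since (29)
# has unit distance^-2 and (31) has unit distance^-1.

def associate(Gamma_P, Gamma_L, eta):
    Gamma_P = np.array(Gamma_P, dtype=float)
    Gamma_L = np.array(Gamma_L, dtype=float)
    assoc = {}  # measurement index -> ('P' or 'L', track index)
    while Gamma_P.max() > 0.0:
        m, i = np.unravel_index(int(Gamma_P.argmax()), Gamma_P.shape)
        j = int(Gamma_L[m].argmax())
        ratio = (np.sqrt(Gamma_P[m, i]) / Gamma_L[m, j]
                 if Gamma_L[m, j] > 0.0 else np.inf)  # ratio (34)
        if ratio > eta:                   # H0: associate to point i
            assoc[int(m)] = ('P', int(i))
            Gamma_P[:, i] = 0.0           # point source consumed
        else:                             # H1: associate to line j
            assoc[int(m)] = ('L', j)      # a line may take further measurements
        Gamma_P[m, :] = 0.0               # measurement consumed
        Gamma_L[m, :] = 0.0
    for m in range(Gamma_L.shape[0]):     # measurements gating only with lines
        if m not in assoc and Gamma_L[m].max() > 0.0:
            assoc[m] = ('L', int(Gamma_L[m].argmax()))
    return assoc
```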
5.2 Track Handling
Initially, our algorithm tries to generate point tracks from all incoming measure-
ments, using any standard initialization logic. When the point tracks are estab-
lished, at the end of each estimation cycle, the point track positions are examined
and decisions are made on the existence of any extended sources. This process
must be quite application dependent and it requires prior information on the pos-
sible forms of the polynomials to be extracted. For the purpose of giving some
examples of such prior information, we go back to the five motivating application
examples in Section 1 in the same order and list possible available prior informa-
tion in each case.
• The model class of road curvatures is determined in road construction stan-
dards, and prior values may be based on the ego vehicle’s motion.
• Extended objects along the road side may be based on prior knowledge of
the road curvature. An example application with such details is presented
in Section 6.
• The class of road inclination models is based on road construction standards, and initial values may be based on maps or GPS data, to be improved in the tracking filter.
• A prior for the shore model is given by digital sea charts.
• The altitude of an island is initiated roughly based on the topographic information of a sea chart, or as a standard model for islands in a given archipelago.
If some extended sources are detected in the detection process, states and covariances for these extended sources are calculated from the corresponding point tracks, which are removed from the point track list after this procedure. The created extended source states and covariances are, from then on, treated as extended tracks.
When the environment changes, some tracks might expire and receive no measurements. When this happens, such tracks (point or extended) must be removed
from the track list. For this purpose, for each track we keep a counter that holds
the number of time instants that the corresponding track has not been associated
to any measurement. When this counter exceeds a threshold, the corresponding
track is removed from the track list.
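The deletion logic can be sketched as a simple counter update per scan; the names and the dict-based bookkeeping are illustrative, not from the paper.

```python
# Minimal sketch of the deletion logic described above: each track keeps a
# counter of consecutive scans without an associated measurement and is
# dropped once the counter exceeds the threshold max_misses.

def prune_tracks(miss_counts, associated_ids, max_misses):
    """miss_counts: dict track_id -> count; associated_ids: set of track ids
    that received a measurement this scan. Returns the surviving tracks."""
    survivors = {}
    for tid, count in miss_counts.items():
        count = 0 if tid in associated_ids else count + 1  # reset or increment
        if count <= max_misses:
            survivors[tid] = count
    return survivors
```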
6 Application Example and Use of Prior Information
As an application of the ideas presented, we consider the road map estimation problem. A good road map is important for collision avoidance systems, path planning and trajectory control. Measurements from an automotive radar mounted on a moving vehicle, which we refer to as the ego vehicle, are used in this example, which makes it possible to map the road and its surrounding infrastructure even if the line markings are bad or missing. We consider a 2D world coordinate frame denoted $W$, whose origin is denoted by $O_W$. The state vector of the vehicle consists at least of the position and heading angle, i.e.,

$$
x_E \triangleq \begin{bmatrix} x_{EW}^W & y_{EW}^W & \psi_E \end{bmatrix}^T \tag{36}
$$

with respect to the world coordinate frame $W$.
In our scenario, radar echoes typically stem from delineators or guardrails, which we would like to track as points or polynomials, respectively, within the framework presented in Section 5. We use the polynomial model (4) with $n = 2$ for our extended sources. This model has also been used for sequentially estimating similar parameters for the road's white lane markings in Lundquist and Schön (2010).

In the formulations of the previous sections, the extended sources are modeled as polynomials in the common $x$-$y$ (world) coordinates of the problem. This can be considered a significant limitation for the applications, because in real scenes the curves to be tracked might not satisfy such polynomial equations in the common $x$-$y$ coordinates of the problem. Here, this restriction is overcome by equipping each detected extended target with its own Cartesian $x$-$y$ coordinates. Each extended object with state $x_L^{(j)}$ has an associated coordinate frame denoted by $L^{(j)}$. The position of the origin and the orientation of this coordinate frame are given as

$$
\xi_L^{(j)} \triangleq \begin{bmatrix} x_{L^{(j)}W}^W & y_{L^{(j)}W}^W & \psi_{L^{(j)}} \end{bmatrix}^T \tag{37}
$$

with respect to the world reference frame $W$, where the superscripts are suppressed in the vector $\xi_L^{(j)}$ for simplicity.
The measurements from the sensor mounted on the vehicle are obtained in polar coordinates (as in (2)) and need to be transformed first into the Cartesian coordinates of the sensor's frame $E_s$ and then into the extended source's coordinate frame in order to be used. The measurement, expressed in the extended source's coordinate frame $L$ in Cartesian coordinates, is given by

$$
z^L = \mathcal{T}^{LE_s}(z^{E_s}, x_E, \xi_L), \tag{38}
$$

where the transformation $\mathcal{T}^{LE_s}$ is described in the Appendix. Note that the measurement $z^L$, which fits into the framework discussed in Section 4.1, is now not only affected by the original measurement $z^{E_s}$, but also by the position of the ego vehicle, given by the state variable $x_E$, and by the origin and orientation of the line's coordinate frame $\xi_L$. We refer to the Appendix for details about the covariance calculation for $z^L$ involving the transformation (38).
6.1 State Space Model
The state space model of the extended objects was thoroughly described in Section 4 and is not repeated here. The process model for the points on the form (6a) is linear, and since the points are stationary, the process matrix $F_P$ is the identity matrix. The polar measurements (2) are related to the Cartesian point states (3), i.e., $x_P = \begin{bmatrix} x_{PW}^W & y_{PW}^W \end{bmatrix}^T$, through the measurement model according to

$$
\bar{z}_k = \begin{bmatrix} \sqrt{\big(x_{PW}^W - x_{E_sW}^W\big)^2 + \big(y_{PW}^W - y_{E_sW}^W\big)^2} \\[1ex] \arctan\dfrac{y_{PW}^W - y_{E_sW}^W}{x_{PW}^W - x_{E_sW}^W} - \psi_{E_s} \end{bmatrix} + e_k, \tag{39}
$$

where $\begin{bmatrix} x_{E_sW}^W & y_{E_sW}^W \end{bmatrix}^T$ and $\psi_{E_s}$ are the mounting position and orientation of the sensor. The measurement noise is assumed Gaussian, $e \sim \mathcal{N}(0, \Sigma^p)$.
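As a sketch of the noise-free part of (39), the range and bearing can be computed as below; using `atan2` of the offsets is an implementation choice for quadrant safety, and the function name is illustrative.

```python
import numpy as np

# Sketch of the polar measurement model (39): range and bearing from the
# sensor mounting position/orientation to a stationary point, all expressed
# in the world frame W.

def point_measurement(x_P, y_P, x_s, y_s, psi_s):
    """Predicted [range, bearing] of point (x_P, y_P) for a sensor mounted
    at (x_s, y_s) with orientation psi_s."""
    dx, dy = x_P - x_s, y_P - y_s
    rng = np.hypot(dx, dy)                  # Euclidean distance
    bearing = np.arctan2(dy, dx) - psi_s    # angle relative to sensor heading
    return np.array([rng, bearing])
```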
6.2 Using Prior Information in Extended Track Generation
An extended track is initiated from tracked points under the assumption that a number of points form a line parallel to the road. In this section we return to the track handling discussion started in Section 5.2, in order to make the initialization procedure for extended sources more concrete. Prior information about the road's shape, or more specifically the lane's shape, in the vehicle coordinate frame is used for generating extended tracks from point tracks. The prior information is obtained by a lane tracking filter that estimates the parameters of the following lane polynomial

$$
y^E = l^E + \psi_{RE}\, x^E + \frac{c_0}{2}\, (x^E)^2, \tag{40}
$$

where $x^E$ and $y^E$ are expressed in the ego vehicle's coordinate frame $E$. The lane tracker we use is described in Lundquist and Schön (2010); it is based on an ego vehicle motion model which estimates the lane given measurements from a camera, an inertial sensor and the motion of other vehicles tracked by the radar. This implies that the camera can easily be removed, so that the prior is based solely on the motion model of the ego vehicle and other moving or stationary objects tracked by the radar. The parameters in (40) can be interpreted as physical quantities. The angle between the longitudinal axis of the vehicle and the road lane is $\psi_{RE}$. It is assumed that this angle is small, and hence the approximation $\sin \psi_{RE} \approx \psi_{RE}$ is used. The curvature parameter is denoted by $c_0$ and the offset between the ego vehicle and the white lane is denoted by $l^E$.

All point state estimates $\hat{x}_{P_i}$ are transformed into the ego vehicle's coordinate frame, since the prior's geometry (40) is given in this frame. We consider hypothetical parabolas passing through each point $\hat{x}_{P_k}$ parallel to the prior (40), i.e., the parameters $\psi_{RE}$ and $c_0$ are simply inherited from the lane tracker and the lateral distance $l_{P_k}$ is given by

$$
l_{P_k} = \hat{y}_{P_k}^E - \psi_{RE}\, \hat{x}_{P_k}^E - \frac{c_0}{2} \big(\hat{x}_{P_k}^E\big)^2. \tag{41}
$$

The likelihood $\ell_{P_i P_k}$ that a point $x_{P_i}$ is on the hypothetical line of point $P_k$ is then given by

$$
\ell_{P_i P_k} = \begin{cases} \mathcal{N}\big(\epsilon_{ik};\, 0,\, P_{P_k,(2,2)}^E\big), & \text{if } \hat{x}_{P_i}^E \in \mathcal{G}_{P_k}, \\ 0, & \text{otherwise,} \end{cases} \tag{42}
$$

where the lateral distance between the point $P_i$ and the proposed new line of point $P_k$ is given by

$$
\epsilon_{ik} = \hat{y}_{P_i}^E - \hat{y}_{P_k}^E, \tag{43}
$$

with

$$
\hat{y}_{P_k}^E = l_{P_k} + \psi_{RE}\, \hat{x}_{P_i}^E + \frac{c_0}{2} \big(\hat{x}_{P_i}^E\big)^2. \tag{44}
$$
The notation $P_{P_k,(2,2)}^E$ refers to the lower-right element, i.e., the variance in the diagonal corresponding to $y^E$. The gate $\mathcal{G}_{P_k}$ is defined as

$$
\mathcal{G}_{P_k} \triangleq \left\{ z = \begin{bmatrix} x \\ y \end{bmatrix} \;\middle|\; \frac{\big(y - \hat{y}_{P_k}^E\big)^2}{P_{P_k,(2,2)}^E} \le \delta_L, \;\; -\delta_s < x - \hat{x}_{P_k}^E < \delta_e \right\}. \tag{45}
$$
From all combinations of likelihoods we form a symmetric matrix $\Gamma_I$. The columns of $\Gamma_I$ are summed and the maximum value, corresponding to column $k_m$, is chosen. If this column contains more than a certain number $\kappa$ of non-zero rows, a parabola is formed from these points. The new line's states $x_a$ are estimated by solving a least squares problem using the corresponding points. The states $x_{\mathrm{start}}$ and $x_{\mathrm{end}}$ are the minimum and maximum $x$-coordinate values of the points, respectively. All elements in column $k_m$ and rows $i_m$ are set to zero, and the procedure is repeated until no column contains more than $\kappa$ non-zero elements.
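The least squares fit used to initialize a new line from a group of supporting points can be sketched as follows (the function name is illustrative):

```python
import numpy as np

# Sketch of the extended-track initialization step described above: once a
# column of Gamma_I has enough supporting points, a parabola
# y = a0 + a1*x + a2*x^2 is fitted by least squares, and the extent
# [x_start, x_end] is taken from the points' x-coordinates.

def init_extended_track(points):
    """points: (N, 2) array-like of (x, y); returns (a, x_start, x_end)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, x ** 2])  # regressor matrix
    a, *_ = np.linalg.lstsq(A, y, rcond=None)          # [a0, a1, a2]
    return a, float(x.min()), float(x.max())
```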

6.3 Experiments and Results
Let us start by showing the information given by an ordinary automotive ACC radar, for the traffic situation shown in Figure 2a. The ego vehicle, indicated by a green circle, is situated at the (0, 0)-position in Figure 2b, and the red dots are the radar reflections, or stationary observations, at one time sample. The smaller magenta colored dots are former radar reflections, obtained at earlier time samples. Figure 2c shows the estimated points and lines for the same scenario using the KF EIV method presented in this contribution. The mean values of the states are indicated by solid black lines or blue points. Furthermore, the state variance, by means of the 90% confidence interval, is illustrated by gray lines or cyan colored ellipses, respectively. The estimate of the lane markings (40), illustrated by the gray dashed lines and derived according to the method presented in Lundquist and Schön (2010), is shown here as a comparison. We also show the tracked vehicle in front of the ego vehicle, illustrated by a blue square.
In Figure 3a we see a traffic scenario with a freeway exit. The corresponding bird's eye view is shown in Figure 3c. The rightmost line indicates the guardrail to the right of the exit; the middle line is the guardrail starting at the exit sign. The radar measurements are indicated with red dots, the confirmed point targets with blue dots, and the unconfirmed points with cyan colored dots.
Our last example shows a situation from a rural road, see Figure 3d and 3b. Here the road edges are not as distinct as on the freeway and the amount of clutter is higher. This leads to parallel lines, with larger covariances, representing the guardrail, trees, etc. The appearance of many lines may be confusing, so only the mean values of the lines are plotted for the sake of clarity.
These results are only snapshots from different traffic situations. To show a more rigorous result of our approach, the shape of the estimated lines is compared with the recorded driven trajectory from a 3 min drive on a freeway. The recorded positions 100 m in front of the car are used to estimate a polynomial with parameters $a_1$ and $a_2$ at each time step $k$. These are then compared with the estimated parameters $\hat{a}_1$ and $\hat{a}_2$ of the extended objects (lines) according to the mean absolute error

$$
\mathrm{MAE} = \frac{1}{K} \sum_{k=1}^{K} \left( \frac{1}{N_{L,k}} \sum_{j=1}^{N_{L,k}} \big\| \hat{a}_1^{(j)} - a_1 \big\| \right) \tag{46}
$$

for $a_1$, and similarly for $a_2$. The polynomials are transformed to be represented
in the same coordinate frame. The ego vehicle is equipped with a vision system which measures the white lanes using computer vision and represents them according to (40). The vision based parameters $\psi_{RE}$ and $c_0/2$ are also compared with the trajectory as in (46), and all results are shown in Table 2. Here, of course, the vision system produces better results, since the driven trajectory should be parallel with the lane markings, whereas the objects along the road, e.g., the guardrails, might not always be parallel. However, the table shows that our radar based approach is realistic and that its performance is comparable to that of the vision based algorithm, enabling the possibility of fusion with complementary information or
[Figure 2: three panels (a)–(c); panels (b) and (c) are bird's eye views with axes $y^E$ [m] (horizontal) and $x^E$ [m] (vertical).]
Figure 2: A traffic situation is shown in Figure (a). Figure (b) shows the
radar measurements, and Figure (c) the resulting tracked points and lines.
The circle in the origin is the ego vehicle, the square is the tracked vehicle in
front and the dashed gray lines illustrate the tracked road curvature.
[Figure 3: four panels (a)–(d); panels (c) and (d) are bird's eye views with axes $y^E$ [m] (horizontal) and $x^E$ [m] (vertical).]
Figure 3: Freeway exit with guardrails, the camera view is shown in Fig-
ure (a) and the bird’s eye view with the estimated states in Figure (c). A
traffic scenario from a rural road, with guardrails on both sides of a bridge is
shown in Figure (d) and (b).
Table 2: MAE values for the estimated lines based on radar measurements and the lane estimate given by the computer vision, both with respect to the driven trajectory.

Parameter          Radar   Vision
a1 (·10^-3)        8.22    6.93
a2 (·10^-3)        0.16    0.10

providing estimates when other approaches fail.
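For reference, the evaluation metric (46) behind Table 2 can be sketched as follows (the function name and the list-based interface are illustrative):

```python
import numpy as np

# Sketch of the mean absolute error (46): at each time step the parameter
# estimates of the N_{L,k} lines are compared with the reference parameter
# fitted to the driven trajectory, then averaged over the K time steps.

def mae(a1_hat_per_step, a1_ref_per_step):
    """a1_hat_per_step: list (one entry per time step) of per-line estimates;
    a1_ref_per_step: list of trajectory-fitted reference values."""
    per_step = [np.mean(np.abs(np.asarray(a_hats, dtype=float) - a_ref))
                for a_hats, a_ref in zip(a1_hat_per_step, a1_ref_per_step)]
    return float(np.mean(per_step))
```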
7 Conclusion
In this contribution we have considered the use of polynomials in extended target tracking. State space representations that enable the use of Kalman filters for polynomial shaped extended objects were introduced. We have also described a multi-target tracking algorithm which is as general as possible. The loss of generality in the algorithm we presented lies in the fact that the detection and initialization of extended tracks from point tracks always requires some application dependent prior information. In the real-world application example we presented, we tried to be as illustrative as possible about the use of this prior information.
The approach has been evaluated on real and relevant data from both freeways and rural roads in Sweden. The results are not perfect, but surprisingly good at times, and of course contain much more information than just the raw measurements. Furthermore, the standard state representation of the objects should not be underestimated, since it is compact and easy to send on a vehicle CAN bus. Some promising future investigations include the use of non-thresholded radar data in a track-before-detect type of curve estimation, and better representation of curves in terms of intrinsic coordinates.
A Appendix

The need for a transformation from the sensor's frame $E_s$ into the extended source's coordinate frame $L$ was accentuated in (38) and is described here in detail. The transformation $\mathcal{T}^{LE_s}(z^{E_s}, x_E, \xi_L)$ is given by

$$
z^L = R^{LE} \big( R^{EE_s} z^{E_s} + d_{E_s E}^E \big) + R^{LW} \big( d_{EW}^W - d_{LW}^W \big), \tag{47}
$$

where the displacement vector $d_{E_s E}^E$ is the mounting position of the sensor in the vehicle's coordinate frame, the rotation matrix $R^{EE_s}$ describes the mounting orientation of the sensor in the vehicle, and $z^{E_s}$ is the Cartesian measurement given by the sensor.
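The chain of frame changes in (47) can be sketched as successive rotations and translations; the three-step decomposition and the names below are an equivalent, illustrative rewriting rather than the paper's notation.

```python
import numpy as np

# Sketch of the transformation (47): a Cartesian measurement in the sensor
# frame Es is rotated/translated into the vehicle frame E, then into the
# world frame W, and finally into the extended source's frame L.

def rot(psi):
    """2D rotation matrix."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s], [s, c]])

def sensor_to_line(z_Es, d_EsE, psi_EsE, p_E, psi_E, p_L, psi_L):
    """z_Es: measurement in sensor frame; (d_EsE, psi_EsE): sensor mounting
    pose in E; (p_E, psi_E): ego pose in W; (p_L, psi_L): line frame in W."""
    z_E = rot(psi_EsE) @ z_Es + d_EsE   # sensor frame -> vehicle frame
    z_W = rot(psi_E) @ z_E + p_E        # vehicle frame -> world frame
    return rot(psi_L).T @ (z_W - p_L)   # world frame -> line frame
```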
The covariance is derived by first extracting the noise terms from the stochastic variables according to

$$
\hat{x}_E = x_E + \tilde{x}_E, \qquad \tilde{x}_E \sim \mathcal{N}(0, P_E), \tag{48a}
$$
$$
\hat{\xi}_L = \xi_L + \tilde{\xi}_L, \qquad \tilde{\xi}_L \sim \mathcal{N}(0, P_{\xi_L}), \tag{48b}
$$

for the ego vehicle's state variable $x_E$ and the origin of the line's coordinate frame $\xi_L$. The noise term of the sensor measurement $\tilde{z}^{E_s}$ was defined in (11). Equation (47) can now be rewritten according to

$$
z^L = \hat{R}^{LE} \big( R^{EE_s} \hat{z}^{E_s} + d_{E_s E}^E \big) + \hat{R}^{LW} \big( \hat{d}_{EW}^W - \hat{d}_{LW}^W \big) + \hat{R}^{LE} R^{EE_s} \tilde{z}^{E_s} + A_2 \tilde{x}_E + A_3 \tilde{\xi}_L, \tag{49a}
$$

where the matrices are given by

$$
A_2 \triangleq \begin{bmatrix} \cos\hat{\psi}_L & \sin\hat{\psi}_L & -y^{E_s} \cos\hat{\psi}_{EL} - x^{E_s} \sin\hat{\psi}_{EL} \\ -\sin\hat{\psi}_L & \cos\hat{\psi}_L & x^{E_s} \cos\hat{\psi}_{EL} - y^{E_s} \sin\hat{\psi}_{EL} \end{bmatrix}, \tag{49b}
$$

$$
A_3 \triangleq \begin{bmatrix} -\cos\hat{\psi}_L & -\sin\hat{\psi}_L & a_3^{(1,3)} \\ \sin\hat{\psi}_L & -\cos\hat{\psi}_L & a_3^{(2,3)} \end{bmatrix}, \tag{49c}
$$

and where

$$
a_3^{(1,3)} = y^{E_s} \cos\hat{\psi}_{EL} + \big(y_{EW}^W - y_{LW}^W\big) \cos\hat{\psi}_L + x^{E_s} \sin\hat{\psi}_{EL} - \big(x_{EW}^W - x_{LW}^W\big) \sin\hat{\psi}_L, \tag{49d}
$$

$$
a_3^{(2,3)} = -x^{E_s} \cos\hat{\psi}_{EL} - \big(x_{EW}^W - x_{LW}^W\big) \cos\hat{\psi}_L + y^{E_s} \sin\hat{\psi}_{EL} - \big(y_{EW}^W - y_{LW}^W\big) \sin\hat{\psi}_L. \tag{49e}
$$

To summarize these calculations, the covariance of $z^L$ is given by

$$
\Sigma_c^L = \hat{R}^{LE_s} \Sigma_c^{E_s} \big(\hat{R}^{LE_s}\big)^T + A_2 P_E A_2^T + A_3 P_{\xi_L} A_3^T, \tag{50}
$$

where $\Sigma_c^{E_s}$ is the Cartesian measurement covariance, $P_E$ is the state covariance of the ego vehicle's pose, and $P_{\xi_L}$ is the line state covariance corresponding to the position and orientation of the line's coordinate frame.
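Equation (50) is a standard first-order covariance propagation; a direct sketch (with illustrative names, and the Jacobians $A_2$, $A_3$ assumed precomputed) is:

```python
import numpy as np

# Sketch of the covariance propagation (50): the Cartesian measurement
# covariance is rotated into the line frame, and the uncertainties of the
# ego pose and of the line frame origin are added through A2 and A3.

def measurement_covariance_L(R_LEs, Sigma_Es, A2, P_E, A3, P_xi):
    return (R_LEs @ Sigma_Es @ R_LEs.T
            + A2 @ P_E @ A2.T
            + A3 @ P_xi @ A3.T)
```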
Bibliography
D. Angelova and L. Mihaylova. Extended object tracking using Monte Carlo meth-
ods. IEEE Transactions on Signal Processing, 56(2):825–832, February 2008.
Y. Bar-Shalom and T. E. Fortmann. Tracking and Data Association. Mathematics
in science and engineering. Academic Press, Orlando, FL, USA, 1988.
Y. Bar-Shalom and X. R. Li. Multitarget-Multisensor Tracking: Principles, Tech-
niques. YBS Publishing, Storrs, CT, USA, 1995.
R. H. Bartels, J. C. Beatty, and B. A. Barsky. An introduction to splines for use
in computer graphics and geometric modeling. M. Kaufmann Publishers, Los
Altos, CA, USA, 1987. ISBN 0-934613-27-3.
Å. Björck. Numerical methods for least squares problems. SIAM, Philadelphia,
PA, USA, 1996.
S. S. Blackman and R. Popoli. Design and Analysis of Modern Tracking Systems.
Artech House, Norwood, MA, USA, 1999.
A. Blake and M. Isard. Active contours. Springer, London, UK, 1998.
Y. Boers, H. Driessen, J. Torstensson, M. Trieb, R. Karlsson, and F. Gustafsson.
Track-before-detect algorithm for tracking extended targets. IEE Proceedings
of Radar, Sonar and Navigation, 153(4):345–351, August 2006.
D. L. Boley and K. Sutherland. Recursive total least squares: An alternative to the
discrete Kalman filter. In AMS Meeting on Applied Linear Algebra, DeKalb, IL,
USA, May 1993.
B. De Moor. Structured total least-squares and L2 approximation-problems. Lin-
ear algebra and its applications, 188–189:163–207, 1993.
J. C. Dezert. Tracking maneuvering and bending extended target in cluttered
environment. In Proceedings of Signal and Data Processing of Small Targets,
volume 3373, pages 283–294, Orlando, FL, USA, April 1998. SPIE.
R. Diversi, R. Guidorzi, and U. Soverini. Algorithms for optimal errors-in-
variables filtering. Systems & Control Letters, 48(1):1–13, 2003.
R. Diversi, R. Guidorzi, and U. Soverini. Kalman filtering in extended noise envi-
ronments. IEEE Transactions on Automatic Control, 50(9):1396–1402, Septem-
ber 2005.
O. E. Drummond, S. S. Blackman, and G. C. Pretrisor. Tracking clusters and
extended objects with multiple sensors. In Proceedings of Signal and Data
Processing of Small Targets, volume 1305, pages 362–375, Orlando, FL, USA,
January 1990. SPIE.
B. Fardi, U. Scheunert, H. Cramer, and G. Wanielik. Multi-modal detection and
parameter-based tracking of road borders with a laser scanner. In Proceedings
of the IEEE Intelligent Vehicles Symposium, pages 95–99, Columbus, OH, USA,
June 2003.
K. Gilholm and D. Salmond. Spatial distribution model for tracking extended
objects. IEE Proceedings of Radar, Sonar and Navigation, 152(5):364–371, Oc-
tober 2005.
K. Gilholm, S. Godsill, S. Maskell, and D. Salmond. Poisson models for extended
target and group tracking. In Proceedings of Signal and Data Processing of
Small Targets, volume 5913, pages 230–241, San Diego, CA, USA, August 2005.
SPIE.
R. Guidorzi, R. Diversi, and U. Soverini. Optimal errors-in-variables filtering.
Automatica, 39(2):281–289, 2003. ISSN 0005-1098.
F. Gustafsson. Automotive safety systems. IEEE Signal Processing Magazine, 26
(4):32–47, July 2009.
E. Hellström. Look-ahead Control of Heavy Vehicles. PhD thesis No 1315, Lin-
köping Studies in Science and Technology, Linköping, Sweden, May 2010.
P. J. Huber and E. Ronchetti. Robust Statistics. John Wiley & Sons, New York, NY,
USA, 2009.
M. Isard and A. Blake. CONDENSATION-conditional density propagation for
visual tracking. International Journal of Computer Vision, 29(1):5–28, August
1998.
R. Karlsson and F. Gustafsson. Bayesian surface and underwater navigation. IEEE
Transactions on Signal Processing, 54(11):4204–4213, November 2006.
M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. Inter-
national Journal of Computer Vision, 1(4):321–331, January 1988.
A. Kirchner and T. Heinrich. Model based detection of road boundaries with
a laser scanner. In Proceedings of the IEEE Intelligent Vehicles Symposium,
pages 93–98, Stuttgart, Germany, October 1998.
K. R. S. Kodagoda, W. S. Wijesoma, and A. P. Balasuriya. CuTE: curb tracking and
estimation. IEEE Transactions on Control Systems Technology, 14(5):951–957,
September 2006.
S. Lazarus, A. Tsourdos, B.A. White, P.M.G. Silson, and R. Zandbikowski. Air-
borne vehicle mapping of curvilinear objects using 2-D splinegon. IEEE Trans-
actions on Instrumentation and Measurement, 59(7):1941–1954, July 2010.
L. Ljung. System identification, Theory for the user. System sciences series. Pren-
tice Hall, Upper Saddle River, NJ, USA, 2 edition, 1999.
C. Lundquist and T. B. Schön. Estimation of the free space in front of a moving
vehicle. In Proceedings of the SAE World Congress, SAE paper 2009-01-1288,
Detroit, MI, USA, April 2009.
C. Lundquist and T. B. Schön. Joint ego-motion and road geometry estimation.
Information Fusion, 2010. DOI: 10.1016/j.inffus.2010.06.007.
C. Lundquist, U. Orguner, and T. B. Schön. Tracking stationary extended objects
for road mapping using radar measurements. In Proceedings of the IEEE In-
telligent Vehicles Symposium, pages 405–410, Xi’an, China, June 2009.
C. Lundquist, U. Orguner, and F. Gustafsson. Estimating polynomial structures
from radar data. In Proceedings of the International Conference on Informa-
tion Fusion, Edinburgh, UK, July 2010.
C. Lundquist, U. Orguner, and F. Gustafsson. Extended target tracking using
polynomials with applications to road-map estimation. IEEE Transactions on
Signal Processing, 59(1):15–26, January 2011.
R. P. S. Mahler. Statistical Multisource-Multitarget Information Fusion. Artech
House, Boston, MA, USA, 2007.
I. Markovsky and S. van Huffel. Overview of total least-squares methods. Signal
Processing, 87(10):2283–2302, 2007. ISSN 0165-1684.
A. Polychronopoulos, A. Amditis, N. Floudas, and H. Lind. Integrated object
and road border tracking using 77 GHz automotive radars. IEE Proceedings of
Radar, Sonar and Navigation, 151:375–381, December 2004.
D. B. Reid. An algorithm for tracking multiple targets. IEEE Transactions on
Automatic Control, 24(6):843–854, 1979.
B. Ristic and D. J. Salmond. A study of a nonlinear filtering problem for tracking
an extended target. In Proceedings of the International Conference on Infor-
mation Fusion, pages 503–509, Stockholm, Sweden, June 2004.
B. Ristic, S. Arulampalam, and N. Gordon. Beyond the Kalman Filter: Particle
filters for tracking applications. Artech House, London, UK, 2004.
B. Roorda and C. Heij. Global total least squares modeling of multivariable time
series. IEEE Transactions on Automatic Control, 40(1):50–63, January 1995.
D. J. Salmond and M. C. Parr. Track maintenance using measurements of tar-
get extent. IEE Proceedings of Radar, Sonar and Navigation, 150(6):389–395,
December 2003.
T. Söderström. Survey paper: Errors-in-variables methods in system identifica-
tion. Automatica, 43(6):939–958, June 2007.
H. Svensson. Simultaneous localization and mapping in marine environment
using radar images. Master’s Thesis No LiTH-ISY-EX-4285, Department of
Electrical Engineering, Linköping University, Sweden, 2009.
S. van Huffel and H. Zha. An efficient Total Least Squares algorithm based on a
rank-revealing two-sided orthogonal decomposition. Numerical Algorithms, 4:
101–133, February 1993.
H. L. van Trees. Detection, Estimation, and Modulation Theory. John Wiley &
Sons, New York, NY, USA, 1968.
J. Vermaak, N. Ikoma, and S. J. Godsill. Sequential Monte Carlo framework for
extended object tracking. IEE Proceedings of Radar, Sonar and Navigation, 152
(5):353–363, October 2005.
Y. Wang, L. Bai, and M. Fairhurst. Robust road modeling and tracking using
condensation. IEEE Transactions on Intelligent Transportation Systems, 9(4):
570–579, December 2008.
M. J. Waxman and O. E. Drummond. A bibliography of cluster (group) tracking.
In Proceedings of Signal and Data Processing of Small Targets, volume 5428,
pages 551–560, Orlando, FL, USA, April 2004. SPIE.
W. S. Wijesoma, K. R. S. Kodagoda, and A. P. Balasuriya. Road-boundary de-
tection and tracking using ladar sensing. IEEE Transactions on Robotics and
Automation, 20(3):456–464, June 2004.
Paper D
Road Intensity Based Mapping using
Radar Measurements with a
Probability Hypothesis Density Filter
Authors: Christian Lundquist, Lars Hammarstrand and Fredrik Gustafsson
Edited version of the paper:
C. Lundquist, L. Hammarstrand, and F. Gustafsson. Road intensity
based mapping using radar measurements with a probability hypoth-
esis density filter. IEEE Transactions on Signal Processing, 59(4):1397–
1408, April 2011b.

The paper presents data that was previously published in:
C. Lundquist, L. Danielsson, and F. Gustafsson. Random set based road
mapping using radar measurements. In Proceedings of the European
Signal Processing Conference, pages 219–223, Aalborg, Denmark, Au-
gust 2010a.

Preliminary version:
Technical Report LiTH-ISY-R-2994, Dept. of Electrical Engineering, Lin-
köping University, SE-581 83 Linköping, Sweden.
Road Intensity Based Mapping using Radar
Measurements with a Probability
Hypothesis Density Filter

Christian Lundquist∗, Lars Hammarstrand∗∗ and Fredrik Gustafsson∗

∗ Dept. of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden
∗∗ Volvo Car Corporation, Active Safety Electronics, SE-405 31 Göteborg, Sweden
[email protected], [email protected], [email protected]
Abstract
Mapping stationary objects is essential for autonomous vehicles and
many autonomous functions in vehicles. In this contribution the prob-
ability hypothesis density (phd) filter framework is applied to auto-
motive imagery sensor data for constructing such a map, where the
main advantages are that it avoids the detection, the data associa-
tion and the track handling problems in conventional multiple-target
tracking, and that it gives a parsimonious representation of the map
in contrast to grid based methods. Two original contributions address
the inherent complexity issues of the algorithm: First, a data cluster-
ing algorithm is suggested to group the components of the phd into
different clusters, which structures the description of the prior and
considerably improves the measurement update in the phd filter. Sec-
ond, a merging step is proposed to simplify the map representation
in the phd filter. The algorithm is applied to multi-sensor radar data
collected on public roads, and the resulting map is shown to describe the
environment well, as a human perceives it.

1 Introduction
Autonomous vehicles and autonomous functionality in vehicles require situation
awareness. Situation awareness is traditionally split into two main tasks: track-
ing of moving objects and navigation relative to stationary objects represented by
a map. The precision in commercial maps and position services today is not suffi-
cient for autonomous functions. For that reason, imagery sensors as radar, vision
and laser scanners have been used to build local maps of stationary objects on the
fly, which requires sophisticated estimation algorithms. Similarly to the classic
target tracking application, going from single-sensor single-object estimation to
multi-sensor multi-object estimation increases the complexity of the algorithms


substantially. For that reason, there is a strong need for structured maps rather
than point maps, where single objects that together form structures such as lines or
curves are described as one extended object. This contribution describes an effi-
cient algorithm to create maps that are particularly useful to describe structures
along road sides. Such a map is promising for a range of applications, from colli-
sion avoidance maneuvers to true autonomy on public roads and off-road.

There exist many algorithms and representations of maps using imagery sensor
data. However, so far methods based on probabilistic theory are most success-
ful, see e.g., Thrun et al. (2005) or Adams et al. (2007) for a recent overview.
Feature based maps have become very popular in the related field simultaneous
localization and mapping (slam) Durrant-Whyte and Bailey (2006); Bailey and
Durrant-Whyte (2006), where a vehicle builds a map of environmental features,
while simultaneously using that map to localize itself. Location based maps,
such as the occupancy grid map (ogm) Elfes (1987), were primarily developed
to be used with laser scanners to generate consistent maps. They have also been
used with radar sensors, where the data comprises a complete signal power pro-
file Mullane et al. (2009); Foessel-Bunting et al. (2001), or a thresholded radar
report Lundquist et al. (2009). The ogm was very popular at the DARPA urban
challenge, see the special issues Buehler et al. (2008). Generally, line shaped and
curved objects, such as roads and coast lines can be identified in topographic
maps. Road edge and obstacle detection have been tackled using raw radar im-
ages, see e.g., Kaliyaperumal et al. (2001); Lakshmanan et al. (1997); Nikolova
and Hero (2000); Ma et al. (2000). There have been several approaches making
use of reflections from the road edges, such as guard rails and reflection posts, to
compute information about the free space, see e.g. Kirchner and Heinrich (1998);
Kirchner and Ameling (2000); Sparbert et al. (2001) for some examples using
laser scanners. Radar reflections were used to track the road edges as extended
targets in Lundquist et al. (2011b). This method is promising but a drawback is
the large data association problem, which arises since it is hard to create a general
model for the various types of objects.

The bin occupancy filter is devised via a quantized state space model, where each
cell is denoted bin Erdinc et al. (2009). In the limit, when the volume of the bins
becomes infinitesimal, the filter equations are identical to the probability hypothe-
sis density (phd) filter, proposed by Mahler (2000, 2003, 2007). The phd
is the first moment density or intensity density of a random set. In Section 2 we
propose to represent the map by the surface described by the intensity density
of point sources of stationary objects. Individual sources are not tracked, but
the phd filter framework makes it possible to estimate the intensity of sources
at a given point. Describing the map by a continuous surface distinguishes our
method from the other mapping approaches mentioned above. An approach to
make use of the phd filter to solve the related slam problem is presented in Mul-
lane et al. (2008). In contrast to the intensity based map, the map in Mullane et al.
(2008) is represented by a finite set of map features, which are extracted from the
intensity density.

Section 3 summarizes the Gaussian mixture phd (gm-phd) filter recursion. We


investigate the possibility to improve the efficiency of the gm-phd filter by uti-
lizing structure in the map. These structures are first identified and modeled in
Section 4, to then be used to improve the prior in the gm-phd recursion and to
create an efficient representation of the intensity in Section 5. Road edges and
guardrails are typical examples of such map structures. All parts of the road
mapping example are put together in Section 6 to present the complete map,
based on measurements from automotive radar sensors. The map may be used
by automotive safety functions, for instance trajectory control, which aims at min-
imizing the probability to hit objects. Other examples are functions which can
perform automatic evasive maneuvers, and need map information for threat as-
sessment Eidehall et al. (2007b). The conclusions are in Section 7.

2 Mapping
Sensors which measure range r and bearing ψ, e.g., radar and laser, are commonly
used for automotive and robotics navigation and mapping. These types of sensors
typically do not give the user access to their detection and decision parameters.
Automotive radar sensors threshold the amplitude and deliver only those detec-
tions with the highest amplitude, whereas a laser sensor delivers the first range
detection along a bearing angle. Furthermore, cameras measure two angles and
represent them in a pixel matrix. The considered sensors provide a set of noisy
measurements (thresholded detections)
    Z_k = \{ z_k^{(1)}, z_k^{(2)}, \ldots, z_k^{(N_{z,k})} \}    (1)
at each discrete time instant k = 1, . . . , K.

Landmark detections from the noisy sensor data are used to build a probabilistic
map, represented by a set of Nm objects in the environment
    M_k = \{ m_k^{(1)}, m_k^{(2)}, \ldots, m_k^{(N_m)} \}.    (2)
There exist primarily two types of indexing for probabilistic maps Thrun et al.
(2005). In a feature based map each m(n) specifies the properties and location of
one object Smith et al. (1988), whereas in a location based map the index n corre-
sponds to a location and m(n) is the property of that specific coordinate. Note that
the indexing is analogous to the representation of sensor data, the range and bear-
ing measurements from a radar or laser are feature based, whereas the pixels of a
camera are location based measurements. The Occupancy grid map is a classical
location based representation of a map, where each cell of the grid is assigned a
binary occupancy value that specifies if the location n is occupied (m(n) = 1) or
not (m(n) = 0) Elfes (1987); Moravec (1988). The aim of all stochastic mapping
algorithms, independent of indexing, is to estimate the posterior density of the
map
    p(M_k | Z_{1:k}),    (3)

given all the measurements from time 1 to k.


The bin-occupancy filter, which is described in Erdinc et al. (2009), aims to esti-
mate the probability of a target being in a given point. The approach is derived
via a discretized state-space model of the surveillance region, where each grid
cell (denoted bin in this approach) may or may not contain a target. One of the im-
portant assumptions in Erdinc et al. (2009) is that the bins are sufficiently small
so that each bin is occupied by at most one target. In the limiting case, when
the volume of the bins |ν| goes to zero, it is possible to define the bin-occupancy
density

    D_{k|k} \triangleq \lim_{|ν| \to 0} \frac{\Pr(m_k^{(n)} = 1 | Z_{1:k})}{|ν|},    (4)

where \Pr(m_k^{(n)} = 1 | Z_{1:k}) is the probability that bin n is occupied by one target.
The continuous form of the bin-occupancy filter prediction and update equations
are the same as the phd filter equations Erdinc et al. (2009). Furthermore, the
phd is the first moment density or intensity density in point process theory, see
e.g., Mahler (2000, 2007), and a physical interpretation is given in Daley and
Vere-Jones (2003) as the probability that one target is located in the infinitesimal
region (x,x + dx) of the state space, divided by dx. The continuous form of the
physical bin model leads us to a continuous location based map which we denote
intensity based map, and we intend to estimate it with the phd filter.
The bin occupancy filter and the phd filter were developed for target tracking of
point sources; however, the aim of the present contribution is to create a proba-
bilistic location based map of the surroundings of a moving vehicle. One of the
main differences between standard target tracking problems and the building of
a location based map is that many objects, such as guardrails or walls, are typi-
cally not point targets, but extended targets Mahler (2007); Gilholm et al. (2005);
Gilholm and Salmond (2005). Furthermore, there is no interest in keeping track
of the identity of specific objects. Nevertheless, the bin-occupancy filter attempts
to answer the important question: “Is there an object (target) at a given point?”
Erdinc et al. (2009) pose the following assumptions for the bin occu-
pancy filter:
1. The bins are sufficiently small so that each bin is occupied by at most one
target.
2. One target gives rise to only one measurement.
3. Each target generates measurements independently.
4. False alarms are independent of target originated measurements.
5. False alarms are Poisson distributed.
Here, only point 2 needs some extra treatment if the aim of the algorithm is
mapping and not target tracking. It can be argued that the measurements of the
point sources belong to extended objects and that the aim is to create a map of

those point sources. Also in the mapping case, the assumption that there will not
be two measurements from the same point at the same time is justified. The
described relation is modeled by a likelihood function p(Zk |Mk|k ), which maps
the Cartesian map to polar point measurements.

So far in this section the discussion has been quite general and the phd or the
intensity has only been considered as a surface over the surveillance region. The
first practical algorithms to realize the phd filter prediction and measurement
update equations were based on the particle filter, see e.g., Vo et al. (2003); Zajic
and Mahler (2003); Sidenbladh (2003), where the phd is approximated by a large
set of random samples (particles). A Gaussian mixture approximation of the phd
(gm-phd) was proposed by Vo and Ma (2006). The mixture is repre-
sented by a sum of weighted Gaussian components and in particular the mean
and covariance of the components are propagated by the Kalman filter. In this
work the intensity is represented by a Gaussian mixture, since the parametriza-
tion and derivation is simpler than for a particle filter based solution. The mod-
eling of the intensity through a number of Gaussian components makes it also
simpler to account for structures in the map. We will return to these structures
in the next two sections.

The gm-phd filter estimates the posterior intensity, denoted D_{k|k}, as a mixture of
Gaussian densities as,

    D_{k|k} = \sum_{i=1}^{J_{k|k}} w_{k|k}^{(i)} N(m_{k|k}^{(i)}, P_{k|k}^{(i)}),    (5)

where J_{k|k} is the number of Gaussian components and w_{k|k}^{(i)} is the expected number
of point sources covered by the density N(m_{k|k}^{(i)}, P_{k|k}^{(i)}). In Section 3 it is shown how
the intensity is estimated with the gm-phd filter. The Gaussian components are
parametrized by a mean m_{k|k}^{(i)} and a covariance P_{k|k}^{(i)}, which are expressed in
a planar Cartesian coordinate frame, according to

    m_k^{(i)} = [ x_k^{(i)} \; y_k^{(i)} ]^T.    (6)
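To make the mixture representation in (5) concrete, the following pure-Python sketch (helper names are ours, with the 2×2 linear algebra written out by hand) evaluates a Gaussian-mixture intensity at a query point; integrating the surface over the whole map gives the expected number of point sources, i.e., the sum of the weights.

```python
import math

def gaussian2d(x, m, P):
    """Evaluate a 2-D Gaussian density N(x; m, P) with a 2x2 covariance P."""
    dx, dy = x[0] - m[0], x[1] - m[1]
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    # Inverse of a 2x2 matrix, written out explicitly.
    inv = [[ P[1][1] / det, -P[0][1] / det],
           [-P[1][0] / det,  P[0][0] / det]]
    q = dx * (inv[0][0] * dx + inv[0][1] * dy) + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))

def intensity(x, components):
    """Evaluate the Gaussian-mixture intensity (5) at the point x.

    components is a list of (w, m, P) triples, matching the
    parametrization (8); the weights w need not sum to one."""
    return sum(w * gaussian2d(x, m, P) for (w, m, P) in components)

# Two point sources near the origin; the surface peaks where mass concentrates.
comps = [(1.0, (0.0, 0.0), [[1.0, 0.0], [0.0, 1.0]]),
         (0.5, (2.0, 0.0), [[1.0, 0.0], [0.0, 1.0]])]
```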

The aim of the mapping algorithm is to estimate the posterior density (3). The
considered intensity based map is continuous over the surveillance region, thus,
the volume of the bins becomes infinitesimal, and the number of elements in (2)
tends to infinity (N_m → ∞). Furthermore, the intensity is a summary statistic of
the map according to

    p(M_k | Z_{1:k}) \sim p(M_k ; D_{k|k}),    (7)

see e.g., Mahler (2003), and the estimated intensity D_{k|k} is parametrized by

    μ_k^{(i)} = \{ w_k^{(i)}, m_k^{(i)}, P_k^{(i)} \}    (8)

of the Gaussian sum (5). The intensity based map is a multimodal surface with

Figure 1: The observation space of the forward looking radar is denoted Z^{(1)}
and the observation spaces of the side-looking radars are Z^{(2)} and Z^{(3)}.

peaks around areas with many sensor reflections or point sources. It is worth
observing that the map M is described by a location based function (5), with fea-
ture based parametrization (8). The intensity is a distribution and the graphical
representation provides an intuitive description of the map.

An automotive mapping example will be studied to exemplify the methods de-


scribed in each section. In this example, an intensity based map is constructed of
the infrastructure surrounding an ego vehicle equipped with three radar sensors.
The aim is to represent stationary radar reflectors, i.e. reflection points on road
side objects, such as, guard rails or lampposts, using these radar measurements.
Each sensor (s = 1, 2, 3) collects a set of individual observations m = 1, \ldots, N_{z,k}^{(s)}
of range r^{(m)}, range rate \dot{r}^{(m)} and bearing angle ψ^{(m)} to strong radar reflectors
according to

    \{ z_k^{(m)} = [ r^{(m)} \; \dot{r}^{(m)} \; ψ^{(m)} ]_k^T \}_{m=1}^{N_{z,k}^{(s)}}.    (9)

The three sensors are one forward looking 77 GHz mechanically scanning fmcw
radar, with measurement space Z^{(1)}, and two medium distance 24 GHz radars,
with measurement spaces Z^{(2)} and Z^{(3)}, which are mounted on the vehicle as
illustrated in Figure 1. The radars constitute a multi-sensor system, measuring
at different ranges and in different directions, but with an overlap of observa-
tion spaces. One of the most significant advantages with the phd filter is that
this framework scales very well to multi-sensor multi-target problems, and the
classically hard sensor fusion task is solved more or less automatically. A coordi-
nate frame E is attached to the moving ego vehicle, see Figure 3. The trajectory
x1:k = x1 , . . . , xk of the ego vehicle is assumed to be a priori known in this work,
and it is defined through the sequence of all its positions, orientations and veloc-
ities relative to the world coordinate frame W up to time k.
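Since the map (6) is Cartesian while the detections (9) are polar, each detection must be projected into the world frame using the known ego trajectory before it can update the map. A minimal sketch of that projection (flat-ground assumption, sensor mounting offsets omitted; the function name is ours):

```python
import math

def detection_to_map(r, psi, ego_x, ego_y, ego_yaw):
    """Project a polar radar detection (range r, bearing psi, in the ego
    frame E) into the world frame W, given the known ego pose."""
    # Detection in the ego frame.
    xe = r * math.cos(psi)
    ye = r * math.sin(psi)
    # Rotate by the ego yaw and translate by the ego position.
    xw = ego_x + xe * math.cos(ego_yaw) - ye * math.sin(ego_yaw)
    yw = ego_y + xe * math.sin(ego_yaw) + ye * math.cos(ego_yaw)
    return xw, yw
```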

Note, that the map is parametrized in a very compact format (8), and that the
parameters can easily be sent over the can-bus to decision systems in a car.

3 GM-PHD Filter
The phd filter recursion is described in e.g., Mahler (2003), the bin occupancy
filter is described in Erdinc et al. (2009) and the gm-phd filter recursion is de-
scribed in Vo and Ma (2006). This section intends to summarize the main algo-
rithmic concept of those contributions and apply the filter to the mapping appli-
cation. The mentioned filters are primarily developed for multiple-target track-
ing, where, at least for the phd filter, the targets are defined as a random finite
set, and where the extraction of the most likely target states at the end of each
recursion cycle is an important output. However, the output for the proposed
mapping is the intensity Dk|k , which describes the density of sensor reflections or
point sources. The filter recursion is given by the time update in Section 3.1 and
the measurement update in Section 3.2 below.

3.1 Time Evolution


Consider the bin occupancy model: a bin can be occupied at time k + 1 if it
already was occupied at time k, or if a new point source appears in the map at
time k + 1. A new source can appear either by spontaneous birth, as a completely
new object, or by spawning from an existing object.
In the mapping application the following definitions are made: if a bin becomes oc-
cupied independently of other bins' occupancy it is denoted birth, whereas if a bin
becomes occupied depending on the occupancy of other bins it is denoted spawn.
The latter is for example the case if radar reflections stem from an extended ob-
ject, such as a guardrail, and new point sources appear on other positions on the
same object. The prediction of the intensities is given by the sum of the exist-
ing intensity Ds,k+1|k , the spawn intensity Dβ,k+1|k and the birth intensity Db,k+1
according to
Dk+1|k = Ds,k+1|k + Dβ,k+1|k + Db,k+1 . (10)
The prediction for the existing intensity is given by

    D_{s,k+1|k} = p_{S,k+1} \sum_{i=1}^{J_{k|k}} w_{k|k}^{(i)} N(m_{k+1|k}^{(i)}, P_{k+1|k}^{(i)}),    (11)

where the Gaussian components N(m_{k+1|k}^{(i)}, P_{k+1|k}^{(i)}) are derived using the time up-
date step of, e.g., the Kalman filter. The probability of survival is denoted p_S. The
spawn intensity is a Gaussian mixture of the form

    D_{β,k+1|k} = \sum_{j=1}^{J_{β,k+1}} w_{β,k+1|k}^{(j)} N(m_{β,k+1|k}^{(j)}, P_{β,k+1|k}^{(j)}),    (12)

where w_{β,k+1|k}^{(j)}, m_{β,k+1|k}^{(j)} and P_{β,k+1|k}^{(j)} for j = 1, \ldots, J_{β,k+1}, are model
parameters that determine the shape of the spawn intensity. In the original
Gaussian mixture phd filter Vo and Ma (2006) the components
N(m_{β,k+1|k}^{(j)}, P_{β,k+1|k}^{(j)}) are in the proximity

of their parents. However, in the mapping application the existing stationary point
sources are likely to belong to extended objects, which describe some structure
in the map, from which new point sources are likely to be spawned. Typical
examples of such structures are guard rails in automotive examples or coast lines
in terrain mapping. An approach is proposed where the components are derived
using a model-based process, which takes the existing components N(m_{k|k}^{(i)}, P_{k|k}^{(i)})
at time step k as input. A method, which clusters the components and estimates
the model parameters simultaneously, is described in Section 4. The Gaussian
mixture components N(m_{β,k+1|k}^{(j)}, P_{β,k+1|k}^{(j)}) are then sampled from the
estimated models.

The birth intensity is assumed to be a Gaussian mixture of the form

    D_{b,k+1} = \sum_{j=1}^{J_{b,k+1}} w_{b,k+1}^{(j)} N(m_{b,k+1}^{(j)}, P_{b,k+1}^{(j)}),    (13)

where w_{b,k+1}^{(j)}, m_{b,k+1}^{(j)} and P_{b,k+1}^{(j)} for j = 1, \ldots, J_{b,k+1}, are model
parameters that determine the shape of the birth intensity. The components
N(m_{b,k+1}^{(j)}, P_{b,k+1}^{(j)}) are uniformly distributed over the state space.
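The prediction (10) only manipulates weighted components, which the following sketch makes explicit (function names are ours; the spawn (12) and birth (13) mixtures are assumed to be supplied by the model-based processes described above, and components are scalar (w, m, P) triples for brevity):

```python
def phd_time_update(components, p_survival, spawn, birth, predict):
    """PHD prediction (10): surviving components keep their Kalman-predicted
    mean and covariance with weights scaled by p_S as in (11); the spawn and
    birth mixtures are simply appended to the list."""
    survived = [(p_survival * w,) + predict(m, P) for (w, m, P) in components]
    return survived + spawn + birth
```

The expected number of point sources after prediction is the sum of all weights, which is how the three intensities in (10) combine.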

3.2 Measurement Update


The measurement update is given by a sum of intensities according to

    D_{k|k} = (1 − p_{D,k}) D_{k|k−1} + \sum_{z \in Z_k} D_{d,k|k},    (14)

where p_D is the probability of detection. The updated intensity D_{d,k|k} is given by

    D_{d,k|k} = \sum_{i=1}^{J_{k|k−1}} w_k^{(i)}(z) N(m_{k|k}^{(i)}(z), P_{k|k}^{(i)}(z)),    (15)

where the number of Gaussian components is J_{k|k−1} = J_{k−1|k−1} + J_{β,k|k−1} + J_{b,k}.
The Gaussian components N(m_{k|k}^{(i)}(z), P_{k|k}^{(i)}(z)) are calculated by using the mea-
surement update step of, e.g., the Kalman filter. Furthermore, the weights are
updated according to

    w_k^{(i)}(z) = \frac{p_{D,k} w_{k|k−1}^{(i)} q_k^{(i)}(z)}{κ_k + p_{D,k} \sum_{\ell=1}^{J_{k|k−1}} w_{k|k−1}^{(\ell)} q_k^{(\ell)}(z)},    (16a)

    q_k^{(i)}(z) = N(z; η_{k|k−1}^{(i)}, S_{k|k−1}^{(i)}),    (16b)

where η_{k|k−1}^{(i)} is the predicted measurement and S_{k|k−1}^{(i)} is the innovation covari-
ance from the i th component in the predicted intensity. The clutter intensity at
time k is denoted κ_k, and the clutter measurements are assumed to be Poisson
distributed.
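For one measurement z, the weight update (16a) reduces to a normalization over the predicted components plus clutter. A scalar sketch (function name is ours; the likelihoods q_k^{(i)}(z) from (16b) are assumed precomputed):

```python
def update_weights(pred_weights, likelihoods, p_detect, clutter):
    """Weight update (16a) for a single measurement z: likelihoods[i] is
    q_k^(i)(z) from (16b), clutter is the clutter intensity kappa_k at z."""
    denom = clutter + p_detect * sum(w * q for w, q in zip(pred_weights, likelihoods))
    return [p_detect * w * q / denom for w, q in zip(pred_weights, likelihoods)]
```

With no clutter and unit detection probability the updated weights for one measurement sum to one; a nonzero clutter intensity absorbs part of that mass, which is how false alarms are discounted.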

4 Joint Clustering and Estimation

The spawning process was briefly introduced in Section 3.1. Provided that there
exists a certain structure between the single point objects in the map, this
structure can be used to improve the spawning process. This section describes
how the map structure can be found, modeled and estimated. We propose a re-
gression clustering algorithm, which alternates between clustering the Gaussian
components and estimating the model parameters, using a simple weighted least
squares method. The standard K-means cluster algorithm is first described in Sec-
tion 4.1 followed by the joint clustering and estimation algorithm in Section 4.2.
Road edges are typical structures in a map, and it is shown in Section 4.3 how to
apply the joint clustering and estimation algorithm to estimate these.

4.1 K -Means Clustering

Given a set of N_y observations y, the clustering algorithm aims to find a
partition Y = \{Y^{(1)}, \ldots, Y^{(K)}\} of the observations, where K < N_y. Note, that in
this section y represents any arbitrary observation, which should not be confused
with the sensor measurements z. The original K-means clustering algorithm, in-
troduced by Lloyd (1982), aims at minimizing the within-cluster sum of
squares of the distances according to

    \min_{Y^{(1)}, \ldots, Y^{(K)} : \, Y^{(1)} \uplus \ldots \uplus Y^{(K)} = Y} \;
    \sum_{k=1}^{K} \sum_{y^{(i)} \in Y^{(k)}} ||y^{(i)} − μ^{(k)}||^2,    (17)

where μ^{(k)} is the mean value of the set Y^{(k)}. A solution is found by alternating
between the assignment step, which assigns each observation to the cluster with
the closest mean, and the update step, which calculates the new means, until con-
vergence, see e.g., the textbooks Bishop (2006); Hastie et al. (2009); Duda et al.
(2001). A study of the convergence properties of the K-means algorithm is given
in MacQueen (1967).
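A one-dimensional sketch of Lloyd's iteration for (17) (assign each observation to the nearest mean, recompute the means, repeat until the means stop moving; function name is ours):

```python
def kmeans(points, means, iters=100):
    """Lloyd's K-means for (17), in one dimension: alternate between the
    assignment step (closest mean) and the update step (new means)."""
    clusters = [[] for _ in means]
    for _ in range(iters):
        clusters = [[] for _ in means]
        for p in points:
            k = min(range(len(means)), key=lambda j: (p - means[j]) ** 2)
            clusters[k].append(p)
        new_means = [sum(c) / len(c) if c else means[k]
                     for k, c in enumerate(clusters)]
        if new_means == means:          # converged: assignments are stable
            break
        means = new_means
    return means, clusters
```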

4.2 Regression Clustering

Now, suppose that the observations y^{(i)} of the set Y^{(k)} are produced by a regres-
sion model, with the regressors ϕ^{(i)}, according to

    y^{(i)} = (ϕ^{(i)})^T θ^{(k)} + e^{(i)},  y^{(i)} \in Y^{(k)},  e^{(i)} \sim N(0, R^{(i)}),    (18)

and that the objective is to identify the parameters θ^{(k)} for every expected clus-
ter k = 1, \ldots, K. The observations y^{(i)} within one set Y^{(k)} are chosen to fit the
common parameter θ^{(k)}, i.e., the aim is to minimize the sum of squared residuals
within each cluster according to

    \min_{Y^{(1)} \uplus \ldots \uplus Y^{(K)} = Y} \; \min_{θ^{(k)}} \;
    \sum_{k=1}^{K} \sum_{y^{(i)} \in Y^{(k)}} ||y^{(i)} − (ϕ^{(i)})^T θ^{(k)}||^2_{R^{(i)}},    (19)

using the norm ||x||^2_R = x^T R^{−1} x. This can be done by first clustering all observa-
tions using K-means (17) and thereafter estimating the parameters θ^{(k)} for each
cluster. However, we propose a joint clustering and estimation algorithm, which
instead of clustering the observations around mean values μ^{(k)}, clusters them
around models (ϕ^{(i)})^T θ^{(k)}. Similar methods are used to solve for piecewise affine
systems, see e.g., Roll (2003); Ferrari-Trecate et al. (2003), and for switching
dynamic systems, see e.g., Petridis and Kehagias (1998), where the observation
regions and submodels are identified iteratively. A solution is found by alternating
between an assignment step and an estimation step until convergence. These
steps are described in detail below.

Assignment Step

Given an initial guess of K parameters θ̂_0^{(k)} and covariances P_0^{(k)}, for k = 1, \ldots, K;
each observation y^{(i)} is first assigned to the cluster with the smallest estimation
error according to

    Y_τ^{(k)} = \{ y^{(i)} : ||y^{(i)} − (ϕ^{(i)})^T θ̂_τ^{(k)}||_{S_τ^{(k,i)}}
                \leq ||y^{(i)} − (ϕ^{(i)})^T θ̂_τ^{(j)}||_{S_τ^{(j,i)}}, \; \forall j \},    (20a)

using the norm ||x||^2_S = x^T S^{−1} x, where

    S_τ^{(k,i)} = (ϕ^{(i)})^T P_τ^{(k)} ϕ^{(i)} + R^{(i)}.    (20b)

The covariance of the parameter estimate, denoted P_τ^{(k)} = Cov θ̂_τ^{(k)}, is calculated
in the estimation step. Note, that the discrete iteration index is denoted τ here.

Estimation Step

The second step is the update step, where new parameters θ̂^{(k)} are estimated
for each cluster. Any estimation algorithm may be used, e.g., the weighted least
squares according to

    θ̂_{τ+1}^{(k)} = \Big( \sum_{y^{(i)} \in Y_τ^{(k)}} ϕ^{(i)} (R^{(i)})^{−1} (ϕ^{(i)})^T \Big)^{−1}
                 \sum_{y^{(i)} \in Y_τ^{(k)}} ϕ^{(i)} (R^{(i)})^{−1} y^{(i)},    (21a)

    P_{τ+1}^{(k)} = \Big( \sum_{y^{(i)} \in Y_τ^{(k)}} ϕ^{(i)} (R^{(i)})^{−1} (ϕ^{(i)})^T \Big)^{−1}.    (21b)

The assignment step (20) and estimation step (21) are iterated until the assign-
ments no longer change.
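A sketch of this alternation between (20) and (21) for scalar models y = φ θ^{(k)} + e, one parameter per cluster (an illustrative simplification of our choosing; the full road-edge version uses the cubic regressor (25) and 4×4 information matrices):

```python
def regression_cluster(data, thetas, R=1.0, iters=50):
    """Joint clustering and estimation, (20)-(21), for the scalar model
    y = phi * theta^(k) + e with noise variance R."""
    P = [1e3] * len(thetas)                      # parameter variances P_tau^(k)
    assign = None
    for _ in range(iters):
        # Assignment step (20): closest model in the S-normalized metric.
        new_assign = []
        for phi, y in data:
            S = [phi * P[k] * phi + R for k in range(len(thetas))]
            k = min(range(len(thetas)),
                    key=lambda j: (y - phi * thetas[j]) ** 2 / S[j])
            new_assign.append(k)
        if new_assign == assign:                 # assignments stopped changing
            break
        assign = new_assign
        # Estimation step (21): weighted least squares per cluster.
        for k in range(len(thetas)):
            info = sum(phi * phi / R for (phi, y), a in zip(data, assign) if a == k)
            if info > 0:
                P[k] = 1.0 / info
                thetas[k] = P[k] * sum(phi * y / R
                                       for (phi, y), a in zip(data, assign) if a == k)
    return thetas, assign
```

Given points drawn from two lines of different slope, the iteration separates them and recovers both slopes, mirroring how the road-edge clusters in Section 4.3 settle onto the left and right edges.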

4.3 Road Edge Estimation


To illustrate the joint clustering and estimation algorithm proposed above, we
return to the example discussed in Section 2. In this example, it is likely that
new stationary objects are spawned from road-side objects, such as guard rails.
Consequently, we propose to use a simplified version of the road edge model
in Lundquist and Schön (2009) in the joint clustering and estimation algorithm
to find the road edge in the current map. From this estimate of the road edge the
spawning intensity can be described.
The road edge is described by a polynomial model, linear in its parameters,

    y = a_0^{(k)} + a_1 x + a_2 x^2 + a_3 x^3,    (22)

given the x and y-coordinates of the Gaussian components, and expressed in the
vehicle's coordinate frame E. Furthermore, assume that the left and right edge of
the road are approximately parallel and that they can be modeled using at most
K polynomials (22). The polynomials only differ by the lateral distances a_0^{(k)} for
the k = 1, \ldots, K clusters.
Given the J_{k|k} Gaussian components, the parameters

    θ^{(k)} = [ a_0^{(k)} \; a_1 \; a_2 \; a_3 ]^T    (23)

are estimated by rewriting the linear predictor (22) according to

    y^{(i)} = (ϕ^{(i)})^T θ^{(k)},    (24)

where the regressors are given by

    ϕ^{(i)} = [ 1 \; x^{(i)} \; (x^{(i)})^2 \; (x^{(i)})^3 ]^T    (25)

for the mean values of the components (8) in cluster k. The values of the pa-
rameters θ^{(k)} are found by minimizing the estimation error within each cluster
according to (19), i.e.,

    \min_{Y^{(1)} \uplus \ldots \uplus Y^{(K)} = Y} \; \min_{θ^{(k)}} \;
    \sum_{k=1}^{K} \sum_{y^{(i)} \in Y^{(k)}} ||y^{(i)} − (ϕ^{(i)})^T θ^{(k)}||^2_{R^{(i)}}.    (26)
In this case the observations y^{(i)} are the mean values m^{(i)} = [ x^{(i)} \; y^{(i)} ]^T of the
Gaussian components and the noise covariances R^{(i)} are the estimated weighted
covariances P^{(i)}/w^{(i)}. The parameters θ^{(k)} are estimated iteratively using the as-
signment step (20) and the estimation step (21). Since radar measurements are
also obtained from objects on the opposite side of the freeway, the states are clus-
tered into four sets. A fair initial guess for K = 4 clusters could be a_0^{(1)} = 10,
a_0^{(2)} = −10, a_0^{(3)} = 30, a_0^{(4)} = −30 and a_1 = a_2 = a_3 = 0.
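The cubic regressor (25) and linear predictor (24) are simple to write down explicitly (function names are ours):

```python
def regressor(x):
    """Cubic regressor phi^(i) in (25), evaluated at longitudinal position x."""
    return [1.0, x, x ** 2, x ** 3]

def predict_y(x, theta):
    """Linear predictor (24): y = phi(x)^T theta, with theta as in (23)."""
    return sum(p * t for p, t in zip(regressor(x), theta))
```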
Figure 2 shows the four resulting clusters in two traffic situations. The stars are
the mean values of the Gaussian components, which are used as input to the joint

clustering and estimation algorithm, and the lines are the resulting road edge
models ϕ^T θ^{(k)}. The past and future path of the ego vehicle is shown by black
dots in the figure and can be used as a good reference that the road edges are
correctly estimated. The number of components in each cluster indicates how
many objects should be spawned from the cluster. The spawning along the
road edges is described in Section 6.2.

5 Merging
The Gaussian mixture formulation of the phd filter tends to suffer from a high
computational complexity as the number of mixture components, Jk|k , in (5)
grows rapidly. To keep the complexity under control, methods are typically used
to prune mixture components with too low weight w_{k|k}^{(i)}, which make a small
contribution to the posterior density, and to merge components that are similar. The
general merging process is discussed in Section 5.1. However, while keeping the
complexity in check, it is important not to lose too much information in the pro-
cess. The errors introduced in pruning and merging are analyzed in Clark and
Vo (2007). In Section 5.2, we propose a general algorithm, which can be used
for merging Gaussian components if more information about the system is avail-
able. The state space is often chosen since it is a convenient space to perform
tracking in, i.e., to predict and update tracks in. However, as described in Sec-
tion 5.2, Gaussian components may be better merged in another space, in which
the system requirements are easier to describe.

Typically there exists some structure in a map. By exploiting this structure in the
merging of the components, it is possible to maintain the information in the map,
while reducing the number of components needed to describe it. The algorithm
proposed in Section 5.2 is applied to the road map example in Section 5.3. The
model-based merging method exploits the map's structure to find a more efficient
representation of the posterior intensity in (5).

5.1 Background
The Gaussian mixture posterior intensity is given in (14). The aim of the merging
is to compute a reduced Gaussian mixture, with ℓ < J_{k|k} components, that is close
to the original intensity according to some distance measure d.

Some different methods for Gaussian mixture reduction by means of merging


components exist, and have mainly been proposed for Gaussian sum filters.
Alspach proposed a method to merge two Gaussian components with the same
covariance Sorenson and Alspach (1971). The joining algorithm, proposed by
Salmond (1990), evaluates the distances between every pair of compo-
nents and successively pairs those components with the smallest distance. To
avoid scaling problems, which may occur depending on the size of the elements
in the state vector, Mahalanobis distance measure is often used, see e.g., Bar-
Shalom et al. (2001).


Figure 2: The four clusters, describing the road edges, are illustrated with
different colors, and can be compared with the vehicle's driven path indicated
with black dots. The lines are a “snapshot” estimate, based on the Gaussian
components at one time step k. The driver's view is shown in Figure 5a and
5b for the two traffic scenarios in (a) and (b), respectively.

The clustering algorithm, also presented in Salmond (1990), merges Gaussian


components in groups rather than in pairs. A cluster center (centroid) is cho-
sen as the component with the largest weight and the algorithm then merges all
surrounding components to the centroid. This process is repeated with the re-
maining components until all have been clustered. The distance measure which
is used to represent the closeness of a component i to the centroid j is given by

    d_k^{(i,j)} = \frac{w_k^{(j)} w_k^{(i)}}{w_k^{(j)} + w_k^{(i)}}
    (m_k^{(j)} − m_k^{(i)})^T (P_k^{(j)})^{−1} (m_k^{(j)} − m_k^{(i)}),    (27)

where the distance is normalized to the covariance of the centroid. The factor
w_k^{(j)} w_k^{(i)}/(w_k^{(j)} + w_k^{(i)}) makes it easier to cluster small components while large
components remain as they are. The clustering algorithm is used in the original gm-
phd filter Vo and Ma (2006).
The joining and clustering algorithms are local methods in the sense that closely
lying individual components are merged without considering the total density.
During the last years some global methods, which take the whole Gaussian mix-
ture into account, were published. A cost-function based approach was presented
in Williams and Maybeck (2003), a global cluster approach in Schieferdecker and
Huber (2009) and a bottom-up approach which starts with one component and
successively builds up components to approximate the original mixture in Huber
and Hanebeck (2008). The global methods are typically more accurate at the ex-
pense of higher computational effort, and are therefore not a good choice for the
gm-phd filter.

5.2 Algorithm
Let us introduce an invertible, possibly nonlinear, mapping function $T(\,\cdot\,)$, which
transforms the Gaussian components $\mu$, defined in (8), from the state space to the
merging space. The proposed algorithm consists of the following steps:

1. Transform $\bar{\mu}_k^{(i)} := T(\mu_k^{(i)})$.
2. Merge $\bar{\mu}_k^{(i)}$ using the cluster algorithm in Algorithm 10, resulting in $\tilde{\bar{\mu}}_k^{(i)}$.
3. Inverse transform $\tilde{\mu}_k^{(i)} := T^{-1}(\tilde{\bar{\mu}}_k^{(i)})$.

The merging algorithm and the transformation are exemplified on the road
mapping example in the subsequent Section 5.3.
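These three steps can be sketched as a small wrapper; the function names and the (weight, mean, covariance) tuple layout are our own conventions, and covariance propagation through T (handled with the unscented transform in the road mapping example) is omitted for brevity.

```python
import numpy as np

def merge_in_transformed_space(components, T, T_inv, merge):
    """Merge Gaussian components in a transformed (e.g. road-aligned) space.

    components : list of (weight, mean, covariance) tuples in the state space
    T, T_inv   : invertible mapping and its inverse, applied to the means
    merge      : merging routine (e.g. Algorithm 10) acting in the merging space
    """
    # Step 1: transform each component into the merging space.
    transformed = [(w, T(m), P) for (w, m, P) in components]
    # Step 2: merge closely lying components in the merging space.
    merged = merge(transformed)
    # Step 3: transform the merged components back to the state space.
    return [(w, T_inv(m), P) for (w, m, P) in merged]
```

With T chosen as the road-aligned mapping, components lying along the same road edge become close in the merging space and are grouped together.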

5.3 Road Mapping

The state space is spanned by the Cartesian world coordinate frame. However, for
road mapping it is advantageous to merge Gaussian components lying along the
road. Hence, the merging space is spanned by a road-aligned coordinate frame.
The shape of the road can be described by a polynomial according to (cf. (22))
$$
y^E = \psi_{RE}\, x^E + \frac{c_0}{2} (x^E)^2 + \frac{c_1}{6} (x^E)^3, \qquad (28)
$$

Algorithm 10 Merging a Gaussian mixture by means of the cluster algorithm

Require: the Gaussian components $\{\bar{\mu}_k^{(i)}\}_{i=1}^{J_{k|k}}$ and the threshold $\delta_m$
1: initiate: set $\ell = 0$ and $I = \{1, \ldots, J_{k|k}\}$
2: repeat
3: $\ell := \ell + 1$
4: $j := \arg\max_{i \in I} w_k^{(i)}$
5: $L := \left\{ i \in I \,\middle|\, d_k^{(i,j)} \le \delta_m \right\}$, where $d_k^{(i,j)}$ is given in (27)
6: $\tilde{w}_k^{(\ell)} = \sum_{i \in L} w_k^{(i)}$
7: $\tilde{m}_k^{(\ell)} = \frac{1}{\tilde{w}_k^{(\ell)}} \sum_{i \in L} w_k^{(i)} m_k^{(i)}$
8: $\tilde{P}_k^{(\ell)} = \frac{1}{\tilde{w}_k^{(\ell)}} \sum_{i \in L} w_k^{(i)} \left( P_k^{(i)} + (\tilde{m}_k^{(\ell)} - m_k^{(i)})(\tilde{m}_k^{(\ell)} - m_k^{(i)})^T \right)$
9: $I := I \setminus L$
10: until $I = \emptyset$
11: return the Gaussian components $\{\tilde{\bar{\mu}}_k^{(i)}\}_{i=1}^{\ell}$
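A direct Python transcription of the clustering loop might look as follows; the array layout and names are ours, and the distance is the weighted Mahalanobis measure of (27) evaluated against the centroid covariance.

```python
import numpy as np

def cluster_merge(ws, ms, Ps, delta_m):
    """Merge a Gaussian mixture with the cluster algorithm (Algorithm 10).

    ws : (J,) weights, ms : (J, n) means, Ps : (J, n, n) covariances
    Returns lists of merged weights, means and covariances.
    """
    I = list(range(len(ws)))
    out_w, out_m, out_P = [], [], []
    while I:
        # Centroid: remaining component with the largest weight.
        j = max(I, key=lambda i: ws[i])
        Pj_inv = np.linalg.inv(Ps[j])
        # Cluster: components within distance (27) of the centroid.
        L = []
        for i in I:
            d = ms[j] - ms[i]
            dist = (ws[j] * ws[i] / (ws[j] + ws[i])) * d @ Pj_inv @ d
            if dist <= delta_m:
                L.append(i)
        # Moment-matched weight, mean and covariance of the cluster.
        w = sum(ws[i] for i in L)
        m = sum(ws[i] * ms[i] for i in L) / w
        P = sum(ws[i] * (Ps[i] + np.outer(m - ms[i], m - ms[i])) for i in L) / w
        out_w.append(w); out_m.append(m); out_P.append(P)
        I = [i for i in I if i not in L]
    return out_w, out_m, out_P
```

Note that the centroid always belongs to its own cluster (its distance is zero), so the loop terminates after at most J iterations.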

where $x^E$ and $y^E$ are expressed in the ego vehicle's coordinate frame E. This model
was introduced by Dickmanns (Dickmanns and Zapp, 1986) and a neat feature of
the model is that its parameters can be interpreted as physical quantities. The angle
between the longitudinal axis of the vehicle and the road lane is $\psi_{RE}$, see Figure 3.
It is assumed that this angle is small and hence the approximation $\sin \psi_{RE} \approx \psi_{RE}$
is used. The curvature parameter is denoted $c_0$. The parameters can be estimated
using computer vision and a lane tracker, or taken from the joint clustering and
estimation method described in the previous section.

The nonlinear transformation T from the world coordinate frame to the road-fixed
coordinate frame is calculated in two steps. First the components $[x^W \; y^W]^T$,
represented in the world coordinate frame W, are transformed to $[x^E \; y^E]^T$, rep-
resented in the ego vehicle coordinate frame E, using
$$
\begin{bmatrix} x^E \\ y^E \end{bmatrix}
= \left( R^{WE} \right)^T \left( \begin{bmatrix} x^W \\ y^W \end{bmatrix} - d_{EW}^W \right), \qquad (29)
$$
where the planar rotation matrix is given by
$$
R^{WE} = \begin{bmatrix} \cos \psi_E & -\sin \psi_E \\ \sin \psi_E & \cos \psi_E \end{bmatrix}, \qquad (30)
$$
and where $\psi_E$ is the angle of rotation from W to E. This angle is referred to as
the yaw angle of the vehicle. In this work it is assumed that the position $d_{EW}^W$ and
orientation $\psi_E$ of the ego vehicle are known. In the second step the components

Figure 3: The world coordinate frame is denoted W , the ego vehicle’s coor-
dinate frame is denoted E and the curved road coordinate frame is denoted
R.

are transformed into the road-aligned coordinate frame according to
$$
\begin{bmatrix} \bar{x}^R \\ \bar{y}^R \end{bmatrix}
= \begin{bmatrix} x^E \\ y^E + \psi_{RE}\, x^E + \frac{c_0}{2} (x^E)^2 + \frac{c_1}{6} (x^E)^3 \end{bmatrix}. \qquad (31)
$$
Some other approximations of the transformation T are described in Eidehall
et al. (2007a). Since the transformation is nonlinear, the unscented transform, see
e.g., Julier and Uhlmann (2004), is used to propagate the Gaussian components
between the two spaces.
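For a single mean vector, the two-step mapping can be sketched as below; the helper name and argument list are ours, covariances would be propagated with the unscented transform as stated above, and the sign convention of (31) is used as written.

```python
import numpy as np

def world_to_road(p_w, d_ew, psi_e, psi_re, c0, c1):
    """Map a point from the world frame W to the road-aligned frame R.

    p_w            : position [x^W, y^W]
    d_ew           : ego vehicle position in W, psi_e : ego yaw angle
    psi_re, c0, c1 : road model parameters of (28)
    """
    # Step 1, eqs. (29)-(30): world frame -> ego vehicle frame.
    c, s = np.cos(psi_e), np.sin(psi_e)
    R_we = np.array([[c, -s], [s, c]])
    x_e, y_e = R_we.T @ (np.asarray(p_w) - np.asarray(d_ew))
    # Step 2, eq. (31): ego frame -> road-aligned frame.
    x_r = x_e
    y_r = y_e + psi_re * x_e + 0.5 * c0 * x_e**2 + (c1 / 6.0) * x_e**3
    return np.array([x_r, y_r])
```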

The covariance $P_k^{(j)}$ in the distance measure (27) is artificially formed to make it
more likely for components lying along the road to be merged. A diagonal covari-
ance matrix $\Sigma \in \mathbb{R}^{2\times 2}$ is created, with its two elements well separated, resulting
in a pin-shaped covariance ellipse, and added to the state covariance according
to
$$
\bar{P}_k^{(j)} = P_k^{(j)} + \Sigma. \qquad (32)
$$
The merging is performed according to Algorithm 10, and the components are
transformed back to the world coordinate frame using the inverse transform T −1 ,
cf. Section 5.2. An example showing the components before and after the merg-
ing is illustrated in Figure 4. Compared with the camera view in Figure 5a, it is
clearly shown that the resulting components are following the road edge.

(a) Before Pruning and Merging    (b) After Pruning and Merging

Figure 4: Figure (a) shows the Gaussian mixture before pruning and merging
and Figure (b) shows the reduced mixture. The illustrated road is estimated
from the camera image, and can be used as a reference here. This is the same
example as in Figure 5a-5c, where also a photo of the driver’s view is given.

6 Road Mapping Example


In this section all parts of the road mapping example, which was introduced in
Section 2 and configured in the preceding sections, are put together. In particular,
the spawning of new targets is exemplified in Section 6.2. The application-dependent
parts of the gm-phd filter recursion are described in Section 6.3. The capability
of the posterior intensity to map the surroundings of the vehicle is shown in Sec-
tion 6.4. However, we start with an overview and definition of the system in
Section 6.1.

6.1 System Model

In this work, variables are defined from different perspectives. The measure-
ments are obtained by the sensors on the ego vehicle and are therefore expressed
in the ego vehicle's coordinate frame, which is denoted E. The states, i.e., the po-
sitions of the stationary objects, are expressed in a world-fixed reference frame W.
The radars are mounted in the front bumper at the positions $E_s$, for the sensors
$s = 1, 2, 3$.

The linear process model
$$
m_{k+1} = m_k + w_k, \qquad w_k \sim \mathcal{N}(0, Q), \qquad (33a)
$$
of the Gaussian components allows for some motion, even though the objects are
stationary. The reason is that one Gaussian component covers many stationary
point sources and the mean value will change if sources are added or removed.
The nonlinear measurement equation
$$
z_k = h(m_k) + e_k, \qquad e_k \sim \mathcal{N}(0, R), \qquad (33b)
$$
describes how a measurement (9), i.e., the range $r$, the range rate $\dot{r}$ and the bearing
angle $\psi$, relates to the Gaussian components $m_k$. The nonlinear sensor model is
given by
$$
h(x_k^{(i)}) = \begin{bmatrix}
\sqrt{\left(x^{(i)} - x_{E_sW}^W\right)^2 + \left(y^{(i)} - y_{E_sW}^W\right)^2} \\[1ex]
-v \cos\left(\arctan\dfrac{y^{(i)} - y_{E_sW}^W}{x^{(i)} - x_{E_sW}^W} - \psi_{E_sW}\right) \\[1ex]
\arctan\dfrac{y^{(i)} - y_{E_sW}^W}{x^{(i)} - x_{E_sW}^W} - \psi_{E_sW}
\end{bmatrix}, \qquad (34)
$$
where $[x_{E_sW}^W \; y_{E_sW}^W]^T$ is the position and $\psi_{E_sW}$ the orientation of the radar sen-
sor $s = 1, 2, 3$ in W, see Figure 3. The velocity of the ego vehicle is denoted $v$.
The range rate is the relative velocity between the source and the moving vehicle.
It is primarily included in the measurement vector to be used to separate stationary
sources from moving targets in the gating process, cf. (39).
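A sketch of (34) for one stationary source and one sensor is given below; we use arctan2 in place of arctan to keep the bearing in the correct quadrant, which is an implementation choice not spelled out in the text.

```python
import numpy as np

def radar_measurement(m, sensor_pos, sensor_psi, v):
    """Predicted radar measurement (34) of a stationary point source.

    m          : source position [x, y] in the world frame W
    sensor_pos : radar position [x, y] in W
    sensor_psi : radar orientation in W
    v          : ego vehicle speed
    Returns [range, range rate, bearing].
    """
    dx, dy = np.asarray(m) - np.asarray(sensor_pos)
    r = np.hypot(dx, dy)              # range
    bearing_w = np.arctan2(dy, dx)    # bearing in the world frame
    psi = bearing_w - sensor_psi      # bearing relative to the sensor
    rdot = -v * np.cos(psi)           # range rate of a stationary source
    return np.array([r, rdot, psi])
```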

6.2 Spawn Process

The left and right road edges are modeled with the polynomial (22) and with
the parameter vectors θ (1) and θ (2) which were estimated in Section 4.3. The
parameters θ (3) and θ (4) are assumed to belong to lines further away, e.g., the
other side of the road, and are therefore not considered here. The models are
used to create a prior intensity, which is a Gaussian mixture of the form (12)
along the road edges. The road model (22) is expressed in vehicle coordinates
and after having derived the Gaussian mixture, the components are transformed
into the world frame W .
A number $J_{\beta,k+1}/2$ of $x^E$-coordinates, denoted $\{x^{E,(j)}\}_{j=1}^{J_{\beta,k+1}/2}$, are chosen in the
range from 0 to the maximum range of the radar sensor. The corresponding
$y^E$-coordinates $\{y^{E,(j)}\}_{j=1}^{J_{\beta,k+1}/2}$ and $\{y^{E,(j)}\}_{j=J_{\beta,k+1}/2+1}^{J_{\beta,k+1}}$ are derived by using the road
border model (22) and the parameters $\hat{\theta}^{(1)}$ and $\hat{\theta}^{(2)}$, respectively. The coordi-
nates form the mean values of the positions of the Gaussian components on the
road edges according to
$$
m_{\beta,k+1}^{E,(j)} = \begin{bmatrix} x^{E,(j)} & y^{E,(j)} \end{bmatrix}^T. \qquad (35a)
$$
The covariance of the Gaussian components is given by the diagonal matrix
$$
P_{\beta,k+1}^{E,(j)} = \begin{bmatrix} \left(\sigma_x^E\right)^2 & 0 \\ 0 & \left(\sigma_y^E(x^{E,(j)})\right)^2 \end{bmatrix}, \qquad (35b)
$$
where it is assumed that the variance of the y-coordinate increases with the x-
distance, i.e., $\sigma_y^E(x^{E,(j)})$ depends on $x^{E,(j)}$, to model the increased uncertainty in
the shape of the road at long distances.

So far the derivations are accomplished in the ego vehicle's coordinate frame E,
but since the map is expressed in the world frame W, the components of the
Gaussian mixture are transformed into the world coordinate frame according to
$$
m_{\beta,k+1}^{(j)} = R^{WE} m_{\beta,k+1}^{E,(j)} + d_{EW}^W, \qquad (36a)
$$
$$
P_{\beta,k+1}^{(j)} = R^{WE} P_{\beta,k+1}^{E,(j)} \left( R^{WE} \right)^T, \qquad (36b)
$$
to be used in (12). The weight $w_{\beta,k+1}^{(j)}$ represents the expected number of new
targets originating from $m_{\beta,k+1}^{(j)}$.
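The spawn construction (35)-(36) can be sketched as follows; the linear growth law for $\sigma_y^E$ and all parameter names are our assumptions, since the text only states that the variance increases with distance.

```python
import numpy as np

def spawn_road_components(theta, n, x_max, d_ew, psi_e, sigma_x, sigma_y0, sigma_y1):
    """Spawn Gaussian components along one road edge, eqs. (35)-(36).

    theta : road edge parameters (psi_RE, c0, c1) of the model (22)/(28)
    n     : number of components, x_max : maximum sensor range
    """
    psi_re, c0, c1 = theta
    xs = np.linspace(1.0, x_max, n)  # x^E grid along the edge
    ys = psi_re * xs + 0.5 * c0 * xs**2 + (c1 / 6.0) * xs**3
    c, s = np.cos(psi_e), np.sin(psi_e)
    R_we = np.array([[c, -s], [s, c]])
    comps = []
    for x, y in zip(xs, ys):
        m_e = np.array([x, y])
        # Uncertainty in y grows with distance, eq. (35b); linear growth assumed.
        P_e = np.diag([sigma_x**2, (sigma_y0 + sigma_y1 * x)**2])
        # Rotate/translate into the world frame, eq. (36).
        m_w = R_we @ m_e + np.asarray(d_ew)
        P_w = R_we @ P_e @ R_we.T
        comps.append((m_w, P_w))
    return comps
```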

6.3 GM-PHD Filter Recursion

The general gm-phd filter recursion was described in Section 3. Additional de-
tails for the road mapping example are given in this section. The merging of
Gaussian components, described in Section 5.3, is performed in each iteration.

Prediction

The process model (33a) is linear and the Gaussian components in (11) are de-
rived using the Kalman filter prediction step according to
$$
m_{k+1|k}^{(i)} = m_{k|k}^{(i)}, \qquad (37a)
$$
$$
P_{k+1|k}^{(i)} = P_{k|k}^{(i)} + Q. \qquad (37b)
$$
The probability of survival is pS = 0.99.
Measurement Update

The measurement equation (34) is nonlinear and the unscented transform (UT)
is used to propagate the state variables through the measurement equation, see
e.g., Julier and Uhlmann (2004). A set of sigma points and weights, denoted
by $\{\chi_k^{(\ell)}, u^{(\ell)}\}_{\ell=0}^{L}$, is generated from each Gaussian component $\mathcal{N}(x_{k|k-1}^{(i)}, P_{k|k-1}^{(i)})$
using the method described in Julier and Uhlmann (2004). The sigma points are
transformed to the measurement space using (34) to obtain the propagated sigma
points $\zeta_{k|k-1}^{(\ell)}$, and the first and second order moments of the measurement density
are approximated as
$$
\eta_{k|k-1}^{(i)} = \sum_{\ell=0}^{L} u^{(\ell)} \zeta_{k|k-1}^{(\ell)}, \qquad (38a)
$$
$$
S_{k|k-1}^{(i)} = \sum_{\ell=0}^{L} u^{(\ell)} \left( \zeta_{k|k-1}^{(\ell)} - \eta_{k|k-1}^{(i)} \right)\left( \zeta_{k|k-1}^{(\ell)} - \eta_{k|k-1}^{(i)} \right)^T + R_k, \qquad (38b)
$$
$$
G_{k|k-1}^{(i)} = \sum_{\ell=0}^{L} u^{(\ell)} \left( \chi_{k|k-1}^{(\ell)} - m_{k|k-1}^{(i)} \right)\left( \zeta_{k|k-1}^{(\ell)} - \eta_{k|k-1}^{(i)} \right)^T. \qquad (38c)
$$

Note that these variables are also used to derive the updated weights in (16). The
Gaussian components are updated given the information from each new measure-
ment $z_k$ which is in the gate
$$
G_i = \left\{ z_k \,\middle|\, \left( z_k - \eta_{k|k-1}^{(i)} \right)^T \left( S_{k|k-1}^{(i)} \right)^{-1} \left( z_k - \eta_{k|k-1}^{(i)} \right) < \delta_G \right\}, \qquad (39)
$$
for the gate threshold $\delta_G = 11.3$, according to
$$
m_{k|k}^{(i)}(z) = m_{k|k-1}^{(i)} + K^{(i)} \left( z_k - \eta_{k|k-1}^{(i)} \right), \qquad (40a)
$$
$$
P_{k|k}^{(i)} = P_{k|k-1}^{(i)} - K^{(i)} S_{k|k-1}^{(i)} \left( K^{(i)} \right)^T, \qquad (40b)
$$
$$
K^{(i)} = G_{k|k-1}^{(i)} \left( S_{k|k-1}^{(i)} \right)^{-1}. \qquad (40c)
$$
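Equations (38)-(40) amount to a standard unscented Kalman measurement update. The sketch below uses the basic sigma-point set with a scaling parameter kappa, which is our choice; the paper defers the exact sigma-point scheme to Julier and Uhlmann (2004).

```python
import numpy as np

def ut_update(m, P, z, h, R, kappa=0.0):
    """Unscented Kalman measurement update, eqs. (38)-(40).

    m, P : prior mean and covariance; z : measurement
    h    : measurement function; R : measurement noise covariance
    """
    n = len(m)
    # Sigma points and weights (basic unscented transform, scaling kappa).
    S_chol = np.linalg.cholesky((n + kappa) * P)
    chi = [m] + [m + S_chol[:, i] for i in range(n)] + [m - S_chol[:, i] for i in range(n)]
    u = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    u[0] = kappa / (n + kappa)
    zeta = [h(x) for x in chi]
    eta = sum(w * zi for w, zi in zip(u, zeta))                              # (38a)
    S = R + sum(w * np.outer(zi - eta, zi - eta) for w, zi in zip(u, zeta))  # (38b)
    G = sum(w * np.outer(xi - m, zi - eta) for w, xi, zi in zip(u, chi, zeta))  # (38c)
    K = G @ np.linalg.inv(S)           # gain, eq. (40c)
    m_upd = m + K @ (z - eta)          # updated mean, eq. (40a)
    P_upd = P - K @ S @ K.T            # updated covariance, eq. (40b)
    return m_upd, P_upd, eta, S
```

The returned predicted measurement eta and innovation covariance S are exactly the quantities needed for the gate test (39).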

Together, the updated components form the updated intensity in (15). The
posterior intensity at time step k was given in (14). The probability of detec-
tion is adjusted according to
$$
p_{D,k}(x) = \begin{cases} p_D, & \eta_{k|k-1}^{(i)} \in \mathcal{Z}, \\ 0, & \eta_{k|k-1}^{(i)} \notin \mathcal{Z}, \end{cases} \qquad (41)
$$
and the value is $p_D = 10^{-3}$. Gaussian components which are outside the field of view
of the sensor, i.e., the measurement space $\mathcal{Z}^{(s)}$, are hence not updated. The clutter
intensity is $\kappa = 10^{-8}$.
The measurements from the three radar sensors are fused by considering their
measurement times, synchronizing the data, and updating the filter accordingly.

6.4 Experiments and Results


The experiments were conducted with a prototype passenger car, equipped with
the radar sensors configured as shown in Figure 1. No reference data of the road
borders exist, but the vehicle's position was recorded and may be used to illus-
trate that the resulting map is reasonable. One example of the estimated intensity
for a freeway traffic scenario is shown as a bird's eye view in Figure 5c. Darker re-
gions illustrate higher concentrations of point sources, which in this figure stem
from the guardrails to the left and the right of the road. As expected, the path
of the ego vehicle, indicated by the black dots, is in between the two regions of
higher object concentration. The driver's view is shown in Figure 5a.
A second example is shown in Figures 5d and 5b. Here, the freeway exit is clearly
visible in the intensity map, which shows that the map produced by the proposed
method conforms well with the actual road.
The Gaussian components are generally removed from the filter once the vehicle
has passed the corresponding parts of the map. However, to give a more comprehensive
overview, these components are stored and the resulting intensity based map is shown to-
gether with an occupancy grid map (ogm) and an aerial photo in Figure 6. The
top figure is the map produced as described in this contribution. The ogm is
based on the same data set and is used as a comparison with an existing and well es-
tablished algorithm, see e.g., Lundquist et al. (2009). The gray-level of the ogm
indicates the probability of occupancy; the darker the grid cell, the more likely it
is to be occupied. As seen in the figure, the road edges are not modeled as distinctly
with the ogm. The ogm representation of the map is not very efficient, since
large parts of the map are gray, indicating that nothing is known about these ar-
eas. An ogm matrix, often with more than 10000 elements, must be updated and
communicated to other safety functions of the car at each time step. The compact
representation is an advantage of the intensity based map. Each Gaussian com-
ponent is parametrized with 7 scalar values according to (8). Since most maps
are modeled with 10-30 components, this sums to around 70-210 scalar
values, which can easily be sent on the vehicle's can-bus to other safety func-
tions. Finally, the bottom photo is a very accurate aerial photo (obtained from
the Swedish mapping, cadastral and land registration authority), which can be
used as ground truth to visualize the quality of the intensity based map. Note for

(a) Camera view 1 (b) Camera view 2

(c) Intensity map 1 (d) Intensity map 2

Figure 5: The image in (a) shows the driver's view of the intensity map in (c),
and the image in (b) is the driver's view of the intensity map in (d). The
darker the areas in the intensity map, the higher the concentration of objects.
The driver's path is illustrated with black dots and may be used as a reference.

(horizontal axis: position [m])

Figure 6: The top figure shows the intensity based map obtained from radar
measurements collected on a highway (driver's view in Figure 5a). The ogm
in the middle figure serves as a comparison with an existing algorithm. The bot-
tom figure is an aerial photo used as ground truth, where the driven trajectory
is illustrated with a dashed line (©Lantmäteriet Gävle 2010. Medgivande I
2010/1540, reprinted with permission).

example the junction at 1000 m.

7 Conclusion
In this work it is shown how the gm-phd filter can be used to create a map of
the environment using noisy point measurements. The map is represented by
the intensity, which describes the concentration of point sources. Methods to
identify and utilize typical map structures and to create an efficient prior in the
gm-phd-filter have been presented. The structures are identified by a clustering
algorithm which alternates between assigning existing map features to clusters,
and estimating structures within each cluster, until convergence. Furthermore,
these structures are also used to find an efficient representation of the resulting
map.
The usefulness of the proposed mapping approach is demonstrated using auto-
motive radar measurements collected on Swedish freeways. The resulting inten-
sity map of stationary objects, such as guardrails and lampposts, can be viewed as
a map over the concentration of radar reflectors. The parametrization is compact
and the map is easily sent over the can-bus to active safety functions, which can
use it to derive drivable trajectories with low probability of occupancy.
The intensity is a distribution and its graphical representation provides an intu-
itive description of the map, which is useful for decision making and path plan-
ning algorithms. However, the intensity is an approximation of a true map. It


remains for future work to find an expression for the real map and also to de-
velop a metric to quantify the accuracy of the intensity map.

Bibliography
M. Adams, W. S. Wijesoma, and A. Shacklock. Autonomous navigation: Achieve-
ments in complex environments. IEEE Instrumentation & Measurement Mag-
azine, 10(3):15–21, June 2007.
T. Bailey and H. Durrant-Whyte. Simultaneous localization and mapping (SLAM):
Part II. IEEE Robotics & Automation Magazine, 13(3):108–117, September
2006.
Y. Bar-Shalom, X. Rong Li, and T. Kirubarajan. Estimation with Applications to
Tracking and Navigation. John Wiley & Sons, New York, NY, USA, 2001.
C. M. Bishop. Pattern recognition and machine learning. Springer, New York, NY,
USA, 2006. ISBN 0-387-31073-8.
M. Buehler, K. Iagnemma, and S. Singh, editors. Special Issue on the 2007 DARPA
Urban Challenge, Part I-III, volume 25 (8–10). Journal of Field Robotics, 2008.
D. Clark and B.-N. Vo. Convergence analysis of the Gaussian mixture PHD filter.
IEEE Transactions on Signal Processing, 55(4):1204–1212, April 2007.
D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes.
Vol. 1, Elementary theory and methods. Springer, New York, NY, USA, 2nd edition,
2003.
E. D. Dickmanns and A. Zapp. A curvature-based scheme for improving road
vehicle guidance by computer vision. In Proceedings of the SPIE Conference
on Mobile Robots, volume 727, pages 161–198, Cambridge, MA, USA, 1986.
R. O. Duda, P. E. Hart, and D. G. Stork. Pattern classification. John Wiley & Sons,
New York, NY, USA, 2nd edition, 2001.
H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping (SLAM):
Part I. IEEE Robotics & Automation Magazine, 13(2):99–110, June 2006.
A. Eidehall, J. Pohl, and F. Gustafsson. Joint road geometry estimation and vehicle
tracking. Control Engineering Practice, 15(12):1484–1494, December 2007a.
A. Eidehall, J. Pohl, F. Gustafsson, and J. Ekmark. Toward autonomous collision
avoidance by steering. IEEE Transactions on Intelligent Transportation Sys-
tems, 8(1):84–94, March 2007b.
A. Elfes. Sonar-based real-world mapping and navigation. IEEE Journal of
Robotics and Automation, 3(3):249–265, June 1987.
O. Erdinc, P. Willett, and Y. Bar-Shalom. The bin-occupancy filter and its con-
nection to the PHD filters. IEEE Transactions on Signal Processing, 57(11):
4232–4246, November 2009.
G. Ferrari-Trecate, M. Muselli, D. Liberati, and M. Morari. A clustering technique
for the identification of piecewise affine systems. Automatica, 39(2):205–217,
February 2003.

A. Foessel-Bunting, J. Bares, and W. L. Whittaker. Three-dimensional map build-
ing with mmw radar. In R. Chatila, A. Halme, and E. Prassler, editors, Proceed-
ings of the International Conference on Field and Service Robotics, Helsinki,
Finland, June 2001.
K. Gilholm and D. Salmond. Spatial distribution model for tracking extended
objects. IEE Proceedings of Radar, Sonar and Navigation, 152(5):364–371, Oc-
tober 2005.
K. Gilholm, S. Godsill, S. Maskell, and D. Salmond. Poisson models for extended
target and group tracking. In Proceedings of Signal and Data Processing of
Small Targets, volume 5913, pages 230–241, San Diego, CA, USA, August 2005.
SPIE.
T. Hastie, R. Tibshirani, and J. H. Friedman. The elements of statistical learning :
data mining, inference, and prediction. Springer, New York, NY, USA, 2nd edition,
2009.
M. F. Huber and U. D. Hanebeck. Progressive Gaussian mixture reduction. In
Proceedings of the International Conference on Information Fusion, pages 1–8,
Cologne, Germany, July 2008.
S. J. Julier and J. K. Uhlmann. Unscented filtering and nonlinear estimation. Pro-
ceedings of the IEEE, 92(3):401–422, March 2004.
K. Kaliyaperumal, S. Lakshmanan, and K. Kluge. An algorithm for detecting
roads and obstacles in radar images. IEEE Transactions on Vehicular Technol-
ogy, 50(1):170–182, January 2001.
A. Kirchner and C. Ameling. Integrated obstacle and road tracking using a laser
scanner. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages
675–681, Dearborn, MI, USA, October 2000.
A. Kirchner and T. Heinrich. Model based detection of road boundaries with
a laser scanner. In Proceedings of the IEEE Intelligent Vehicles Symposium,
pages 93–98, Stuttgart, Germany, October 1998.
S. Lakshmanan, K. Kaliyaperumal, and K. Kluge. Lexluther: an algorithm for
detecting roads and obstacles in radar images. In Proceedings of the IEEE
Conference on Intelligent Transportation Systems, pages 415–420, Boston, MA,
USA, November 1997.
S. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information
Theory, 28(2):129–137, March 1982.
C. Lundquist and T. B. Schön. Estimation of the free space in front of a moving
vehicle. In Proceedings of the SAE World Congress, SAE paper 2009-01-1288,
Detroit, MI, USA, April 2009.
C. Lundquist, T. B. Schön, and U. Orguner. Estimation of the free space in front of
a moving vehicle. Technical Report LiTH-ISY-R-2892, Department of Electrical
Engineering, Linköping University, Sweden, Linköping, Sweden, April 2009.

C. Lundquist, L. Danielsson, and F. Gustafsson. Random set based road mapping
using radar measurements. In Proceedings of the European Signal Processing
Conference, pages 219–223, Aalborg, Denmark, August 2010.
C. Lundquist, L. Hammarstrand, and F. Gustafsson. Road intensity based map-
ping using radar measurements with a probability hypothesis density filter.
IEEE Transactions on Signal Processing, 59(4):1397–1408, April 2011a.
C. Lundquist, U. Orguner, and F. Gustafsson. Extended target tracking using
polynomials with applications to road-map estimation. IEEE Transactions on
Signal Processing, 59(1):15–26, January 2011b.
B. Ma, S. Lakshmanan, and A. O. Hero. Simultaneous detection of lane and pave-
ment boundaries using model-based multisensor fusion. IEEE Transactions on
Intelligent Transportation Systems, 1(3):135–147, September 2000.
J. MacQueen. Some methods for classification and analysis of multivariate ob-
servations. In Proceedings of the Fifth Berkeley Symposium on Mathematical
Statistics and Probability, volume 1, pages 281–297, Berkeley, CA, USA, July
1967.
R. P. S. Mahler. A theoretical foundation for the Stein-Winter probability hy-
pothesis density (PHD) multitarget tracking approach. In Proceedings of the
national symposium on sensor data fusion, San Antonio, TX, USA, June 2000.
R. P. S. Mahler. Multitarget Bayes filtering via first-order multitarget moments.
IEEE Transactions on Aerospace and Electronic Systems, 39(4):1152–1178, Oc-
tober 2003.
R. P. S. Mahler. Statistical Multisource-Multitarget Information Fusion. Artech
House, Boston, MA, USA, 2007.
H. Moravec. Sensor fusion in certainty grids for mobile robots. AI Magazine, 9
(2):61–74, 1988. ISSN 0738-4602.
J. Mullane, B.-N. Vo, M. D. Adams, and W. S. Wijesoma. A random set for-
mulation for Bayesian SLAM. In Proceedings of the IEEE/RSJ International
Conference on Intelligent Robots and Systems, pages 1043–1049, Nice, France,
September 2008.
J. Mullane, M. D. Adams, and W. S. Wijesoma. Robotic mapping using measure-
ment likelihood filtering. The International Journal of Robotics Research, 28
(2):172–190, 2009. ISSN 0278-3649.
M. Nikolova and A. Hero. Segmentation of a road from a vehicle-mounted radar
and accuracy of the estimation. In Proceedings of the IEEE Intelligent Vehicles
Symposium, pages 284–289, Dearborn, MI, USA, October 2000.
V. Petridis and A. Kehagias. Identification of switching dynamical systems us-
ing multiple models. In Proceedings of the IEEE Conference on Decision and
Control, volume 1, pages 199–204, Tampa, Florida, USA, December 1998.

J. Roll. Local and Piecewise Affine Approaches to System Identification. PhD the-
sis No 802, Linköping Studies in Science and Technology, Linköping, Sweden,
April 2003.
D. J. Salmond. Mixture reduction algorithms for target tracking in clutter. Pro-
ceedings of Signal and Data Processing of Small Targets, 1305(1):434–445, Jan-
uary 1990.
D. Schieferdecker and M.F. Huber. Gaussian mixture reduction via clustering.
In Proceedings of the International Conference on Information Fusion, pages
1536–1543, Seattle, WA, USA, July 2009.
H. Sidenbladh. Multi-target particle filtering for the probability hypothesis den-
sity. In Proceedings of the International Conference on Information Fusion,
volume 2, pages 800–806, Cairns, Australia, July 2003.
R. Smith, M. Self, and P. Cheeseman. A stochastic map for uncertain spatial
relationships. In Proceedings of the International Symposium on Robotics Re-
search, pages 467–474, Cambridge, MA, USA, 1988. MIT Press.
H. W. Sorenson and D. L. Alspach. Recursive Bayesian estimation using Gaussian
sum. Automatica, 7:465–479, July 1971.
J. Sparbert, K. Dietmayer, and D. Streller. Lane detection and street type clas-
sification using laser range images. In Proceedings of the IEEE Conference on
Intelligent Transportation Systems, pages 454–459, Oakland, CA, USA, August
2001.
S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. The MIT Press, Cam-
bridge, MA, USA, 2005.
B.-N. Vo and W.-K. Ma. The Gaussian mixture probability hypothesis density
filter. IEEE Transactions on Signal Processing, 54(11):4091–4104, November
2006.
B.-N. Vo, S. Singh, and A. Doucet. Random finite sets and sequential Monte Carlo
methods in multi-target tracking. In Proceedings of the International Radar
Conference, pages 486–491, Adelaide, Australia, September 2003.
J. L. Williams and P. S. Maybeck. Cost-function-based Gaussian mixture reduc-
tion for target tracking. In Proceedings of the International Conference on
Information Fusion, volume 2, pages 1047–1054, Cairns, Australia, July 2003.
T. Zajic and R. P. S. Mahler. Particle-systems implementation of the PHD
multitarget-tracking filter. In Signal Processing, Sensor Fusion, and Target
Recognition XII, volume 5096, pages 291–299, Orlando, FL, USA, April 2003.
SPIE.
Paper E
Extended Target Tracking Using a
Gaussian-Mixture PHD Filter

Authors: Karl Granström, Christian Lundquist and Umut Orguner

Edited version of the paper:


K. Granström, C. Lundquist, and U. Orguner. Extended target tracking
using a Gaussian-mixture PHD filter. IEEE Transactions on Aerospace
and Electronic Systems, 2011a. Under review.

The paper presents data that was previously published in:


K. Granström, C. Lundquist, and U. Orguner. A Gaussian mixture PHD
filter for extended target tracking. In Proceedings of the International
Conference on Information Fusion, Edinburgh, UK, July 2010.

Preliminary version:
Technical Report LiTH-ISY-R-3028, Dept. of Electrical Engineering, Lin-
köping University, SE-581 83 Linköping, Sweden.
Extended Target Tracking Using a
Gaussian-Mixture PHD Filter

Karl Granström, Christian Lundquist and Umut Orguner

Dept. of Electrical Engineering,


Linköping University,
SE–581 83 Linköping, Sweden
[email protected], [email protected], [email protected]

Abstract
This paper presents a Gaussian-mixture implementation of the phd
filter for tracking extended targets. The exact filter requires process-
ing of all possible measurement set partitions, which is generally in-
feasible to implement. A method is proposed for limiting the number
of considered partitions and possible alternatives are discussed. The
implementation is used on simulated data and in experiments with
real laser data, and the advantage of the filter is illustrated. Suitable
remedies are given to handle spatially close targets and target occlu-
sion.

1 Introduction
In most multi-target tracking applications it is assumed that each target produces
at most one measurement per time step. This is true for cases when the distance
between the target and the sensor is large in comparison to the target’s size. In
other cases however, the target size may be such that multiple resolution cells
of the sensor are occupied by the target. Targets that potentially give rise to
more than one measurement per time step are categorized as extended. Examples
include the cases when vehicles use radar sensors to track other road-users, when
ground radar stations track airplanes which are sufficiently close to the sensor, or
in mobile robotics when pedestrians are tracked using laser range sensors.
Gilholm and Salmond (2005) have presented an approach
for tracking extended targets under the assumption that the number of received
target measurements in each time step is Poisson distributed. Their algorithm
was illustrated with two examples where point targets which may generate more
than one measurement and objects that have a 1-D extension (infinitely thin stick
of length l) are tracked. In Gilholm et al. (2005) a measurement model was sug-
gested which is an inhomogeneous Poisson point process. At each time step, a
Poisson distributed random number of measurements are generated, distributed
around the target. This measurement model can be understood to imply that the

226 Paper E Extended Target Tracking Using a Gaussian-Mixture PHD Filter

extended target is sufficiently far away from the sensor for its measurements to
resemble a cluster of points, rather than a geometrically structured ensemble. A
similar approach is taken in Boers et al. (2006) where track-before-detect theory
is used to track a point target with a 1-D extent.

Using the rigorous finite set statistics (fisst), Mahler has pioneered the recent
advances in the field of multiple target tracking with a set theoretic approach
where the targets and measurements are treated as random finite sets (rfs). This
type of approach allows the problem of estimating multiple targets in clutter and
uncertain associations to be cast in a Bayesian filtering framework Mahler (2007),
which in turn results in an optimal multi-target Bayesian filter. As is the case in
many nonlinear Bayesian estimation problems, the optimal multi-target Bayesian
filter is infeasible to implement except for simple examples and an important
contribution of fisst is to provide structured tools in the form of the statistical
moments of a rfs. The first order moment of a rfs is called probability hypoth-
esis density (phd), and it is an intensity function defined over the state space of
the targets. The so called phd filter Mahler (2003, 2007) propagates in time phds
corresponding to the set of target states as an approximation of the optimal multi-
target Bayesian filter. A practical implementation of the phd filter is provided by
approximating the phds with Gaussian-mixtures (gm) Vo and Ma (2006) which
results in the Gaussian-mixture phd (gm-phd) filter. In the recent work Mahler
(2009), Mahler presented an extension of the phd filter to also handle extended
targets of the type presented in Gilholm et al. (2005).

In this paper, we present a Gaussian-mixture implementation of the phd-filter
for extended targets Mahler (2009), which we call the extended target gm-phd-
filter (et-gm-phd). In this way, we, to some extent, give a practical extension of
the series of work in Gilholm et al. (2005); Vo and Ma (2006); Mahler (2009). An
earlier version of this work was presented in Granström et al. (2010) and the cur-
rent, significantly improved, version includes also practical examples with real
data. For space considerations, we do not repeat the derivation of the phd-filter
equations for extended targets and instead refer the reader to Mahler (2009).

The document is outlined as follows. The multiple extended target tracking prob-
lem is defined in Section 2. The details of the Gaussian-mixture implementation
are given in Section 3. For the measurement update step of the et-gm-phd-filter,
different partitions of the set of measurements have to be considered. A measure-
ment clustering algorithm used to reduce the combinatorially exploding num-
ber of possible measurement partitions is described in Section 4. The proposed
approaches are evaluated using both simulations and experiments. The target
tracking setups for these evaluations are described in Section 5, the simulation
results are presented in Section 6 and results using data from a laser sensor are
presented in Section 7. Finally, Section 8 contains conclusions and thoughts on
future work.

2 Target Tracking Problem Formulation


In previous work, extended targets have often been modeled as targets having
a spatial extension or shape that would lead to multiple measurements, as op-
posed to at most a single measurement. On the other hand, the extended target
tracking problem can be simplified by the assumption that the measurements
originating from a target are distributed approximately around a target reference
point Gilholm and Salmond (2005) which can be e.g. the centroid or any other
point depending on the extent (or the shape) of the target. Though all targets
obviously have a spatial extension and shape, in the latter type of modeling, only
the target reference point is important and the target extent does not need to be
estimated.

The relevant target characteristics that are to be estimated form the target’s state
vector x. Generally, besides kinematic variables such as position, velocity and
orientation, the state vector may also contain information about the target's spatial
extension. As mentioned above, when the target's state does not contain any
variables related to the target extent, the estimation is done as if the target
were a point (i.e. the target reference point), but the algorithms must still handle
the multiple measurements that originate from a target. Hence, in this study,
we use a generalized definition of an extended target, given below, which does
not depend on whether the target extent is estimated or not.

1 Definition (Extended Target). A target which potentially gives rise to more
than one measurement per time step, irrespective of whether the target's extent is
explicitly modeled (and/or estimated) or not.

In this work, to simplify the presentation, no information about the size and
shape of the target is kept in the state vector x, i.e. the target extent is not ex-
plicitly estimated. Nevertheless, it must be emphasized that this causes no loss
of generality as shown by the recent work Granström et al. (2011b) where the re-
sulting et-gm-phd filter is used to handle the joint estimation of size, shape and
kinematic variables for rectangular and elliptical extended targets. We model
both the target states to be estimated, and the measurements collected, as rfss.
The motivation behind this selection is two-fold. First, in many practical systems,
although the sensor reports come with a specific measurement order, the results
of the target tracking algorithms are invariant under the permutations of this or-
der. Hence, modeling the measurements as the elements of a set in which the
order of the elements is irrelevant makes sense. Second, this work unavoidably
depends on the previous line of work Mahler (2009), which is based on such a
selection.

The initial gm-phd work Vo and Ma (2006) does not provide tools for ensuring
track continuity, for which some remedies are described in the literature, see
e.g. Panta et al. (2009). It has however been shown that labels for the Gaussian
components can be included in the filter in order to obtain individual target
tracks, see e.g. Clark et al. (2006). In this work, for the sake of simplicity, labels
are not used; however, they can be incorporated as in Clark et al. (2006) to provide
track continuity.

228 Paper E Extended Target Tracking Using a Gaussian-Mixture PHD Filter
We denote the unknown number of targets as $N_{x,k}$, and the set of target states to
be estimated at time $k$ is $X_k = \{x_k^{(i)}\}_{i=1}^{N_{x,k}}$. The measurement set obtained at time $k$
is $Z_k = \{z_k^{(i)}\}_{i=1}^{N_{z,k}}$, where $N_{z,k}$ is the number of measurements.
The dynamic evolution of each target state $x_k^{(i)}$ in the rfs $X_k$ is modeled using a
linear Gaussian dynamical model,
$$x_{k+1}^{(i)} = F_k x_k^{(i)} + G_k w_k^{(i)}, \qquad (1)$$
for $i = 1, \ldots, N_{x,k}$, where $w_k^{(i)}$ is Gaussian white noise with covariance $Q_k^{(i)}$. Note
that each target state evolves according to the same dynamic model independent
of the other targets.
The number of measurements generated by the $i$th target at each time step is
a Poisson distributed random variable with rate $\gamma\left(x_k^{(i)}\right)$ measurements per scan,
where $\gamma(\,\cdot\,)$ is a known non-negative function defined over the target state space.
The probability of the $i$th target generating at least one measurement is then
given as
$$1 - e^{-\gamma\left(x_k^{(i)}\right)}. \qquad (2)$$
 
The $i$th target is detected with probability $p_D\left(x_k^{(i)}\right)$, where $p_D(\,\cdot\,)$ is a known
non-negative function defined over the target state space. This gives the effective
probability of detection
$$\left(1 - e^{-\gamma\left(x_k^{(i)}\right)}\right) p_D\left(x_k^{(i)}\right). \qquad (3)$$
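As a quick numeric check (our illustration, not the paper's; it uses the simulation values $\gamma = 10$ and $p_D = 0.99$ that appear later in the target tracking setup), the Poisson factor in (3) is nearly one, so the effective detection probability is essentially $p_D$:

```python
import math

def effective_p_d(gamma, p_d):
    """Effective probability of detection, eq. (3): the target must be
    detected and must generate at least one measurement."""
    return (1.0 - math.exp(-gamma)) * p_d

# With gamma = 10 and p_D = 0.99, the factor (1 - e^{-gamma}) is about
# 0.99995, so the effective detection probability is essentially p_D.
p_eff = effective_p_d(10.0, 0.99)
```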

The measurements originating from the $i$th target are assumed to be related to
the target state according to a linear Gaussian model given as
$$z_k^{(j)} = H_k x_k^{(i)} + e_k^{(j)}, \qquad (4)$$
where $e_k^{(j)}$ is white Gaussian noise with covariance $R_k$. Each target is assumed to
give rise to measurements independently of the other targets. We here emphasize,
that in an rfs framework both the set of measurements Zk and the set of target
states Xk are unlabeled, and hence no assumptions are made regarding which
target gives rise to which measurement.
The number of clutter measurements generated at each time step is a Poisson dis-
tributed random variable with rate βFA,k clutter measurements per surveillance
volume per scan. Thus, if the surveillance volume is Vs , the mean number of
clutter measurements is βFA,k Vs clutter measurements per scan. The spatial dis-
tribution of the clutter measurements is assumed uniform over the surveillance
volume.
The aim is now to obtain an estimate of the sets of target states $X^K = \{X_k\}_{k=1}^{K}$
given the sets of measurements $Z^K = \{Z_k\}_{k=1}^{K}$. We achieve this by propagating the
predicted and updated phds, denoted $D_{k|k-1}(\,\cdot\,)$ and $D_{k|k}(\,\cdot\,)$, respectively, of the
set of target states $X_k$, using the phd filter presented in Mahler (2009).

3 Gaussian-Mixture Implementation
In this section, following the derivation of the gm-phd-filter for standard sin-
gle measurement targets in Vo and Ma (2006), a phd recursion for the extended
target case is described. Since the prediction update equations of the extended
target phd filter are the same as those of the standard phd filter Mahler (2009),
the Gaussian mixture prediction update equations of the et-gm-phd filter are the
same as those of the standard gm-phd filter in Vo and Ma (2006). For this rea-
son, here we only consider the measurement update formulas for the et-gm-phd
filter.
The predicted phd has the following Gaussian-mixture representation
$$D_{k|k-1}(x) = \sum_{j=1}^{J_{k|k-1}} w_{k|k-1}^{(j)} \, \mathcal{N}\left(x \,\middle|\, m_{k|k-1}^{(j)}, P_{k|k-1}^{(j)}\right), \qquad (5)$$
where
• $J_{k|k-1}$ is the predicted number of components;
• $w_{k|k-1}^{(j)}$ is the weight of the $j$th component;
• $m_{k|k-1}^{(j)}$ and $P_{k|k-1}^{(j)}$ are the predicted mean and covariance of the $j$th component;
• the notation $\mathcal{N}(x \mid m, P)$ denotes a Gaussian distribution defined over the
variable $x$ with mean $m$ and covariance $P$.
The phd measurement update equation for the extended target Poisson model
of Gilholm et al. (2005) was derived in Mahler (2009). The corrected phd-intensity
is given by the multiplication of the predicted phd and a measurement pseudo-
likelihood function Mahler (2009) $L_{Z_k}$ as
$$D_{k|k}(x \mid Z) = L_{Z_k}(x) \, D_{k|k-1}(x \mid Z). \qquad (6)$$
The measurement pseudo-likelihood function $L_{Z_k}$ in (6) is defined as
$$L_{Z_k}(x) \triangleq 1 - \left(1 - e^{-\gamma(x)}\right) p_D(x) + e^{-\gamma(x)} p_D(x) \sum_{p \angle Z_k} \sum_{W \in p} \omega_p \, \frac{\gamma(x)^{|W|}}{d_W} \prod_{z_k \in W} \frac{\phi_{z_k}(x)}{\lambda_k c_k(z_k)}, \qquad (7)$$

where

• $\lambda_k \triangleq \beta_{FA,k} V_s$ is the mean number of clutter measurements;

• $c_k(z_k) = 1/V_s$ is the spatial distribution of the clutter over the surveillance
volume;

• the notation $p \angle Z_k$ means that $p$ partitions the measurement set $Z_k$ into
non-empty cells $W$;

• the quantities $\omega_p$ and $d_W$ are non-negative coefficients defined for each
partition $p$ and cell $W$, respectively;

• $\phi_{z_k}(x) = p(z_k \mid x)$ is the likelihood function for a single target-generated
measurement, which in this work is a Gaussian density.

The first summation on the right hand side of (7) is taken over all partitions p of
the measurement set Zk . The second summation is taken over all cells W in the
current partition p.

In order to derive the measurement update of the gm-phd-filter, six assumptions
were made in Vo and Ma (2006), which are repeated here for the sake of
completeness.
2 Assumption. Each target evolves and generates observations independently
of one another.

3 Assumption. Clutter is Poisson and independent of target-originated
measurements.

4 Assumption. The predicted multi-target rfs is Poisson.

5 Assumption. Each target follows a linear Gaussian dynamical model, cf. (1),
and the sensor has a linear Gaussian measurement model, cf. (4).

6 Assumption. The survival and detection probabilities are state independent,
i.e. $p_S(x) = p_S$ and $p_D(x) = p_D$.

7 Assumption. The intensities of the birth and spawn rfs are Gaussian-mixtures.

In this paper we adopt all of the above assumptions except that we relax the
assumption on detection probability as follows.
8 Assumption. The following approximation about the probability of detection
function $p_D(\,\cdot\,)$ holds:
$$p_D(x) \, \mathcal{N}\left(x \,\middle|\, m_{k|k-1}^{(j)}, P_{k|k-1}^{(j)}\right) \approx p_D\left(m_{k|k-1}^{(j)}\right) \mathcal{N}\left(x \,\middle|\, m_{k|k-1}^{(j)}, P_{k|k-1}^{(j)}\right) \qquad (8)$$
for all $x$ and for $j = 1, \ldots, J_{k|k-1}$.



Assumption 8 is weaker than Assumption 6 in that (8) is trivially satisfied when
$p_D(\,\cdot\,) = p_D$, i.e. when $p_D(\,\cdot\,)$ is constant. In general, Assumption 8 holds
approximately when the function $p_D(\,\cdot\,)$ does not vary much in the uncertainty
zone of a target determined by the covariance $P_{k|k-1}^{(j)}$. This is true either when
$p_D(\,\cdot\,)$ is a sufficiently smooth function or when the signal to noise ratio (snr) is
high enough such that $P_{k|k-1}^{(j)}$ is sufficiently small. We still note here that
Assumption 8 is only for the sake of simplification rather than approximation, since
$p_D(x)$ can always be approximated as a mixture of exponentials of quadratic
functions (or equivalently as Gaussians) without losing the Gaussian-mixture
structure of the corrected phd, see Vo and Ma (2006). This, however, would cause
a multiplicative increase in the number of components in the corrected phd,
which would in turn make the algorithm need more aggressive pruning and
merging operations. A similar approach to variable probability of detection has
been taken in order to model the clutter notch in ground moving target indicator
target tracking Ulmke et al. (2007).

For the expected number of measurements from the targets, represented by $\gamma(\,\cdot\,)$,
similar remarks apply and we use the following assumption.
9 Assumption. The following approximation about $\gamma(\,\cdot\,)$ holds:
$$e^{-\gamma(x)} \gamma^n(x) \, \mathcal{N}\left(x \,\middle|\, m_{k|k-1}^{(j)}, P_{k|k-1}^{(j)}\right) \approx e^{-\gamma\left(m_{k|k-1}^{(j)}\right)} \gamma^n\left(m_{k|k-1}^{(j)}\right) \mathcal{N}\left(x \,\middle|\, m_{k|k-1}^{(j)}, P_{k|k-1}^{(j)}\right) \qquad (9)$$
for all $x$, $j = 1, \ldots, J_{k|k-1}$ and $n = 1, 2, \ldots$.

The trivial situation $\gamma(\,\cdot\,) = \gamma$, i.e. when $\gamma(\,\cdot\,)$ is constant, is again a special case
where Assumption 9 is satisfied. In general, satisfying Assumption 9 is more
difficult than Assumption 8, and a Gaussian-mixture assumption for $\gamma(\,\cdot\,)$ would
not work due to the exponential function. Nevertheless, Assumption 9 is expected
to hold approximately either when $\gamma(\,\cdot\,)$ is a sufficiently smooth function or when
the signal to noise ratio (snr) is high enough such that $P_{k|k-1}^{(j)}$ is sufficiently small.

With the assumptions presented above, the posterior intensity at time $k$ is a
Gaussian-mixture given by
$$D_{k|k}(x) = D_{k|k}^{ND}(x) + \sum_{p \angle Z_k} \sum_{W \in p} D_{k|k}^{D}(x, W). \qquad (10)$$
The Gaussian-mixture $D_{k|k}^{ND}(\,\cdot\,)$, handling the no detection cases, is given by
$$D_{k|k}^{ND}(x) = \sum_{j=1}^{J_{k|k-1}} w_{k|k}^{(j)} \, \mathcal{N}\left(x \,\middle|\, m_{k|k}^{(j)}, P_{k|k}^{(j)}\right), \qquad (11a)$$
$$w_{k|k}^{(j)} = \left(1 - \left(1 - e^{-\gamma^{(j)}}\right) p_D^{(j)}\right) w_{k|k-1}^{(j)}, \qquad (11b)$$
$$m_{k|k}^{(j)} = m_{k|k-1}^{(j)}, \qquad P_{k|k}^{(j)} = P_{k|k-1}^{(j)}, \qquad (11c)$$

   
where we used the shorthand notations $\gamma^{(j)}$ and $p_D^{(j)}$ for $\gamma\left(m_{k|k-1}^{(j)}\right)$ and
$p_D\left(m_{k|k-1}^{(j)}\right)$, respectively.

The Gaussian-mixture $D_{k|k}^{D}(x, W)$, handling the detected target cases, is given by
$$D_{k|k}^{D}(x, W) = \sum_{j=1}^{J_{k|k-1}} w_{k|k}^{(j)} \, \mathcal{N}\left(x \,\middle|\, m_{k|k}^{(j)}, P_{k|k}^{(j)}\right), \qquad (12a)$$
$$w_{k|k}^{(j)} = \omega_p \, \frac{\Gamma^{(j)} p_D^{(j)}}{d_W} \, \Phi_W^{(j)} \, w_{k|k-1}^{(j)}, \qquad (12b)$$
$$\Gamma^{(j)} = e^{-\gamma^{(j)}} \left(\gamma^{(j)}\right)^{|W|}, \qquad (12c)$$
$$\Phi_W^{(j)} = \phi_W^{(j)} \prod_{z_k \in W} \frac{1}{\lambda_k c_k(z_k)}, \qquad (12d)$$
where the product is over all measurements $z_k$ in the cell $W$ and $|W|$ is the number
of elements in $W$. The coefficient $\phi_W^{(j)}$ is given by
$$\phi_W^{(j)} = \mathcal{N}\left(z_W \,\middle|\, H_W m_{k|k-1}^{(j)}, \; H_W P_{k|k-1}^{(j)} H_W^T + R_W\right) \qquad (12e)$$
and is calculated using
$$z_W \triangleq \bigoplus_{z_k \in W} z_k, \qquad H_W = \big[\underbrace{H_k^T, H_k^T, \cdots, H_k^T}_{|W| \text{ times}}\big]^T, \qquad R_W = \mathrm{blkdiag}(\underbrace{R_k, R_k, \cdots, R_k}_{|W| \text{ times}}).$$
The operation $\bigoplus$ denotes vertical vectorial concatenation. The partition weights
$\omega_p$ can be interpreted as the probability of the partition $p$ being true and are
calculated as
$$\omega_p = \frac{\prod_{W \in p} d_W}{\sum_{p' \angle Z_k} \prod_{W' \in p'} d_{W'}}, \qquad (12f)$$
$$d_W = \delta_{|W|,1} + \sum_{\ell=1}^{J_{k|k-1}} \Gamma^{(\ell)} p_D^{(\ell)} \Phi_W^{(\ell)} w_{k|k-1}^{(\ell)}, \qquad (12g)$$
where $\delta_{i,j}$ is the Kronecker delta. The mean and covariance of the Gaussian
components are updated using the standard Kalman measurement update,
$$m_{k|k}^{(j)} = m_{k|k-1}^{(j)} + K_k^{(j)} \left(z_W - H_W m_{k|k-1}^{(j)}\right), \qquad (13a)$$
$$P_{k|k}^{(j)} = \left(I - K_k^{(j)} H_W\right) P_{k|k-1}^{(j)}, \qquad (13b)$$
$$K_k^{(j)} = P_{k|k-1}^{(j)} H_W^T \left(H_W P_{k|k-1}^{(j)} H_W^T + R_W\right)^{-1}. \qquad (13c)$$

In order to keep the number of Gaussian components at a computationally
tractable level, pruning and merging is performed as in Vo and Ma (2006).
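Two scalar pieces of the update above are easy to isolate in code. The following Python sketch is our illustration, not the authors' implementation, and the numeric inputs are made-up placeholders: it evaluates the no-detection weight (11b) and normalises partition weights as in (12f), given precomputed $d_W$ values.

```python
import math

def no_detection_weight(w_pred, gamma, p_d):
    """Eq. (11b): down-weight a predicted component that produced no detection."""
    return (1.0 - (1.0 - math.exp(-gamma)) * p_d) * w_pred

def partition_weights(d_w_per_partition):
    """Eq. (12f): each partition weight is proportional to the product of its
    cell terms d_W; normalise so the weights sum to one.

    `d_w_per_partition` is a list of partitions, each given as a list of
    (precomputed) d_W values, one per cell.
    """
    products = [math.prod(d_ws) for d_ws in d_w_per_partition]
    total = sum(products)
    return [p / total for p in products]

# A component with expected gamma = 10 measurements and p_D = 0.99 keeps
# almost none of its weight if it generates no detection:
w = no_detection_weight(w_pred=0.5, gamma=10.0, p_d=0.99)

# Two hypothetical partitions with hand-picked d_W values:
omegas = partition_weights([[4.0, 0.5], [1.0, 1.0, 1.0]])
```

The first partition has cell product 2.0 and the second 1.0, so their normalised weights are 2/3 and 1/3.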

4 Partitioning the Measurement Set


As observed in the previous section, an integral part of extended target tracking
with the phd filter is the partitioning of the set of measurements Mahler (2009).
The partitioning is important, since more than one measurement can stem from
the same target. Let us exemplify¹ the process of partitioning with a measurement
set containing three individual measurements, $Z_k = \{z_k^{(1)}, z_k^{(2)}, z_k^{(3)}\}$. This set
can be partitioned in the following different ways:
$$p_1: \; W_1^1 = \{z_k^{(1)}, z_k^{(2)}, z_k^{(3)}\},$$
$$p_2: \; W_1^2 = \{z_k^{(1)}, z_k^{(2)}\}, \; W_2^2 = \{z_k^{(3)}\},$$
$$p_3: \; W_1^3 = \{z_k^{(1)}, z_k^{(3)}\}, \; W_2^3 = \{z_k^{(2)}\},$$
$$p_4: \; W_1^4 = \{z_k^{(2)}, z_k^{(3)}\}, \; W_2^4 = \{z_k^{(1)}\},$$
$$p_5: \; W_1^5 = \{z_k^{(1)}\}, \; W_2^5 = \{z_k^{(2)}\}, \; W_3^5 = \{z_k^{(3)}\}.$$
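The five partitions above can be reproduced mechanically. The following Python sketch (our illustration, not part of the paper) enumerates every partition of a set and shows how the count, the Bell number, explodes with the set size:

```python
def all_partitions(items):
    """Yield every partition of `items`, each partition as a list of cells."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for partial in all_partitions(rest):
        # Put `first` into each existing cell in turn ...
        for i in range(len(partial)):
            yield partial[:i] + [[first] + partial[i]] + partial[i + 1:]
        # ... or let `first` form a cell of its own.
        yield partial + [[first]]

# Three measurements admit exactly the five partitions p1, ..., p5 above.
parts3 = list(all_partitions(["z1", "z2", "z3"]))

# Ten measurements already admit 115 975 partitions (the 10th Bell number).
b10 = len(list(all_partitions(list(range(10)))))
```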

Here, $p_i$ is the $i$th partition, and $W_j^i$ is the $j$th cell of partition $i$. Let $|p_i|$ denote the
number of cells in the partition, and let $|W_j^i|$ denote the number of measurements
in the cell. It is quickly realized that as the size of the measurement set increases,
the number of possible partitions grows very large. In order to have a computa-
tionally tractable target tracking method, only a subset of all possible partitions
can be considered. In order to achieve good extended target tracking results, this
subset of partitions must represent the most likely ones of all possible partitions.

In Section 4.1, we propose a simple heuristic for finding this subset of partitions,
which is based on the distances between the measurements. We here note that
our proposed method is only one instance of a vast number of other clustering
algorithms found in the literature, and that other methods could have been used.
Some well-known alternatives are pointed out, and compared to the proposed
partitioning method, in Section 4.2. A modification of the partitioning approach
to better handle targets which are spatially close is described in Section 4.3.

4.1 Distance Partitioning


Consider a set of measurements $Z = \{z^{(i)}\}_{i=1}^{N_z}$. Our partitioning algorithm relies
on the following theorem.

1 This example was also utilized in Mahler (2009).



10 Theorem. Let $d(\,\cdot\,,\,\cdot\,)$ be a distance measure and $d_\ell \geq 0$ be an arbitrary
distance threshold. Then there is one and only one partition in which any pair of
measurements $z^{(i)}, z^{(j)} \in Z$ that satisfy
$$d\left(z^{(i)}, z^{(j)}\right) \leq d_\ell \qquad (15)$$
are in the same cell.

Proof: The proof is given in Appendix A.1 for the sake of clarity.

Given a distance measure $d(\,\cdot\,,\,\cdot\,)$, the distances between each pair of
measurements can be calculated as
$$\Delta_{ij} \triangleq d\left(z^{(i)}, z^{(j)}\right), \qquad \text{for } 1 \leq i \neq j \leq N_z. \qquad (16)$$
Theorem 10 says that there is a unique partition that leaves all pairs $(i, j)$ of
measurements satisfying $\Delta_{ij} \leq d_\ell$ in the same cell. We give an example algorithm
that can be used to obtain this unique partition in Algorithm 11. We use this
algorithm to generate $N_d$ alternative partitions of the measurement set $Z$ by
selecting $N_d$ different thresholds
$$\{d_\ell\}_{\ell=1}^{N_d}, \qquad d_\ell < d_{\ell+1}, \quad \text{for } \ell = 1, \ldots, N_d - 1. \qquad (17)$$
The alternative partitions contain fewer cells as the $d_\ell$'s increase, and the
cells typically contain more measurements.
The thresholds $\{d_\ell\}_{\ell=1}^{N_d}$ are selected from the set
$$D \triangleq \{0\} \cup \{\Delta_{ij} \mid 1 \leq i < j \leq N_z\}, \qquad (18)$$
where the elements of D are sorted in ascending order. If one uses all of the
elements in D to form alternative partitions, |D| = Nz (Nz − 1)/2 + 1 partitions are
obtained. Some partitions resulting from this selection might still turn out to be
identical, and must hence be discarded so that each partition at the end is unique.
Among these unique partitions, the first (corresponding to the threshold d1 = 0)
would contain Nz cells with one measurement each. The last partition would
have just one cell containing all Nz measurements. Notice that this partitioning
methodology already reduces the number of partitions tremendously.
The smallest and the largest thresholds in the set D on the other hand can still
contain very similar values due to the fact that the measurements are generally
clustered around the targets. In order to further reduce the computational load,
partitions in this work are computed only for a subset of thresholds in the set
D. This subset is determined based on the statistical properties of the distances
between the measurements belonging to the same target.
Suppose we select the distance measure $d(\,\cdot\,,\,\cdot\,)$ as the Mahalanobis distance,
given by
$$d_M\left(z_k^{(i)}, z_k^{(j)}\right) = \sqrt{\left(z_k^{(i)} - z_k^{(j)}\right)^T R_k^{-1} \left(z_k^{(i)} - z_k^{(j)}\right)}. \qquad (19)$$

Algorithm 11 Distance Partitioning

Require: dℓ, ∆i,j, 1 ≤ i ≠ j ≤ Nz.
1: CellNumbers(i) = 0, 1 ≤ i ≤ Nz {Set cells of all measurements to null}
2: CellId = 1 {Set the current cell id to 1}
%Find all cell numbers
3: for i = 1 : Nz do
4:   if CellNumbers(i) = 0 then
5:     CellNumbers(i) = CellId
6:     CellNumbers = FindNeighbors(i, CellNumbers, CellId)
7:     CellId = CellId + 1
8:   end if
9: end for

The recursive function FindNeighbors( · , · , · ) is given as

1: function CellNumbers = FindNeighbors(i, CellNumbers, CellId)
2: for j = 1 : Nz do
3:   if j ≠ i & ∆ij ≤ dℓ & CellNumbers(j) = 0 then
4:     CellNumbers(j) = CellId
5:     CellNumbers = FindNeighbors(j, CellNumbers, CellId)
6:   end if
7: end for
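A compact Python rendering of Algorithm 11 could look as follows. This is our sketch, not the authors' code: it uses a plain Euclidean distance as a stand-in for the Mahalanobis distance discussed next, and an explicit stack instead of recursion.

```python
import math

def distance_partition(points, d_ell):
    """Sketch of Algorithm 11, iterative rather than recursive: every pair
    of measurements no farther apart than d_ell ends up in the same cell,
    which gives the unique partition of Theorem 10.
    """
    n = len(points)
    cell = [0] * n            # 0 means "no cell assigned yet"
    next_id = 1
    for i in range(n):
        if cell[i] == 0:
            cell[i] = next_id
            stack = [i]       # flood-fill the neighbours of measurement i
            while stack:
                j = stack.pop()
                for k in range(n):
                    if cell[k] == 0 and math.dist(points[j], points[k]) <= d_ell:
                        cell[k] = next_id
                        stack.append(k)
            next_id += 1
    return cell

# Two clusters and one isolated point; a threshold of 5 yields three cells.
pts = [(0, 0), (1, 0), (0, 1), (100, 100), (101, 100), (50, 0)]
cells = distance_partition(pts, 5.0)
```

Running the same set through increasing thresholds produces the sequence of coarser partitions described in the text: a very small threshold gives one cell per measurement, a very large one a single cell.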

Then, for two target-originated measurements $z_k^{(i)}$ and $z_k^{(j)}$ belonging to the same
target, $d_M\left(z_k^{(i)}, z_k^{(j)}\right)$ is $\chi^2$ distributed with degrees of freedom equal to the
measurement vector dimension. Using the inverse cumulative $\chi^2$ distribution
function, which we denote here as invchi2$(\,\cdot\,)$, a unitless distance threshold $\delta_{P_G}$
can be computed as
$$\delta_{P_G} = \text{invchi2}(P_G), \qquad (20)$$
for a given probability $P_G$. We have seen in early empirical simulations that good
target tracking results are achieved with partitions computed using the subset
of distance thresholds in $D$ satisfying the condition $\delta_{P_L} < d_\ell < \delta_{P_U}$ for lower
probabilities $P_L \leq 0.3$ and upper probabilities $P_U \geq 0.8$.
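For the two-dimensional measurements considered here, the inverse cumulative $\chi^2$ function has a closed form, since with 2 degrees of freedom the cdf is $F(x) = 1 - e^{-x/2}$. The sketch below is our addition (for other measurement dimensions a numerical routine would be needed); it computes the unitless thresholds for $P_L = 0.3$ and $P_U = 0.8$:

```python
import math

def invchi2_2dof(p_g):
    """Inverse chi-square cdf for 2 degrees of freedom.

    For 2 dof the cdf is F(x) = 1 - exp(-x/2), so the inverse has the closed
    form -2 ln(1 - p). For other dimensions a numerical routine (for
    instance scipy.stats.chi2.ppf) would be needed.
    """
    return -2.0 * math.log(1.0 - p_g)

# Unitless gating thresholds for the lower and upper probabilities used in
# the text (P_L = 0.3, P_U = 0.8):
delta_lower = invchi2_2dof(0.3)   # about 0.71
delta_upper = invchi2_2dof(0.8)   # about 3.22
```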

As a simple example, if there are four targets present, each with an expected
number of measurements of 20, and clutter measurements are generated with
$\beta_{FA} V_s = 50$, then the mean number of measurements collected at each time step
would be 130. For 130 measurements, the number of all possible partitions is given
by the Bell number $B_{130} \approx 10^{161}$ Rota (1964). Using all of the thresholds in the
set $D$, 130 different partitions would be computed on average. Using the lower
and upper probabilities $P_L = 0.3$ and $P_U = 0.8$, Monte Carlo simulations show
that on average only 27 partitions are computed, a reduction in computational
complexity of several orders of magnitude.

4.2 Alternative Partitioning Methods


An alternative to using the proposed algorithm is to use a method which takes
as input the final desired number of cells, denoted K, and then divides the set
of measurements into K cells. The perhaps most well known example of such
a method is K-means clustering, see e.g. the textbooks Bishop (2006); Hastie
et al. (2009). In the et-gm-phd-filter, one needs to generate alternative partitions,
corresponding to values of K between a lower and an upper threshold, denoted
KL and KU . While the values for the partitioning parameters δPL and δPU in dis-
tance partitioning can be chosen using some intuitive arguments as above, it is
less clear how KL and KU should be selected. One idea is to set KL = 1 and
KU = |Zk |, which corresponds to δPU = ∞ and δPL = 0 in Distance Partitioning.
Doing so would significantly increase the computational complexity compared
to Distance Partitioning, since a considerably higher number of partitions must
be considered.

Another major difference between the suggested distance partitioning and K-
means clustering is highlighted in Figure 1, which shows a measurement set that
means clustering is highlighted in Figure 1, which shows a measurement set that
consists of Nz,k = 13 measurements, 10 of which are clustered in the northeast of
the surveillance region and the other three are scattered individually. The intu-
itive way to cluster this set of measurements is into 4 clusters, which is achieved
by Distance Partitioning using a distance threshold of about 25 m, as shown in
the left plot of Figure 1. When there is a large number of measurements concen-
trated in one part of the surveillance area, as is the case in this example, K-means
clustering tends to split those measurements into smaller cells, and merge re-
maining but far away measurements into large cells which is illustrated in the
right plot of Figure 1.

The main reason behind this shortcoming of K-means is the initialization of the
algorithm, where the initial cluster centers are chosen by uniform sampling. In or-
der to overcome this problem, modifications to the standard K-means algorithm
have been suggested, where initial clusters are chosen as separated as possible,
see Arthur and Vassilvitskii (2007); Ostrovsky et al. (2006). This improved ver-
sion of K-means is called K-means++.

In simulations, Distance Partitioning was compared to K-means (using Matlab’s
kmeans) and K-means++ (using an implementation available online Sorber
(2011)). The results show that K-means++ in fact outperforms K-means, however
it still fails to compute the correct partitions much too often, except in scenarios
with very low βFA,k . This can be attributed to the existence of counter-intuitive lo-
cal optima for the implicit cost function involved with K-means (or K-means++).
Distance Partitioning on the other hand can handle both high and low βFA,k and
always gives an intuitive and unique partitioning for a given d` .

Therefore, we argue that a hierarchical method, such as the suggested distance
partitioning, should be preferred over methods such as K-means. However, it is
important to note here again, that regarding partitioning of the measurement set,
the contribution of the current work lies mainly not in the specific method that

Figure 1: Set of Nz,k = 13 measurements, shown in two panels with position
in meters (X vs. Y). Left: The measurements partitioned using the suggested
distance partitioning method with a distance threshold of 25 m. Right: The
measurements partitioned using K-means clustering with K = 4.

is suggested, but more in showing that all possible partitions can efficiently be
approximated using a subset of partitions.

4.3 Sub-Partitioning
Initial results using et-gm-phd showed problems with underestimation of tar-
get set cardinality in situations where two or more extended targets are spatially
close Granström et al. (2010). The reason for this is that when targets are spa-
tially close, so are their resulting measurements. Thus, using Distance Partition-
ing, measurements from more than one measurement source will be included in
the same cell W in all partitions p, and subsequently the et-gm-phd filter will
interpret measurements from multiple targets as having originated from just one
target. In an ideal situation, where one could consider all possible partitions of
the measurement set, there would be alternative partitions which would contain
the subsets of a wrongly merged cell. Such alternative partitions would dominate
the output of the et-gm-phd filter towards the correct estimated number of tar-
gets. Since we eliminate such partitions completely using Distance Partitioning,
the et-gm-phd filter lacks the means to correct its estimated number of targets.
One remedy for this problem is to form additional sub-partitions after perform-
ing Distance Partitioning, and to add them to the list of partitions that et-gm-
phd filter considers at the current time step. Obviously, this should be done only
when there is a risk of having merged the measurements belonging to more than
one target, which can be decided based on the expected number of measurements
originating from a target. We propose the following procedure for the addition
of the sub-partitions.
Suppose that we have computed a set of partitions using Distance Partitioning,
cf. Algorithm 11. Then, for each partition generated by Distance Partitioning,
say for $p_i$, we calculate the maximum likelihood (ml) estimates $\hat{N}_x^j$

Algorithm 12 Sub-Partition

Require: Partitioned set of measurements $\mathcal{Z}_p = \{p_1, \ldots, p_{N_p}\}$, where $N_p$ is the
number of partitions.
1: Initialise: counter for new partitions $\ell = N_p$.
2: for $i = 1, \ldots, N_p$ do
3:   for $j = 1, \ldots, |p_i|$ do
4:     $\hat{N}_x^j = \arg\max_n \; p\left(W_j^i \,\middle|\, N_x^j = n\right)$
5:     if $\hat{N}_x^j > 1$ then
6:       $\ell = \ell + 1$ {Increase the partition counter}
7:       $p_\ell = p_i \setminus W_j^i$ {Current partition except the current cell}
8:       $\{W_k^+\}_{k=1}^{\hat{N}_x^j} = \text{split}\left(\hat{N}_x^j, W_j^i\right)$ {Split the current cell}
9:       $p_\ell = p_\ell \cup \{W_k^+\}_{k=1}^{\hat{N}_x^j}$ {Augment the current partition}
10:      end if
11:    end for
12: end for

of the number of targets for each cell $W_j^i$. If this estimate is larger than one, we
split the cell $W_j^i$ into $\hat{N}_x^j$ sub-cells, denoted as
$$\left\{W_s^+\right\}_{s=1}^{\hat{N}_x^j}. \qquad (21)$$
We then add a new partition, consisting of the new sub-cells along with the other
cells in $p_i$, to the list of partitions obtained by the Distance Partitioning.
We illustrate the sub-partitioning algorithm in Algorithm 12, where the splitting
operation on a cell is shown by a function
$$\text{split}\left(\hat{N}_x^j, W_j^i\right). \qquad (22)$$
We give the details for obtaining the ml estimate $\hat{N}_x^j$ and choosing the function
$\text{split}(\,\cdot\,,\,\cdot\,)$ in the subsections below.
Computing $\hat{N}_x^j$

For this operation, we assume that the function $\gamma(\,\cdot\,)$ determining the expected
number of measurements generated by a target is constant, i.e., $\gamma(\,\cdot\,) = \gamma$. Each
target generates measurements independently of the other targets, and the number
of measurements generated by each target is distributed according to the Poisson
distribution $\text{Pois}(\,\cdot\,, \gamma)$. The likelihood function for the number of targets
corresponding to a cell $W_j^i$ is given as
$$p\left(W_j^i \,\middle|\, N_x^j = n\right) = \text{Pois}\left(\left|W_j^i\right|, \gamma n\right). \qquad (23)$$

Here, we assumed that the volume covered by a cell is sufficiently small such that
the number of false alarms in the cell is negligible, i.e. there are no false alarms
in $W_j^i$. The ml estimate $\hat{N}_x^j$ can now be calculated as
$$\hat{N}_x^j = \arg\max_n \; p\left(W_j^i \,\middle|\, N_x^j = n\right). \qquad (24)$$

Note that other alternatives can be found for calculating the estimates of $N_x^j$, e.g.
utilizing specific knowledge about the target tracking setup; however, both
simulations and experiments have shown that the above suggested method works
well.
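A direct way to evaluate (24) is a small search over candidate target counts. The Python sketch below is our illustration; the search bound `n_max` and the example cell sizes are assumptions, and $\gamma$ is taken constant as in the text.

```python
import math

def poisson_pmf(m, rate):
    """Probability that a Poisson variable with the given rate equals m."""
    return math.exp(-rate) * rate ** m / math.factorial(m)

def ml_num_targets(cell_size, gamma, n_max=10):
    """Sketch of eq. (24): the ML estimate of how many targets produced a
    cell with `cell_size` measurements, assuming a known constant rate gamma
    per target and no clutter in the cell. The search bound n_max is an
    assumption of this sketch.
    """
    return max(range(1, n_max + 1),
               key=lambda n: poisson_pmf(cell_size, n * gamma))

# With gamma = 10, a cell of 19 measurements is best explained by 2 targets,
# so the cell would be split, while a cell of 10 measurements would not be.
n_hat = ml_num_targets(19, 10.0)
```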

The split( · , · ) Function

An important part of the Sub-Partition function in Algorithm 12 is the function
split( · , · ), which is used to divide the measurements in a cell into smaller cells.
In both simulations and experiments, we have used K-means clustering to split
the measurements in the cell; results show that this works well. Note however
that other methods to split the measurements are possible.

11 Remark (Limitations of Sub-Partitioning). Notice that the Sub-Partition algorithm
given in this section can be interpreted as only a first-order remedy to the problem, and
hence has limited correction capabilities. This is because we do not consider the combina-
tions of the cells when we are adding sub-partitions. In the case, for example, where there
are two pairs of close targets whose cells are merged wrongly by Distance Partitioning, the
sub-partitioning algorithm presented above would add an additional partition for each of
the target pairs (i.e. for each of the wrongly merged cells), but not an additional partition
that contains the split versions of both cells. Consideration of all combinations of (the
wrongly merged) cells again seems to be prohibitive, due to the combinatorial growth in
the number of additional partitions. An idea for the cases where there can be more than
one wrongly merged cell is to add a single additional partition, which contains split ver-
sions of all such cells.

5 Target Tracking Setup

The presented tracking approach is exemplified with a laser sensor tracking
humans at short distance. In this section the tracking setup is defined for both
a pure simulation environment and an experimental realisation with laser data.
The targets are modeled as points with state variables
$$x_k = \begin{bmatrix} x_k & y_k & v_k^x & v_k^y \end{bmatrix}^T, \qquad (25)$$
where $x_k$ and $y_k$ are the planar position coordinates of the target, and $v_k^x$ and $v_k^y$
are the corresponding velocities. The sensor measurements are given in batches
of Cartesian $x$ and $y$ coordinates as follows:
$$z_k^{(j)} \triangleq \begin{bmatrix} x_k^{(j)} & y_k^{(j)} \end{bmatrix}^T. \qquad (26)$$


Figure 2: Birth intensity used in experiments.

Table 1: Parameter values used for simulations (S) and experiments (E).

        T     Q_k      R_k        γ^(i)   w_β     Q_β
    S   1     2² I₂    20² I₂     10      0.05    blkdiag(100 I₂, 400 I₂)
    E   0.2   2² I₂    0.1² I₂    12      0.01    0.01 I₄

A constant velocity model Rong Li and Jilkov (2003), with sampling time T is
used. In all simulations the probability of detection and probability of survival
are set as pD = 0.99 and pS = 0.99, respectively. The algorithm parameters for
the simulation and experiment are given in Table 1. The surveillance area is
[−1000, 1000](m) × [−1000, 1000](m) for the simulations, and for the real data ex-
periments the surveillance area is a semi circle located at the origin with range
13 m. In the simulations, clutter was generated with a Poisson rate of 10 clutter
measurements per scan, and (unless otherwise stated) each target generated mea-
surements with a Poisson rate of 10 measurements per scan. The birth intensity
in the simulations is
$$D_b(x) = 0.1\,\mathcal{N}(x \,;\, m_b, P_b) + 0.1\,\mathcal{N}(x \,;\, -m_b, P_b), \qquad (27a)$$
$$m_b = [250, 250, 0, 0]^T, \qquad P_b = \mathrm{diag}([100, 100, 25, 25]). \qquad (27b)$$
For the experiments, the birth intensity Gaussian components are illustrated with their corresponding one standard deviation ellipsoids in Fig. 2. Each birth intensity component has a weight $w_b^{(j)} = 0.1/J_b$, where the number of components is $J_b = 7$. The spawn intensity is

$D_\beta(x|y) = w_\beta\, \mathcal{N}(x;\, \xi, Q_\beta)$,   (28)

where ξ is the state of the target from which the new target is spawned, and the values of $w_\beta$ and $Q_\beta$ are given in Table 1.
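As a concrete illustration of the setup above, the constant velocity model and the Poisson measurement generation can be sketched as follows. The matrix form of the motion model, and the mapping of Table 1's $Q_k = 2^2 I_2$ onto a white-noise acceleration covariance, are our assumptions; the paper does not write them out.

```python
import numpy as np

def cv_model(T):
    """Constant velocity model for the state [x, y, vx, vy] in (25)."""
    F = np.array([[1.0, 0.0, T,   0.0],
                  [0.0, 1.0, 0.0, T],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    # Assumed white-noise acceleration model with Q_k = 2^2 I_2 (Table 1)
    G = np.array([[T**2 / 2, 0.0],
                  [0.0, T**2 / 2],
                  [T,   0.0],
                  [0.0, T]])
    Q = G @ (2.0**2 * np.eye(2)) @ G.T
    return F, Q

def simulate_extended_target_scan(x, gamma, R, rng):
    """One scan from an extended target: a Poisson-distributed number of
    noisy point measurements of the target position (covariance R)."""
    n = rng.poisson(gamma)
    return x[:2] + rng.multivariate_normal(np.zeros(2), R, size=n)

rng = np.random.default_rng(0)
F, Q = cv_model(T=1.0)                        # simulation sampling time, Table 1
x = np.array([250.0, 250.0, -5.0, 0.0])       # a target near the birth mean m_b
Z = simulate_extended_target_scan(x, gamma=10, R=20.0**2 * np.eye(2), rng=rng)
x = F @ x                                     # one noise-free prediction step
```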

6 Simulation Results
This section presents the results from simulations using the presented extended target tracking method. In Section 6.1 a comparison of the partitioning methods is presented; the results show the increased performance obtained with Sub-Partition. A comparison between et-gm-phd and gm-phd is presented in Section 6.2, where it is shown that et-gm-phd, as expected, outperforms gm-phd for extended targets. Section 6.3 presents a comparison of et-gm-phd and gm-phd for targets that give rise to at most one measurement per time step. Finally, the effects of the possibly unknown parameter γ are investigated in detail in Section 6.4.

6.1 Partitioning Methods


As was noted in Section 4.3, as well as in previous work Granström et al. (2010),
using only Distance Partitioning to obtain a subset of all possible partitions is
insufficient when the extended targets are spatially close. For this reason, Sub-
Partition was introduced to obtain more partitions. In this section, we present the
results from simulations that compare the performance of et-gm-phd tracking
with partitions computed using only Distance Partitioning and with partitions
computed using Distance Partitioning and Sub-Partition. Two scenarios are con-
sidered, both with two targets. The true x, y positions and the distance between
the targets are shown in Fig. 3a and Fig. 3b. At the closest points the targets were
60m and 50m apart, respectively. In the simulations, the targets were modeled
as points that generated measurements with standard deviation of 20m in both
x and y direction. Thus, a measure of target extent can be taken as the two stan-
dard deviation measurement covariance ellipses, which here are circles of radius
40m. In both scenarios these circles partly overlap when the targets are closest to
each other.
Each scenario was simulated 100 times with a constant expected number of mea-
surements per target (γ( · ) = γ) of 5, 10 and 20, respectively. Fig. 4 shows the
resulting sum of weights of the et-gm-phd algorithm. As can be seen, using Sub-
Partition the average sum of weights is closer to the true number of targets. This
is especially clear for targets that generate more measurements per time step, i.e.
when γ is higher.

6.2 Comparison with GM-PHD


This section presents results from a simulation comparison of et-gm-phd and gm-phd. Note here that the gm-phd filter is applied naively to the simulated measurements, i.e. it is applied under the (false) assumption that each target produces at most one measurement per time step. The true targets are shown in black in Figure 5a. Around time 50–52, two target tracks cross at a distance of just over 50 m, and at time 67 a new target is spawned 20 m from a previous one. Thus the scenario presents challenges that are typical of multiple target applications.
To compare the et-gm-phd filter with the gm-phd filter, 100 Monte Carlo simulations were performed, each with new measurement noise and clutter measurements. The results are shown in Figure 5b and Figure 5c, which show the corresponding multi-target measure, the optimal subpattern assignment metric (ospa) Schuhmacher et al. (2008), and the cardinality, respectively. In the ospa metric the parameters are set to p = 2, corresponding to using the 2-norm, which is a standard choice, and c = 60, corresponding to a maximum error equal to three measurement noise standard deviations. Here, the cardinality is computed as $\sum_{j=1}^{J_{k|k}} w_{k|k}^{(j)}$. This sum can be rounded to obtain an integer estimate of the target number Vo and Ma (2006).

[Figure 3: two panels (a) and (b), each showing x [m], y [m] and the inter-target distance [m] over time 0–100.]

Figure 3: Two simulation scenarios with spatially close targets. To the left, in (a), is a scenario where the two targets move closer to each other and then stand still at a distance of 60 m apart. Note that the true y position was 300 m for both targets for the entire simulation. To the right, in (b), is a scenario where the two targets cross paths; at the closest point they are 50 m apart. The top and middle rows show the true x and y positions over time as a gray solid line and a black dash-dotted line. The light gray shaded areas show the target extent, taken as two measurement noise standard deviations (40 m). The bottom row shows the distance between the two targets over time.

[Figure 4: six panels (a)–(f), each showing the sum of weights over time for γ = 5, 10, 20.]

Figure 4: Simulation results for the two scenarios in Fig. 3, comparing different partitioning methods for different values of the expected number of measurements γ. The top row, (a), (b) and (c), is for the true tracks in Fig. 3a. The bottom row, (d), (e) and (f), is for the true tracks in Fig. 3b. Black shows Distance Partitioning with Sub-Partition, gray is only Distance Partitioning. It is clear from the plots that using Sub-Partition gives significantly better results, especially when γ is higher.

[Figure 5: three panels showing (a) the true tracks, (b) ospa over time and (c) cardinality over time, for ET-GM-PHD and GM-PHD against ground truth.]

Figure 5: Results from multiple target tracking using synthetic data. (a) The true x and y positions are shown in black, the light gray shaded areas show the target extent, taken as two measurement noise standard deviations (40 m). (b) Mean ospa (solid lines) ±1 standard deviation (dashed lines). (c) Mean cardinality compared to the true cardinality.
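For reference, the ospa metric used here can be computed as in the sketch below, which follows the definition in Schuhmacher et al. (2008). The brute-force assignment over permutations is our simplification, feasible only for small target sets; it is not code from the paper.

```python
from itertools import permutations
import math

def ospa(X, Y, c=60.0, p=2.0):
    """OSPA distance between two finite sets of 2D points, cut-off c, order p."""
    if len(X) > len(Y):
        X, Y = Y, X                                # ensure |X| <= |Y|
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    dc = lambda a, b: min(c, math.dist(a, b))      # cut-off distance
    # Best assignment of the m points in X to points in Y (brute force)
    best = min(sum(dc(x, y) ** p for x, y in zip(X, perm))
               for perm in permutations(Y, m)) if m > 0 else 0.0
    # Each cardinality mismatch is penalised at the maximum distance c
    return ((best + c ** p * (n - m)) / n) ** (1.0 / p)
```

With p = 2 and c = 60 as in the text, a completely missed single target costs the full cut-off, ospa = 60.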

It is evident from the two figures that the presented et-gm-phd filter significantly outperforms the standard gm-phd filter, which does not take into account the possibility of multiple measurements from a single target. The main difference between the two filters is the estimation of cardinality, i.e. the number of targets. The et-gm-phd-filter correctly estimates the cardinality except when the new target is spawned: after time 67 there is a small dip in the mean estimated cardinality, even though Sub-Partition is used. The reason for this is that the targets are 20 m apart, which is smaller than the effective target extent determined by the measurement standard deviation (20 m). As in the previous section, assuming that the effective target extent is two standard deviations, which corresponds to circular targets of 40 m radius, at 20 m distance the measurements overlap significantly and the probability that the new target's measurements were also generated by the old target, as computed in (12e), is large. As the targets separate, this probability decreases and the et-gm-phd filter recovers the correct cardinality. It should still be noted that, in reality, where the targets would probably be rigid bodies, this type of very close situation is highly unlikely, and the results of the et-gm-phd filter with Sub-Partition would be close to those presented in Section 6.1.

Similar simulations have been performed comparing Distance Partitioning with K-means clustering. Over 1000 Monte Carlo simulations, the tracking results in terms of mean ospa and mean cardinality are significantly better for Distance Partitioning, mainly due to the problem with K-means clustering shown in Fig. 1.

6.3 Standard Single Measurement Targets


This section investigates how et-gm-phd handles standard targets that produce at most one measurement per time step, in comparison to standard gm-phd, which is designed under the standard target measurement generation assumption. Note that the measurement set cardinality distribution (i.e. the probability mass function for the number of measurements generated by a target) for a standard target contains only a single nonzero element² at cardinality 1, which is impossible to model with the Poisson distribution underlying the et-gm-phd filter. Hence, the case where each target generates measurements whose number is Poisson distributed with rate γ = 1 is very different from the standard target measurement generation.
Four targets were simulated in 100 Monte Carlo simulations, and all the targets were separated, i.e. there were no track crossings and no target spawning. In the et-gm-phd filter, the parameters $\gamma^{(j)}$ are initially all set to $\gamma^{(j)} = 1$ in the comparison. The average sum of weights and the average number of extracted targets (obtained by rounding the weights to the nearest integer) for the algorithms are shown in Fig. 6a and Fig. 6b, respectively. As is shown, the sum of weights and the number of extracted targets are too large for the et-gm-phd filter. The reason for this is that when the expected number of measurements per target (i.e. $\gamma^{(j)}$) is small, the effective probability of detection³

$p_{D,\text{eff}}^{(j)} = \left(1 - e^{-\gamma^{(j)}}\right) p_D^{(j)}$   (29)

becomes significantly smaller than one. For example, the case $\gamma^{(j)} = 1$ and $p_D^{(j)} = 0.99$ gives $p_{D,\text{eff}}^{(j)} = 0.6258$. This low effective probability of detection is what causes the weights in the et-gm-phd filter to become too large.
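The quoted values follow directly from (29), as this small check shows:

```python
import math

def effective_pd(gamma, pd=0.99):
    """Effective detection probability (29): the target must generate at least
    one measurement (Poisson rate gamma), which must then be detected (pd)."""
    return (1.0 - math.exp(-gamma)) * pd

p1 = effective_pd(1)   # approximately 0.6258
p2 = effective_pd(2)   # approximately 0.8560
```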
Actually, this problem has been seen to be inherited by the et-gm-phd filter from the standard phd filter. We here give a simple explanation of the problem with low (effective) probability of detection in the phd filter. Assuming no false alarms, and a single target with existence probability $p_E$, ideally a single detection should cause the expected number of targets to be unity. However, applying the standard phd formulae to this simple example, one can calculate the updated expected number of targets to be $1 + p_E(1 - p_D)$, whose positive bias increases as $p_D$ decreases. We have seen that when the (effective) probability of detection is
²Note that a standard target always generates a single measurement. Whether no measurement or a single measurement is obtained from the standard target is determined by the detection process.
³More correctly, $p_{D,\text{eff}}^{(j)}$ in (29) is the probability of the event that at least one measurement from the (jth) target is obtained by the sensor.

[Figure 6: four panels, (a) sum of weights and (b) number of extracted targets for γ^(j) = 1; (c) sum of weights and (d) number of extracted targets for γ^(j) = 2; each compares ET-GM-PHD, GM-PHD and the true value over time.]

Figure 6: Simulation results, comparison of et-gm-phd and gm-phd for standard targets that produce at most one measurement per time step. The top row shows results when the parameter $\gamma^{(j)}$ is set to one, the bottom row shows results when it is set to two. Due to the low effective probability of detection, the et-gm-phd weights become too large, resulting in a sum of weights larger than the true number of targets. When each weight is rounded to the nearest integer to extract targets, $\gamma^{(j)} = 2$ gives the correct number of extracted targets.

low, the increase in $\sum_{j=1}^{J_{k|k}} w_{k|k}^{(j)}$ is a manifestation of this type of sensitivity of the phd type filters.⁴ A similar sensitivity issue is mentioned in Erdinc et al. (2009) for the case of no detection.
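The figure $1 + p_E(1 - p_D)$ quoted above can be verified from the standard phd update equation; the derivation below is our own reconstruction for a prior phd $D(x) = p_E\, p(x)$, a single detection $z$, and no clutter ($\kappa(z) = 0$):

```latex
D_{k|k}(x) = (1 - p_D)\, D(x)
  + \frac{p_D\, g(z|x)\, D(x)}{\kappa(z) + \int p_D\, g(z|\xi)\, D(\xi)\, \mathrm{d}\xi}
```

Integrating over $x$ gives the updated expected number of targets:

```latex
\hat{N}_{k|k} = \int D_{k|k}(x)\, \mathrm{d}x
  = p_E (1 - p_D)
  + \frac{p_E\, p_D \int g(z|x)\, p(x)\, \mathrm{d}x}{p_E\, p_D \int g(z|\xi)\, p(\xi)\, \mathrm{d}\xi}
  = 1 + p_E (1 - p_D),
```

which is unbiased only when $p_D = 1$.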
This problem can be overcome by increasing $\gamma^{(j)}$ slightly in the et-gm-phd filter; e.g. $\gamma^{(j)} = 2$ gives $p_{D,\text{eff}}^{(j)} = 0.8560$, which gives a sum of weights and number of extracted targets that better match the results from gm-phd, see Fig. 6c and Fig. 6d. Using $\gamma^{(j)} = 3$ gives results that are more or less identical to gm-phd. Thus, a conclusion that can be drawn is that when tracking standard targets with an et-gm-phd filter, the parameter $\gamma^{(j)}$ should not be set too small. The following subsection investigates the selection of the parameter γ in more detail.

6.4 Unknown Expected Number of Measurements γ


In the simulations above, the parameters γ = γ (j) was assumed to be known a
priori. Further, in Section 4.3 where Sub-Partitioning is presented, the knowledge
of the Poisson rate γ was used to determine whether a cell should be split or not
4 Some extreme versions of this phenomenon for lower P values are illustrated and investigated
D
in detail in the recent work Orguner et al. (2011).
246 Paper E Extended Target Tracking Using a Gaussian-Mixture PHD Filter

to create an additional partition. In this section, some scenarios where γ is not


known a priori are investigated. For the sake of clarity, γ is used to denote the
true Poisson rate with which measurements were generated, and γ̂ is used to
denote the corresponding parameter in the et-gm-phd-filter.
In many real world scenarios, the number of measurements generated by a target depends on the distance between the target and the sensor. Typically, the longer the distance, the lower the number of measurements generated by the target. This is true for sensors such as the laser range sensor, the radar sensor and even cameras. Thus, it is of interest to evaluate the et-gm-phd-filter in a scenario where the number of generated measurements varies with the target to sensor distance. This is simulated below under "Distance Varying γ", where the et-gm-phd filter is compared for the cases when the parameter γ̂ is constant and when it is modeled as distance varying. "Incorrect γ Parameter" presents results from simulations where the parameter γ̂ is set incorrectly, and "Targets of Different Size" presents results with targets of different sizes. Finally, "Discussion" summarizes the results from the simulations and supplies some guidelines for the selection of γ̂.
Distance Varying γ
A scenario was constructed where a target moved such that the target to sensor distance first decreased, and then increased. The sensor was simulated such that the true parameter γ depended on the target to sensor distance ρ as follows:

$\gamma(\rho) = \begin{cases} 1, & \rho > 400\,\text{m} \\ \lfloor -0.08\rho + 33.5 \rfloor, & 100\,\text{m} \le \rho \le 400\,\text{m} \\ 25, & \rho < 100\,\text{m} \end{cases}$   (30)

where $\lfloor \cdot \rfloor$ is the floor function. Thus, at distances larger than 400 m, with $p_D^{(j)} = 0.99$, the effective probability of detection is only 0.6258 (as in the previous subsection). Note that the scenario is different from a target that always generates one measurement, which is detected with probability $p_D^{(j)} = 0.99$.
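The piecewise rate (30) is straightforward to implement; note that it is consistent at the breakpoints, since $\lfloor -0.08 \cdot 400 + 33.5 \rfloor = 1$ and $\lfloor -0.08 \cdot 100 + 33.5 \rfloor = 25$:

```python
import math

def gamma_of_range(rho):
    """Distance-dependent Poisson rate (30) used to simulate the sensor."""
    if rho > 400.0:
        return 1
    if rho >= 100.0:
        return math.floor(-0.08 * rho + 33.5)
    return 25
```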
Monte Carlo simulations were made with two et-gm-phd-filters: one with a constant value γ̂ = 10, and another where γ̂ was selected to be dependent on the state of the targets via the function (30). The results are shown in Figure 7. For constant γ̂, the number of targets is underestimated when the true γ is low. This is due to the fact that the filter expects a target to generate more measurements, and thus the likelihood that some small number of measurements are all clutter is higher. However, at distances ρ such that γ(ρ) > 5, γ̂ = 10 works quite well. When the model (30) for distance dependent γ is assumed known, the results are much more reasonable and acceptable. The only, and possibly negligible, drawback seems to be that the number of targets is slightly overestimated. There are two main reasons for this. The first reason is the low effective probability of detection when γ̂ is low; when γ̂ becomes smaller than 5, this behavior is more evident. The second reason is that the clutter falling into the region ρ > 400 m (i.e., where the true parameter is γ = 1) is interpreted as targets to some extent, which causes a positive, though small, bias in the estimation of the number of targets. In that region, the target behavior is fairly similar to the clutter behavior, which results in some Gaussian components with small weights surviving until the situation is resolved.

[Figure 7: two panels, (a) the true γ over time and (b) the sum of weights over time for constant and distance dependent γ̂.]

Figure 7: Results from the simulation scenario where γ is dependent on the target to sensor distance. In (a), the true γ is plotted against time, and in (b) the mean sum of weights is plotted against time. The et-gm-phd-filter is compared for the cases when the parameter γ̂ is held constant (gray) or is set to the true distance dependent value (black). The correct target number is one, thus the sum of weights should be around one. In both cases, at the beginning and the end of the scenario, when the distance is largest and γ = 1, tracking performance gets worse.
Incorrect γ Parameter

In this simulation study, the target tracks in Figure 3b were used. Each target generated measurements with a Poisson rate of γ = 20, and eleven different et-gm-phd-filters, each using a different γ̂ value, were run. The set of γ̂ values used is given as

γ̂ = 10, 12, . . . , 28, 30.   (31)

The results, in terms of the sum of weights averaged over the Monte Carlo runs, are shown in Figure 8. The figure shows that for sufficiently separated targets, the et-gm-phd-filter correctly estimates the number of targets for all values of γ̂. However, for spatially close targets, the et-gm-phd-filter overestimates the number of targets when γ̂ is set too low, and underestimates the number of targets when γ̂ is set too high. This result is actually expected, and is a direct consequence of the simple sub-partitioning algorithm we use. When γ̂ is too low, Sub-Partition creates an additional partition with too many cells, causing the overestimation of the number of targets. Conversely, when γ̂ is too high, Sub-Partition does not create partitions with a sufficient number of cells, causing the underestimation of the number of targets. It is very important to note here that the Sub-Partition operation actually runs even when the targets are well separated, and does not cause any overestimation there. Our observations show that this is a result of the fact that the additional partitions created (when γ̂ is selected too low) cannot win over single target partitions when the target's measurements are distributed in a blob shape. It is only when the two targets approach each other, resulting in an eight-shaped cluster of measurements, that the additional partition can gain dominance over the single target partition. This property, though not proved mathematically, is considered to be a manifestation of the Poisson property and the Gaussianity assumptions underlying the measurements.

[Figure 8: surface plot of the sum of weights over time for γ̂ = 10, . . . , 30.]

Figure 8: Simulation results for various values of the et-gm-phd-filter parameter γ̂. There are two targets; the true Poisson rate used to generate measurements for both targets was γ = 20.
If the cardinality estimates of the algorithms are rounded to the nearest integer, an interesting property observed in Figure 8 is that no cardinality error appears for the cases that satisfy

$\hat{\gamma} - \sqrt{\hat{\gamma}} \le \gamma \le \hat{\gamma} + \sqrt{\hat{\gamma}}.$   (32)

Thus, when the true parameter γ lies in the interval determined by the mean (γ̂) ± one standard deviation ($\sqrt{\hat{\gamma}}$), the cardinality is expected to be estimated correctly even for close targets.
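Checking (32) numerically over the grid (31) with the true rate γ = 20 shows which filter parameters are expected to give error-free cardinality estimates:

```python
import math

def cardinality_ok(gamma_true, gamma_hat):
    """Condition (32): true rate within one std dev of the assumed Poisson rate."""
    s = math.sqrt(gamma_hat)
    return gamma_hat - s <= gamma_true <= gamma_hat + s

# gamma_hat values from (31) that satisfy (32) for gamma = 20
ok = [g for g in range(10, 31, 2) if cardinality_ok(20, g)]  # [16, 18, 20, 22, 24]
```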
Targets of Different Size
In many scenarios, it is possible to encounter multiple targets of different sizes, producing different numbers of measurements. This means that two targets would not have the same Poisson rate γ. In this section, results are presented for a scenario with two targets with measurement generating Poisson rates of 10 and 20, respectively. In Monte Carlo simulations, three et-gm-phd-filters were run with the parameter γ̂ set to 10, 15 and 20, respectively. This corresponds to using either the true value of the smaller target, the mean of both, or the true value of the larger target. The results, in terms of average sum of weights over time, are shown in Figure 9. When the targets are spatially separated, all three filters perform equally well. However, when the targets are spatially close, the filter with γ̂ set to the mean of the true γ values performs better than the others.

[Figure 9: sum of weights over time for the filter parameter γ̂ = 10, 15 and 20.]

Figure 9: Simulation results from a scenario with two targets of different sizes. The two targets have the true Poisson rates γ = 10 and γ = 20, respectively. The legend refers to the filter parameter γ̂.

Discussion
The simulation results above show that the et-gm-phd-filter works well even when γ̂ ≠ γ, except when γ < 5 or when targets are spatially close. For γ < 5, the filter is more sensitive, and a correct value for γ̂ becomes increasingly important. For targets that are spatially close, it is important that γ̂ is a good estimate of γ, since the Sub-Partition algorithm relies on γ̂. When such a good estimate is unavailable, a more advanced sub-partitioning algorithm seems to be necessary for robustness. With the sub-partitioning procedure proposed in this work, our findings support the intuitive conclusion that, for reasonable performance with close targets, the true parameter γ should lie within the one standard deviation region around the mean γ̂ of the Poisson distribution.
The simulation with different target sizes shows that the close target case in this
example is harder to tackle than the others. A possible solution is to adaptively es-
timate the parameters γ̂ for each Gaussian mixture component based on the pre-
vious measurements. Another solution, which is possibly more straightforward,
is to use a state dependent γ̂ parameter, where the state contains information
about the target extent, which can readily be estimated, see e.g. Granström et al.
(2011b); Lundquist et al. (2011); Baum et al. (2010); Baum and Hanebeck (2011);
Zhu et al. (2011). Using the estimated shape and size, and a model of the sensor
that is used, γ̂ can then be estimated with reasonable accuracy. This has indeed
recently been performed using an et-gm-phd-filter Granström et al. (2011b).

7 Experiment Results
This section presents results from experiments on two data sets acquired with a laser range sensor. The experiments are included more as a proof of concept and a potential application than as an exhaustive evaluation of the presented target tracking filter. The measurements were collected using a sick lms laser range sensor, which measures range every 0.5° over a 180° surveillance area. Ranges shorter than 13 m were converted to (x, y) measurements using a polar to Cartesian transformation.
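The described preprocessing can be sketched as below. The 0.5° spacing, 180° field of view and 13 m gate are from the text; the assumption that the first beam points along the positive x axis is ours.

```python
import math

def scan_to_cartesian(ranges, max_range=13.0, fov_deg=180.0):
    """Convert one laser sweep (one range per beam, equally spaced over
    fov_deg) to a batch of (x, y) measurements as in (26)."""
    points = []
    n = len(ranges)
    for i, r in enumerate(ranges):
        if r >= max_range:                           # beams beyond 13 m are discarded
            continue
        phi = math.radians(i * fov_deg / (n - 1))    # bearing of beam i
        points.append((r * math.cos(phi), r * math.sin(phi)))
    return points
```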

The two data sets contain 411 and 400 laser range sweeps, respectively. During the data collection, humans moved through the surveillance area, entering it at different times. The laser sensor was at the waist level of the humans.
Since there is no ground truth available, it is difficult to obtain a definite measure of target tracking quality; however, by examining the raw data we were able to observe the true cardinality, which can be compared to the estimated cardinality.
Section 7.1 presents results from an experiment with spatially close targets, and
Section 7.2 presents results from an experiment with both spatially close targets
and occlusion.

7.1 Experiment with Close Targets


In this experiment, a data set containing 411 laser range scans was used. The data set contains two human targets that repeatedly move towards and away from each other, moving right next to each other several times. The two targets passed each other at close distance moving in opposite directions, representing instances in time when the targets were close for short periods of time. The targets also moved close to each other in the same direction, representing instances in time when the targets were close for longer periods of time.
The locations of the extracted Gaussian components are shown in Fig. 10a, the number of extracted targets is shown in Fig. 10b and the sum of weights is shown in Fig. 10c. Around time 320 there is a decrease in the number of extracted targets for three time steps; in all other situations the filter handles the two targets without problems. Thus, using Sub-Partition with K-means as the split(·, ·) function, the et-gm-phd filter can be said to handle most spatially close target cases.

7.2 Experiment with Occlusion


In this experiment, a data set containing 400 laser range scans was used. The data set contains four human targets that move through the surveillance area; however, there are at most three targets present at any one time. The first target enters the surveillance area at time k = 22 and proceeds to the center of the surveillance area, where he remains still for the remainder of the experiment. The second target enters the surveillance area at time k = 38 and repeatedly moves in front of and behind the first target. The third target enters and exits at times k = 283 and k = 310, respectively. The last target enters and exits at times k = 345 and k = 362, respectively.
This case requires a state dependent (i.e. variable) probability of detection pD(·) for the targets. Otherwise, i.e. with a constant probability of detection assumption, an occluded target would be interpreted as having exited the surveillance area, when it has in fact only disappeared behind another target. The variable pD is modeled as a function of the means, covariances and weights of the Gaussian components. The intuition behind this idea is that the knowledge of the targets that are present, i.e. the knowledge of the estimated Gaussian components of the phd, can be used to determine which parts of the surveillance area are likely to be in view of the sensor, and which parts are not. Leaving the details of the variable pD calculation to Appendix A.2, we present the results below.

[Figure 10: three panels showing (a) the locations of the extracted Gaussian components in the x–y plane, (b) the number of extracted targets over time, and (c) the sum of weights over time.]

Figure 10: Experiment results, two human targets moving close to each other. Note that in (a) the gray scale indicates the time line; lighter gray is earlier time steps, darker is later time steps. In (b), the number of extracted targets (black) is compared to the ground truth (gray). In (c) the sum of weights is shown. Around time 320 the cardinality is underestimated for three time steps.


The locations of the extracted Gaussian components are shown in Fig. 11a, the
number of the extracted targets is shown in Fig. 11b and the sum of weights are
shown in Fig. 11c. In total, there are six situations where one target is occluded
by another. The extracted number of targets is wrong in two of these situations,
where the occluded target is spatially very close to (right next to) the target which
is causing the occlusion. The et-gm-phd filter correctly estimates the cardinality
in four out of six occlusions.
Thus, using the suggested variable pD , the filter can correctly predict the target
while it is occluded, provided that it is not very close to another target while the
PJk|k (j)
occlusion is happening. If j=1 wk|k is rounded to the nearest integer there is no
cardinality error for the first four occlusions. However, as the target exits the oc-
PJk|k (j)
cluded area there is a “jump” in j=1 wk|k around times k = 75, k = 125, k = 175
and k = 210, see Fig. 11c. We have seen that this “jumping” behavior is caused by
the sensitivity of the cardinality estimates of the PHD filter to detections when
(j)
pD is set to a low value, which is the case when the target is half occluded while
it gets out of the occluded region. This is the same phenomenon observed with
low effective probability of detection in Section 6.3.

8 Conclusions and Future Work


In this paper a Gaussian mixture implementation of the probability hypothesis density filter for tracking extended targets was presented. It was shown that not all possible partitions of the measurement set need to be considered; instead, it is sufficient to consider a subset of partitions, as long as this subset contains the most probable partitions. A simple method for finding this subset of all measurement set partitions was described. This partitioning method is complemented with a sub-partitioning strategy to better handle cases that involve close targets. Simulations and experiments have shown that the proposed filter is capable of tracking extended targets in cluttered measurements. The number of targets is estimated correctly even in most of the cases where tracks are close. The detailed investigations carried out gave some guidelines for the selection of the Poisson rate for cases when it is unknown. Using inhomogeneous detection probabilities in the surveillance region, it was shown that targets can be tracked as they move through occluded parts of the surveillance area.
In recent work, a cardinalized phd filter Mahler (2007) for extended targets has been presented Orguner et al. (2011). This filter gives less sensitive estimates of the number of targets. Initial steps have also been taken towards including estimation of the target extent in the et-gm-phd-filter Granström et al. (2011b). More work is needed in both of these research directions.
A further interesting research direction is to investigate the potential use of the partitioning algorithms presented in this work with more conventional multiple target tracking algorithms. A comparison of such algorithms with the et-gm-phd filter could illustrate the advantage coming from the use of the random set framework.

[Figure 11: three panels showing (a) the locations of the extracted Gaussian components in the x–y plane, (b) the number of extracted targets over time, and (c) the sum of weights over time.]

Figure 11: Experiment results, targets moving in and out of occluded regions of the surveillance area. Note that in (a) the gray scale indicates the time line; lighter gray is earlier time steps, darker is later time steps. In (b), the number of extracted targets (black) is compared to the ground truth (gray). In (c) the sum of weights is shown.

Algorithm 13 Find partition p that satisfies the conditions of Theorem 10

Require: Set of measurements Z = {z^(1), z^(2), ..., z^(N_z)}, where N_z is the
number of measurements.
1: Set p^0 = { {z^(1)}, {z^(2)}, ..., {z^(N_z)} }, i.e., set W_j^0 = {z^(j)} for
   j = 1, ..., N_z.
2: Set i = 1.
3: Calculate all the pairwise distances between the cells of p^{i-1} as

       η_st^{i-1} = min_{z^(m) ∈ W_s^{i-1}, z^(n) ∈ W_t^{i-1}} d(z^(m), z^(n))    (33)

4: If min_{1 ≤ s,t ≤ |p^{i-1}|} η_st^{i-1} > d_ℓ, then stop the algorithm, since
   p^{i-1} is a partition satisfying the conditions of the theorem.
5: Otherwise, combine all cells that satisfy η_st^{i-1} ≤ d_ℓ to form a single cell.
6: Set p^i to be the set of cells obtained in Step 5.
7: Set i = i + 1 and go to Step 3.
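The iterative merging in Algorithm 13 can be sketched as follows; the function and variable names are illustrative, and Euclidean distance stands in for the generic distance d( · , · ):

```python
import math

def distance_partition(Z, d_ell):
    """Partition measurements so that two measurements share a cell iff they
    are connected by a chain of pairwise distances <= d_ell, following the
    construction in Algorithm 13 (illustrative sketch, not the authors' code)."""
    # Step 1: start from singleton cells W_j = {z_j}.
    cells = [[z] for z in Z]
    while True:
        merged = False
        # Step 3: cell distance eta_st = min over cross pairs of the two cells.
        for s in range(len(cells)):
            for t in range(s + 1, len(cells)):
                eta = min(math.dist(zm, zn)
                          for zm in cells[s] for zn in cells[t])
                # Step 5: combine cells closer than the threshold d_ell.
                if eta <= d_ell:
                    cells[s].extend(cells[t])
                    del cells[t]
                    merged = True
                    break
            if merged:
                break
        if not merged:  # Step 4: all cell distances exceed d_ell -> stop.
            return cells
```

Each merge reduces the number of cells by one, so the loop terminates after at most N_z − 1 merges.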

A Appendix

A.1 Proof of Theorem 10

The proof is composed of two parts.

• We first prove that there is a partition satisfying the conditions of the theorem. The proof is constructive. Consider Algorithm 13. In the algorithm, one first forms a partition consisting of singleton sets of the individual measurements, and then combines the cells of this partition until the conditions of the theorem are satisfied.

• We then prove that the partition satisfying the conditions of the theorem is unique. The proof is by contradiction. Suppose that there are two partitions p^i and p^j satisfying the conditions of the theorem. Then there must be a measurement z_m ∈ Z such that the cells W^i_{m_i} ∋ z_m and W^j_{m_j} ∋ z_m are different, i.e., W^i_{m_i} ≠ W^j_{m_j}. This requires an additional measurement z_n ∈ Z that is in one of W^i_{m_i}, W^j_{m_j} but not in the other. Without loss of generality, suppose z_n ∈ W^i_{m_i} and z_n ∉ W^j_{m_j}. Then, since z_n ∈ W^i_{m_i}, we know that d(z_m, z_n) ≤ d_ℓ. However, this contradicts the fact that z_n ∉ W^j_{m_j}, which means that p^j does not satisfy the conditions of the theorem. Then our initial assumption that there are two partitions satisfying the conditions of the theorem must be wrong. The proof is complete.

A.2 Variable Probability of Detection for the Laser Sensor

With the variable probability of detection function we reduce p_D behind (i.e., at larger range from the sensor) each component of the phd, according to the weight and bearing standard deviation of the Gaussian components.

For a given point x in the surveillance area, the probability of detection p_D(x) is computed as

    p_D(x) = max( p_{D,min}, p_D^v )
    p_D^v = p_{D,0} − Σ_{i: r > r^(i)} w^(i) ( σ_s / σ̄_{ϕ^(i)} ) exp( −(ϕ − ϕ^(i))² / (2 (σ̄_{ϕ^(i)})²) )    (34)
where
• pD,min is the minimum probability of detection value a target can have;
• pD,0 is the nominal probability of detection of the targets when they are not
occluded;
• r and ϕ are the range and bearing of point x with respect to the sensor
respectively;
• r (i) and ϕ (i) are the range and bearing of the ith Gaussian component with
respect to the sensor respectively;
• w(i) is the weight of the ith component;
• σ̄_{ϕ^(i)} is defined as

      σ̄_{ϕ^(i)} ≜ σ_max, if σ_{ϕ^(i)} > σ_max;  σ_min, if σ_{ϕ^(i)} < σ_min;  σ_{ϕ^(i)}, otherwise    (35)

  where σ_{ϕ^(i)} is the bearing standard deviation of the ith component, given as

      σ_{ϕ^(i)} ≜ sqrt( (u_{ϕ^(i)})^T P_p^(i) u_{ϕ^(i)} )    (36)

  Here, P_p^(i) is the position covariance of the ith component and u_{ϕ^(i)} is the unit vector orthogonal to the range direction from the ith component to the sensor.
• The constant term σs is used to scale the bearing standard deviation.
Intuitively, the operation of (34) is to reduce the nominal probability of detection
at a point depending on the weights, means and standard deviations of the com-
ponents of the last estimated phd which have smaller range values than the range
of the point also taking into account the angular proximity of such components
to the bearing of the point in the exponential function in (34).
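As a numerical illustration of (34)–(35), the sketch below computes the occlusion-dependent probability of detection for a point given the Gaussian components in front of it. The function name, parameter names and numeric defaults are illustrative assumptions, not values taken from the paper.

```python
import math

def detection_probability(r, phi, components,
                          pD0=0.98, pD_min=0.3,
                          sigma_s=0.02, sigma_min=0.01, sigma_max=0.2):
    """Occlusion-dependent probability of detection, cf. Eqs. (34)-(35).
    `components` holds one (weight, range, bearing, bearing_std) tuple per
    Gaussian component of the last estimated phd."""
    pD = pD0
    for w, r_i, phi_i, sigma_phi in components:
        if r > r_i:  # only components in front of the point occlude it
            s_bar = min(max(sigma_phi, sigma_min), sigma_max)  # Eq. (35)
            pD -= w * (sigma_s / s_bar) * math.exp(
                -(phi - phi_i) ** 2 / (2 * s_bar ** 2))
    return max(pD_min, pD)
```

A component directly in line with the point (same bearing, smaller range) gives the largest reduction; components far away in bearing contribute almost nothing through the exponential term.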

Bibliography

D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, pages 1027–1035, Philadelphia, PA, USA, January 2007.

M. Baum and U. D. Hanebeck. Shape tracking of extended objects and group targets with star-convex RHMs. In Proceedings of the International Conference on Information Fusion, pages 338–345, Chicago, IL, USA, July 2011.

M. Baum, B. Noack, and U. D. Hanebeck. Extended object and group tracking with elliptic random hypersurface models. In Proceedings of the International Conference on Information Fusion, pages 1–8, Edinburgh, UK, July 2010.

C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, NY, USA, 2006. ISBN 0-387-31073-8.

Y. Boers, H. Driessen, J. Torstensson, M. Trieb, R. Karlsson, and F. Gustafsson. Track-before-detect algorithm for tracking extended targets. IEE Proceedings of Radar, Sonar and Navigation, 153(4):345–351, August 2006.

D. E. Clark, K. Panta, and B.-N. Vo. The GM-PHD filter multiple target tracker. In Proceedings of the International Conference on Information Fusion, Florence, Italy, July 2006.

O. Erdinc, P. Willett, and Y. Bar-Shalom. The bin-occupancy filter and its connection to the PHD filters. IEEE Transactions on Signal Processing, 57(11):4232–4246, November 2009.

K. Gilholm and D. Salmond. Spatial distribution model for tracking extended objects. IEE Proceedings of Radar, Sonar and Navigation, 152(5):364–371, October 2005.

K. Gilholm, S. Godsill, S. Maskell, and D. Salmond. Poisson models for extended target and group tracking. In Proceedings of Signal and Data Processing of Small Targets, volume 5913, pages 230–241, San Diego, CA, USA, August 2005. SPIE.

K. Granström, C. Lundquist, and U. Orguner. A Gaussian mixture PHD filter for extended target tracking. In Proceedings of the International Conference on Information Fusion, Edinburgh, UK, July 2010.

K. Granström, C. Lundquist, and U. Orguner. Extended target tracking using a Gaussian-mixture PHD filter. IEEE Transactions on Aerospace and Electronic Systems, 2011a. Under review.

K. Granström, C. Lundquist, and U. Orguner. Tracking rectangular and elliptical extended targets using laser measurements. In Proceedings of the International Conference on Information Fusion, Chicago, IL, USA, July 2011b.

T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York, NY, USA, 2nd edition, 2009.

C. Lundquist, K. Granström, and U. Orguner. Estimating the shape of targets with a PHD filter. In Proceedings of the International Conference on Information Fusion, Chicago, IL, USA, July 2011.

R. P. S. Mahler. Multitarget Bayes filtering via first-order multitarget moments. IEEE Transactions on Aerospace and Electronic Systems, 39(4):1152–1178, October 2003.

R. P. S. Mahler. Statistical Multisource-Multitarget Information Fusion. Artech House, Boston, MA, USA, 2007.

R. P. S. Mahler. PHD filters for nonstandard targets, I: Extended targets. In Proceedings of the International Conference on Information Fusion, pages 915–921, Seattle, WA, USA, July 2009.

U. Orguner, C. Lundquist, and K. Granström. Extended target tracking with a cardinalized probability hypothesis density filter. In Proceedings of the International Conference on Information Fusion, pages 65–72, Chicago, IL, USA, July 2011.

R. Ostrovsky, Y. Rabani, L. J. Schulman, and C. Swamy. The effectiveness of Lloyd-type methods for the k-means problem. In Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS), pages 165–174, Berkeley, CA, USA, October 2006.

K. Panta, D. E. Clark, and B.-N. Vo. Data association and track management for the Gaussian mixture probability hypothesis density filter. IEEE Transactions on Aerospace and Electronic Systems, 45(3):1003–1016, July 2009.

X. Rong Li and V. P. Jilkov. Survey of maneuvering target tracking: Part I. Dynamic models. IEEE Transactions on Aerospace and Electronic Systems, 39(4):1333–1364, October 2003.

G.-C. Rota. The number of partitions of a set. The American Mathematical Monthly, 71(5):498–504, May 1964. ISSN 0002-9890.

D. Schuhmacher, B.-T. Vo, and B.-N. Vo. A consistent metric for performance evaluation of multi-object filters. IEEE Transactions on Signal Processing, 56(8):3447–3457, August 2008.

L. Sorber. k-means++, 2011. URL http://www.mathworks.com/matlabcentral/fileexchange/28804-k-means++. Accessed 30-August-2011.

M. Ulmke, O. Erdinc, and P. Willett. Gaussian mixture cardinalized PHD filter for ground moving target tracking. In Proceedings of the International Conference on Information Fusion, pages 1–8, Quebec City, Canada, July 2007.

B.-N. Vo and W.-K. Ma. The Gaussian mixture probability hypothesis density filter. IEEE Transactions on Signal Processing, 54(11):4091–4104, November 2006.

H. Zhu, C. Han, and C. Li. An extended target tracking method with random finite set observations. In Proceedings of the International Conference on Information Fusion, pages 73–78, Chicago, IL, USA, July 2011.
Paper F

Estimating the Shape of Targets with a PHD Filter

Authors: Christian Lundquist, Karl Granström and Umut Orguner

Edited version of the paper:

C. Lundquist, K. Granström, and U. Orguner. Estimating the shape of targets with a PHD filter. In Proceedings of the International Conference on Information Fusion, Chicago, IL, USA, July 2011a.

Estimating the Shape of Targets with a PHD Filter

Christian Lundquist, Karl Granström and Umut Orguner

Dept. of Electrical Engineering,


Linköping University,
SE–581 83 Linköping, Sweden
[email protected], [email protected], [email protected]

Abstract
This paper presents a framework for tracking extended targets which
give rise to a structured set of measurements per scan. The concept
of a measurement generating point (mgp), defined on the boundary of
each target, is introduced. The tracking framework contains a hybrid
state space where the mgps and the measurements are modeled by random
finite sets and the target states by random vectors. The target states
are assumed to be partitioned into linear and nonlinear components,
and a Rao-Blackwellized particle filter is used for their estimation.
For each state particle, a probability hypothesis density (phd) filter
is utilized for estimating the conditional set of mgps given the target
states. The phd kept for each particle serves as a useful means to
represent the information in the set of measurements about the target
states. The early results obtained show promising performance, with
stable target following capability and reasonable shape estimates.

1 Introduction
In target tracking, the task is to detect, track and identify an unknown number
of targets using measurements that are affected by noise and clutter. In recent
years, so called extended target tracking has received increasing research atten-
tion. In extended target tracking, the classic point target assumption is relaxed
and the target tracking framework is modified to handle multiple measurements
per target and time step. The multiple measurements per target raise interest-
ing possibilities to estimate the target’s size and shape in addition to the target’s
position, velocity and heading. Typical sensors where targets cause multiple mea-
surements are cameras, automotive radars and laser range sensors – especially
cameras and laser range sensors give measurements with a high degree of struc-
ture.
Using finite set statistics (fisst), Mahler has derived rigorous tools for multiple
target tracking, see Mahler (2003) and Mahler (2007). In the Probability Hypothesis
Density filter (phd-filter) the targets and measurements are treated as random
finite sets (rfs); an implementation where the phd-intensity is approximated
using a mixture of Gaussians has been presented in Vo and Ma (2006). An ex-
tension of the phd-filter to handle extended targets that give Poisson distributed
measurements, as in Gilholm et al. (2005), was given in Mahler (2009). A Gaus-
sian mixture implementation of this extended target phd-filter was recently pre-
sented in Granström et al. (2010).

In the recent work by Mullane et al. (2011), the Simultaneous Local-
ization and Mapping (slam) problem was addressed using a Bayesian approach.
The measurements and feature map is modeled using rfs, giving a framework
in which the number and location of map features is jointly estimated with the
vehicle trajectory. A Rao-Blackwellized implementation is suggested where the
vehicle trajectory is estimated with a particle filter and the map is handled using
a Gaussian-mixture (gm-phd) filter.

In this paper, the ideas presented in Mullane et al. (2011) are utilized to solve
the problem of estimating the shape of an extended target. The sensors men-
tioned earlier typically obtain point measurements from reflection points on the
surface of the target. The realizations of the reflection points in this framework
are called measurement generating points (mgps) and their positions on the tar-
get are estimated as a means to estimate the shape, size and position of the target.
By considering the mgp as an rfs it is possible to create a measurement model
which better adapts to the actual and visible reflection points of the target. The
target state vector is estimated using a particle filter, and the rfs of mgps are esti-
mated with a gm-phd-filter. The target’s state vector is too large to be efficiently
estimated with a particle filter, and the linear and nonlinear parts are therefore
partitioned and estimated in a marginalized or Rao-Blackwellized particle filter,
see e.g., Schön et al. (2005). The joint estimation of the target density and the den-
sity of the mgps on the boundary leads to a Rao-Blackwellized implementation
of the joint particle and phd filter.

Modeling of extended targets in this work is very similar to active contours Blake
and Isard (1998) and snakes Kass et al. (1988), which model the outline of an
object based on 2 dimensional image data, studied extensively in computer vi-
sion. Detecting and identifying shapes in cluttered point clouds has been stud-
ied in Srivastava and Jermyn (2009), where the mgps on the surface of the target
are denoted samples. The underlying low-level processing involved assumes rea-
sonably the existence of image data from which features (like Harris corners) or
feature maps (like Harris measures etc.) can be extracted. The so-called Conden-
sation algorithm Isard and Blake (1998), for example, searches for suitable sets
of features in the image data (or in the feature map) iteratively for each of its dif-
ferent hypotheses (particles) in the calculation of the likelihoods. The approach
presented in this work carries the distinction of working only with a single set
of measurement points provided most of the time by thresholding of the raw
sensor data (like conventional target tracking) and hence is mainly aimed at
applications where the users (of the sensors) either are unable to reach or do not
have the computation power to work with the raw sensor data. The mapping of
the boundaries of complex objects is also achieved in Lazarus et al. (2010) using
splinegon techniques and a range sensor such as e.g., laser mounted on moving
vehicles.
There are also several other approaches to extended target tracking in the litera-
ture. Gilholm presents in Gilholm and Salmond (2005); Gilholm et al. (2005) an
approach where it is assumed that the number of received target measurements
is Poisson distributed, hence several measurements may originate from the same
target. In Boers et al. (2006) a similar approach is presented where raw data is
considered in a track-before-detect framework and no data association is needed.
Monte Carlo methods are applied to solve the extended target tracking problem
in Vermaak et al. (2005); Angelova and Mihaylova (2008) and random matrices
are used by Koch in Koch (2008).
The paper is organized as follows. Section 2 introduces the rfs of measurements
and mgps as well as the model of the target. The filter framework is described in
Section 3 and an implementation is given in Section 4. A simple example with
simulation results is shown in Section 5. Section 6 contains conclusions and some
thoughts on future work.

2 The RFS Extended Target Model


Consider an extended target whose characteristics, e.g., position, heading, velocity
and shape, are described by a state vector x_k. The target is observed by a sensor,
which provides point observations z_k^(m) of the target. The number of measurements
z_k at any given time is not fixed, due to detection uncertainty, spurious
measurements and an unknown number of reflection points on the surface of the
target. Since the target is extended, it potentially gives rise to more than one
measurement per time step. Hence, the measurements obtained from one target may
be represented by a random finite set (rfs) of observations

    Z_k = { z_k^(1), ..., z_k^(z_k) }    (1)

where k refers to discrete time. One way to associate the point measurements
to the target is to consider a number of reflection points, called measurement
generating points (mgp) in this work, on the surface of the target. A mgp is
defined on a one dimensional coordinate axis, denoted s ∈ R, on the boundary
of the target, which can generally be restricted to an interval [s_min, s_max]
called the mgp-space. All mgps belonging to a single target may be modeled as a
finite set

    S_k = { s_k^(1), ..., s_k^(s_k) }.    (2)

A simple example is shown in Figure 1, where visible mgps and point measure-
ments are illustrated.
[Figure 1: A shape with visible mgps S_k (points s_k^(i)) and measurements Z_k (points z_k^(m)); axes x [m] and y [m].]

In a Bayesian framework, the states and the measurements are treated as realiza-
tions of random variables. Since in this case both the measurements and the mgps
are represented by finite sets, the concept of rfs is required for the Bayesian esti-
mation of the target state. An rfs is a finite set valued random variable, whose el-
ements are random variables and the number of elements are treated as a random
non-negative integer variable. In a filtering and estimation framework the proba-
bility density is a very useful descriptor of an rfs. Standard tools and notation to
compute densities and integrations of random variables are not appropriate for
rfs and therefore finite set statistics fisst, developed by Mahler Mahler (2003)
as a practical mathematical tool to deal with rfs, has to be applied.

Consider again the target in Figure 1, and let S be the rfs of all mgps on the
surface of the target. Furthermore let Sk ⊂ S be the detectable mgps. As the
target moves new mgps might become visible to the sensor, either by observing a
new side of the object or by observing new reflection points, e.g., a wheel arch of
a car, becoming visible when the angle between the target and the sensor changes.
The new mgps are modeled by an independent rfs Bk (xk ) ⊂ S depending on the
target state xk . The set of visible mgps evolves in time as
Sk = Fk (Sk−1 ) ∪ Bk (xk ). (3)
where F_k( · ) is a set function modeling the survival (or death) of the previously
visible mgps. The rfs transition density is then given by

    p(S_k | S_{k−1}, x_k) = Σ_{S̄ ⊆ S_k} p_S(S̄ | S_{k−1}) p_B(S_k − S̄ | x_k)    (4)

where p_S( · | S_{k−1}) is the transition density of the observable set of mgps,
i.e., those which are in the field of view of the sensor, and p_B( · | x_k) is the
density of the rfs B_k(x_k) of the newly visible mgps. Furthermore, the target
dynamics is modeled by a standard Markov process with transition density
p(x_k | x_{k−1}) and the joint transition density of the target and its mgps is

    p(S_k, x_k | S_{k−1}, x_{k−1}) = p(S_k | S_{k−1}, x_k) p(x_k | x_{k−1}).    (5)

The rfs of measurements Z_k received at time k by the sensor from a target with
state vector x_k and rfs S_k of mgps is modeled as

    Z_k = ∪_{s ∈ S_k} H_k(s, x_k) ∪ C_k    (6)

where H_k(s, x_k) is the rfs of measurements produced by the mgp s, modeling also
the non-perfect detection process, and C_k is the clutter rfs. The rfs of
measurements produced by a mgp is modeled by a Bernoulli rfs, i.e., it is
H_k(s, x_k) = ∅ with probability 1 − P_D(s | x_k) and H_k(s, x_k) = {z} with the
probability density P_D(s | x_k) p(z | s, x_k), cf. Mahler (2007). The probability
of detection is denoted by P_D(s | x_k), defined over the mgp-space, and
p(z | s, x_k) is the likelihood that the mgp s produces the measurement z. The
likelihood function of the measurement rfs Z_k is

    p(Z_k | x_k, S_k) = Σ_{Z̄ ⊆ Z_k} p_Z(Z̄ | S_k, x_k) p_C(Z_k − Z̄ | x_k)    (7)

where p_Z(Z̄ | S_k, x_k) is the density representing the rfs of detected
observations. The density of the clutter rfs C_k is denoted p_C( · | x_k), which
is Poisson in cardinality and uniform in space.

The relation between the shape of the target and some of the state components
is highly nonlinear, which makes an application of the particle filter suitable.
Suppose that the target state x_k can be partitioned into a nonlinear and a
linear part according to

    x_k = [ (x_k^n)^T  (x_k^l)^T ]^T,    (8)

the state space model for x_k is given by

    x_{k+1}^n = f_k^n(x_k^n) + F_k^n(x_k^n) x_k^l + w_k^n    (9a)
    x_{k+1}^l = f_k^l(x_k^n) + F_k^l(x_k^n) x_k^l + w_k^l.   (9b)

For a single measurement, the measurement model is given by

    z_k ∼ N( · ; ẑ_k(x_k^n, s_k), R_k )   if z is target generated
    z_k ∼ c_k( · )                        if z is clutter generated    (9c)

where

    ẑ_k(x_k^n, s_k) = h_k^n(x_k^n, s_k) + H_k(x_k^n) x_k^l.    (9d)

The process noise w_k is assumed to be

    w_k = [ (w_k^n)^T  (w_k^l)^T ]^T ∼ N(0, Q_k),   Q_k = [ Q_k^n  0 ; 0  Q_k^l ]    (10)

i.e., the process noise covariance is here assumed to be block diagonal. The
measurement noise is modeled as e_k ∼ N(0, R_k). The mgps s_k^(i) are assumed to
be stationary on the boundary of the target and the motion model for them is
selected to be a random walk model.
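The measurement model above can be illustrated with a small simulation sketch: each mgp yields a noisy point detection with probability P_D (a Bernoulli rfs per mgp, cf. (6) and (9c)), and clutter with Poisson cardinality, uniform in space, is added. The code below is a hedged toy version that represents the mgps directly as 2D points; all names are illustrative.

```python
import math
import random

def _poisson(lam):
    """Knuth's method for sampling a Poisson-distributed cardinality."""
    if lam <= 0.0:
        return 0
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def generate_measurements(mgps, pD, meas_std, clutter_rate, region):
    """Simulate one scan: Bernoulli detection per mgp plus uniform clutter."""
    # detections: each mgp is observed with probability pD, with Gaussian noise
    Z = [(random.gauss(sx, meas_std), random.gauss(sy, meas_std))
         for (sx, sy) in mgps if random.random() < pD]
    # clutter: Poisson number of points, uniform over the surveillance region
    (xmin, xmax), (ymin, ymax) = region
    for _ in range(_poisson(clutter_rate)):
        Z.append((random.uniform(xmin, xmax), random.uniform(ymin, ymax)))
    return Z
```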

3 Filtering Framework

The aim of the filtering algorithm is to estimate p(x_{1:k}^n, x_{1:k}^l, S_k | Z_{0:k}),
the joint posterior given the set of measurements Z_{0:k}. The posterior may be
factorized according to

    p(x_{1:k}^n, x_{1:k}^l, S_k | Z_{0:k}) = p(x_{1:k}^l | x_{0:k}^n, S_k, Z_{0:k}) p(S_k | x_{0:k}^n, Z_{0:k}) p(x_{1:k}^n | Z_{0:k})    (11)

and the three posteriors, for the target states x_k^l and x_k^n and the mgps
S_k = {s_k^(1), ..., s_k^(s_k)}, can be computed separately. However, since the
filter recursion needed to estimate the probability density of the mgp rfs,
p(S_k | x_{0:k}^n, Z_{0:k}), includes set integrals, it is numerically intractable,
and it is necessary to find a tractable approximation. The first order moment of
an rfs is represented by a phd, denoted D. For the rfs S_k, the phd is a
nonnegative function such that for each region S in the space of mgps

    ∫_S D_k(x) dx = E(|S_k ∩ S|) = ŝ_k    (12)

where ŝ_k is the expected number of elements in the region S, and the peaks of
the phd indicate locations of mgps with high probability. If the cardinality of
elements in the rfs S_k is Poisson distributed and the elements are i.i.d., then
the probability density of S_k can be computed from the phd according to

    p(S_k) = Π_{s ∈ S_k} D_k(s) / exp( ∫ D_k(s) ds ).    (13)

The phd-filter propagates the phd of an rfs in time and provides a reasonable
approximation to the multitarget Bayesian filter. The filter recursion for the
joint rfs and target state posterior density is described in the following. First
the target state and the mgps are predicted as described in Section 3.1, and
thereafter a measurement update of the mgps and the target states is performed
as described in Section 3.2.
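Equations (12) and (13) can be checked numerically for a Gaussian-mixture phd: the integral of the phd equals the sum of the mixture weights, and the Poisson-rfs density follows from the product of phd values. The sketch below uses a 1D mgp-space and illustrative names, assuming the gm representation introduced later in Section 4.

```python
import math

def gm_phd(s, components):
    """Evaluate a 1D Gaussian-mixture phd D(s) = sum_j eta_j N(s; mu_j, P_j)."""
    return sum(eta * math.exp(-(s - mu) ** 2 / (2 * P))
               / math.sqrt(2 * math.pi * P)
               for eta, mu, P in components)

def expected_cardinality(components):
    """Eq. (12): the integral of the phd equals the sum of the weights."""
    return sum(eta for eta, _, _ in components)

def poisson_rfs_density(S, components):
    """Eq. (13): p(S) = prod_{s in S} D(s) / exp(integral of D)."""
    prod = 1.0
    for s in S:
        prod *= gm_phd(s, components)
    return prod / math.exp(expected_cardinality(components))
```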

3.1 Prediction

Given the prior

    p(x_{1:k−1}^n, x_{1:k−1}^l, S_{k−1} | Z_{0:k−1})
      = p(x_{1:k−1}^l | x_{0:k−1}^n, S_{k−1}, Z_{0:k−1}) p(S_{k−1} | x_{0:k−1}^n, Z_{0:k−1}) p(x_{1:k−1}^n | Z_{0:k−1})    (14)

1. Predict first the nonlinear state components x^n, i.e.,

    p(x_{1:k−1}^n | Z_{0:k−1}) → p(x_{1:k}^n | Z_{0:k−1})    (15)

using (9a).

2. Predict the linear state components x^l, i.e.,

    p(x_{1:k−1}^l | x_{0:k−1}^n, Z_{0:k−1}) → p(x_{1:k}^l | x_{0:k}^n, Z_{0:k−1})    (16)

using (9a) and (9b). Compare with the marginalized particle filter Schön et al.
(2005), where the linear components are predicted and updated with a Kalman
filter, see Lines 3–6 in Algorithm 14.

3. Finally, predict the mgps according to

    p(S_{k−1} | x_{0:k}^n, Z_{0:k−1}) → p(S_k | x_{0:k}^n, Z_{0:k−1}).    (17)

The predicted rfs is approximated by a Poisson rfs with phd D(s_k | x_{0:k}, Z_{0:k−1}) as

    p(S_k | x_{0:k}, Z_{0:k−1}) ≈ Π_{s ∈ S_k} D(s | x_{0:k}, Z_{0:k−1}) / exp( ∫ D(s | x_{0:k}, Z_{0:k−1}) ds )    (18)

cf. (13). Using this approximation and the motion model (3), the phd prediction
equation is given by

    D_{k|k−1}(s | x_{0:k}) = P_S(s) D_{k−1}(s | x_{0:k}) + D_{b,k}(s)    (19)

where D_{b,k}(s) is the phd of the birth rfs B_k and P_S( · ) models the
probability of survival of a mgp. Note that the abbreviation

    D_{k|k−1}(s | x_{0:k}) ≜ D(s_k | x_{0:k}, Z_{0:k−1})    (20)

is adopted for clarity.
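For a Gaussian-mixture phd with the random-walk mgp motion model, the prediction equation (19) amounts to scaling the surviving weights by the survival probability, inflating the covariances with the random-walk noise, and appending the birth components. A minimal sketch for a 1D mgp-space, with illustrative names and (weight, mean, variance) tuples:

```python
def predict_gm_phd(components, birth_components, p_survival, q_s):
    """Eq. (19) for a Gaussian-mixture phd: surviving components keep their
    means, weights scale by p_survival, variances grow by the random-walk
    noise q_s; birth components are appended unchanged."""
    predicted = [(p_survival * eta, mu, P + q_s)
                 for eta, mu, P in components]
    return predicted + list(birth_components)
```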

3.2 Measurement Update

4. Update the mgps according to

    p(S_k | x_{0:k}^n, Z_{0:k−1}) → p(S_k | x_{0:k}^n, Z_{0:k})    (21)

where the update is described by

    p(S_k | x_{0:k}^n, Z_{0:k}) = p(Z_k | S_k, x_k^n) p(S_k | x_{0:k}^n, Z_{0:k−1}) / p(Z_k | Z_{0:k−1}, x_{0:k}^n).    (22)

Like the case in the prediction, the posterior is approximated by a Poisson rfs
with phd D(s_k | x_{0:k}, Z_{0:k}) as

    p(S_k | x_{0:k}, Z_{0:k}) ≈ Π_{s ∈ S_k} D(s | x_{0:k}, Z_{0:k}) / exp( ∫ D(s | x_{0:k}, Z_{0:k}) ds )    (23)

cf. (13). Using this approximation and the measurement model (6), the phd
corrector equation is given by

    D_k(s | x_{0:k}) = D_{k|k−1}(s | x_{0:k}) [ 1 − P_D(s | x_k)
      + Σ_{z ∈ Z_k} Λ(z_k | s_k, x_k) / ( λ c_k(z) + ∫ Λ(z_k | ζ, x_k) D_{k|k−1}(ζ | x_{0:k}) dζ ) ]    (24)

where

    Λ(z_k | s_k, x_k) = P_D(s_k | x_k) p(z_k | s_k, x_k).    (25)

For clarity the abbreviation

    D_k(s | x_{0:k}) ≜ D(s_k | x_{0:k}, Z_{0:k})    (26)

is used.

5. Update the nonlinear state components x^n

    p(x_{1:k}^n | Z_{0:k−1}) → p(x_{1:k}^n | Z_{0:k})    (27)

where the update is modeled as

    p(x_{1:k}^n | Z_{0:k}) = p(Z_k | Z_{0:k−1}, x_{0:k}^n) p(x_k^n | x_{k−1}^n) p(x_{1:k−1}^n | Z_{0:k−1}) / p(Z_k | Z_{0:k−1}).    (28)

The term p(Z_k | Z_{0:k−1}, x_{0:k}^n) is given by a set integration over S_k,
according to

    p(Z_k | Z_{0:k−1}, x_{0:k}^n) = ∫ p(Z_k, S_k | Z_{0:k−1}, x_{0:k}^n) δS_k.    (29)

To avoid the set integral, note that p(Z_k | Z_{0:k−1}, x_{0:k}^n) is the
normalization constant in (22), hence it may be rewritten according to

    p(Z_k | Z_{0:k−1}, x_{0:k}^n) = p(Z_k | S_k, x_k^n) p(S_k | x_{0:k}^n, Z_{0:k−1}) / p(S_k | x_{0:k}^n, Z_{0:k}).    (30)

Note that S_k is only contained in the right-hand side, thus the relation holds
for arbitrary choices of S_k. We substitute here the last estimated S_k by making
further approximations as follows. Using the Poisson rfs approximations (18) and
(23) and the rfs measurement likelihood (7), it holds that

    p(Z_k | Z_{0:k−1}, x_{0:k}^n)
      = p(Z_k | S_k, x_k^n) p(S_k | x_{0:k}^n, Z_{0:k−1}) / p(S_k | x_{0:k}^n, Z_{0:k})
      ≈ p(Z_k | S_k, x_k^n) [ Π_{s ∈ S_k} D_{k|k−1}(s | x_{0:k}) / exp( ∫ D_{k|k−1}(s | x_{0:k}) ds ) ]
                            / [ Π_{s ∈ S_k} D_k(s | x_{0:k}) / exp( ∫ D_k(s | x_{0:k}) ds ) ]
      = A · B · C    (31a)

where A ≜ p(Z_k | S_k, x_k^n), B ≜ Π_{s ∈ S_k} D_{k|k−1}(s | x_{0:k}) / D_k(s | x_{0:k})
and C ≜ exp( ∫ D_k(s | x_{0:k}) ds ) / exp( ∫ D_{k|k−1}(s | x_{0:k}) ds ). The term A is
approximated as

    A ≈ Π_{z ∈ Z} ( λ c(z) + P_D(s^(i)) p(z^(i) | s^(i), x_k^n) ) Π_{s ∈ S̄_k} ( 1 − P_D(s) )    (31b)

with the ith mgp being the one closest to measurement i according to

    s^(i) = arg min_{s ∈ S_k} || z_k^(i) − s ||    (31c)
    S̄_k = S_k \ { s^(i) }_{i=1}^{z_k}.    (31d)

Furthermore,

    B = Π_{i=1}^{ŝ_k} 1 / [ (1 − P_D(s^(i)))
          + Σ_{z ∈ Z} Λ(z | s^(i), x_k) / ( λ c(z) + ∫ Λ(z | ζ, x_k) D_{k|k−1}(ζ | x_{0:k}) dζ ) ]    (31e)
    C = exp( ŝ_k − ŝ_{k|k−1} )    (31f)

and Λ( · ) is given in (25). The numbers of updated and predicted mgps are

    ŝ_k = ∫ D_k(s_k | x_{0:k}, Z_{0:k}) ds_k    (32a)
    ŝ_{k|k−1} = ∫ D(s_k | x_{0:k}, Z_{0:k−1}) ds_k.    (32b)

6. Update the linear state components x^l

    p(x_{1:k}^l | x_{0:k}^n, S_k, Z_{0:k−1}) → p(x_{1:k}^l | x_{0:k}^n, S_k, Z_{0:k})    (33)

The update of x_{1:k}^l is conditioned on the rfs of the mgps S_k. As an
approximation, the posterior of x^l is written as

    p(x_{1:k}^l | x_{0:k}^n, S_k, Z_{0:k}) ≈ p(x_{1:k}^l | x_{0:k}^n, Ŝ_k, Z_{0:k})    (34)

where the estimate Ŝ_k is calculated from the posterior phd D_k(s | x_{0:k}). The
linear components are updated as described in Lines 46 to 51 in Algorithm 14.

7. The joint posterior is now given by multiplying the separate posteriors from
(21), (27) and (33) as

    p(x_{1:k}^n, x_{1:k}^l, S_k | Z_{0:k}) ≈ p(x_{1:k}^l | x_{0:k}^n, Ŝ_k, Z_{0:k}) p(S_k | x_{0:k}^n, Z_{0:k}) p(x_{1:k}^n | Z_{0:k})    (35)

An implementation of the proposed filter recursion is presented in the next
section.

4 RBPF-PHD Implementation

A Rao-Blackwellized implementation is utilized to estimate the target state. The
nonlinear target state x^n is propagated with a particle filter and the linear
state vector is propagated with a Kalman filter. The mgps are described by a set
S_k, which is estimated with a phd-filter. There exists one phd D_k^(i)( · ) for
each particle i of the target state. The overall summary statistics is
represented by

    { w_k^(i), x_{0:k}^{n,(i)}, x̂_k^{l,(i)}, P_k^{l,(i)}, D_k^(i)(s | x_{0:k}^(i)) }_{i=1}^N    (36)

where

• x_{0:k}^{n,(i)} is the ith particle for the nonlinear part of the target state.
• x̂_k^{l,(i)} and P_k^{l,(i)} are the ith mean and covariance for the linear part
  of the target state.
• w_k^(i) is the weight of the ith particle of the target state.
• D_k^(i)(s | x_{0:k}^(i)) is the phd representing the mgps for the ith particle
  of the target state.

In this work the phd D_k^(i)(s | x_k^(i)) is represented by a Gaussian mixture,
and a realization of the gm-phd filter recursion is described in Section 4.1, see
also Vo and Ma (2006). The update of the nonlinear state components based on the
phd is described in Section 4.2, and pseudo code for the proposed filter is given
in Section 4.3.

4.1 PHD Prediction and Update

The mgps on the surface of the ith particle are approximated with a phd
D_k^(i)(s | x_k) represented by a Gaussian mixture. The prior phd is given by

    D_{k−1}^(i)(s | x_{k−1}^(i)) = Σ_{j=1}^{J_{k−1}^(i)} η_{k−1}^{(i,j)} N( s; µ_{k−1}^{(i,j)}, P_{k−1}^{(i,j)} ).    (37)

This is a mixture of J_{k−1}^(i) Gaussian components, with η_k^{(i,j)}, µ_k^{(i,j)}
and P_k^{(i,j)} being the weights, means and covariances, respectively, for the
jth Gaussian component of the ith particle.

The prediction of mgps in the motion model (3) is given by the union of prior
mgps and new mgps, which is approximated by the phd prediction equation (19).
The resulting predicted phd, represented by a Gaussian mixture, is then given as

    D_{k|k−1}^(i)(s | x_k^(i)) = Σ_{j=1}^{J_{k|k−1}^(i)} η_{k|k−1}^{(i,j)} N( s; µ_{k|k−1}^{(i,j)}, P_{k|k−1}^{(i,j)} )    (38)

where J_{k|k−1}^(i) = J_{k−1}^(i) + J_{b,k}^(i) and J_{b,k}^(i) is the number of new
Gaussian components. The posterior phd is computed in the gm form as

    D_k^(i)(s | x_k^{n,(i)}) = D_{k|k−1}^(i)(s | x_k^{n,(i)}) ( 1 − p_D(s | x_k^{n,(i)}) )
      + Σ_{z ∈ Z_k} Σ_{j=1}^{J_{k|k−1}^(i)} D_{G,k}^{(i,j)}(z, s | x_k^{n,(i)})    (39)

where the components are given by

    D_{G,k}^{(i,j)}(z, s | x_k^{n,(i)}) = η_{k|k}^{(i,j)} N( s; µ_{k|k}^{(i,j)}, P_{k|k}^{(i,j)} )    (40a)

    η_{k|k}^{(i,j)} = P_D(s) η_{k|k−1}^{(i,j)} q^{(i,j)}(z, x_k^{n,(i)})
      / ( λ c(z) + Σ_{ℓ=1}^{J_{k|k−1}^(i)} P_D(s) η_{k|k−1}^{(i,ℓ)} q^{(i,ℓ)}(z, x_k^{n,(i)}) )    (40b)

where q^{(i,j)}(z, x_k^{n,(i)}) = N( z; ẑ(x_k^{n,(i)}, µ_{k|k−1}^{(i,j)}), S_k^{(i,j)} )
and where ẑ( · ) is given in (9d). The terms µ_{k|k}^{(i,j)}, P_{k|k}^{(i,j)} and
S_k^{(i,j)} can be obtained using standard filtering techniques, such as the ekf
or the ukf. The clutter density is c(z) = U(z), where λ_c is the average number
of clutter measurements and U(z) is a uniform distribution on the measurement
space.
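The weight update (40b) can be sketched for a scalar measurement space as follows; `predicted` carries the predicted weight, predicted measurement and innovation variance per component, and all names are illustrative:

```python
import math

def gauss(z, mean, var):
    """Scalar Gaussian density N(z; mean, var)."""
    return math.exp(-(z - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def update_weights(z, predicted, pD, clutter):
    """Eq. (40b) in 1D: posterior weight of each component given measurement z.
    `predicted` is a list of (eta, zhat, S) tuples per Gaussian component."""
    likelihoods = [pD * eta * gauss(z, zhat, S) for eta, zhat, S in predicted]
    denom = clutter + sum(likelihoods)       # clutter term lambda*c(z) + sum
    return [lik / denom for lik in likelihoods]
```

With zero clutter and a single component, the posterior weight is exactly one; a nonzero clutter density pulls all weights down.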

4.2 Particle Update

The transition density is chosen as the proposal distribution and hence the
particles are sampled as

    x_k^{n,(i)} ∼ p(x_k^{n,(i)} | x_{0:k−1}^{n,(i)})    (41a)
    w_k^(i) = p(Z_k | Z_{0:k−1}, x_{0:k}^(i)) w_{k−1}^(i)    (41b)

where the prediction can be done using (9a) with the substitution of the last
estimated values of the linear components. Note that the linear components are
treated as noise here, and that therefore the process noise is

    Q_k^n + F_k^n(x_{k−1}^{n,(i)}) P_{k−1}^{l,(i)} ( F_k^n(x_{k−1}^{n,(i)}) )^T

when sampling the particles. The update of the weights can be computed
according to (31), where

    ŝ_{k|k−1}^(i) = Σ_{j=1}^{J_{k|k−1}^(i)} η_{k|k−1}^{(i,j)}   and   ŝ_k^(i) = Σ_{j=1}^{J_k^(i)} η_k^{(i,j)}    (42)

are the sums of the predicted and updated phd weights for the ith particle.

4.3 Algorithm

The algorithm is given in Algorithm 14. The state estimates are extracted by
taking the weighted mean

    x̂_k = ( 1 / Σ_{i=1}^N w_k^(i) ) Σ_{i=1}^N w_k^(i) x_k^(i).    (43)

The mgps need not be extracted, since they are only considered as a means to
estimate the target state.

Algorithm 14 Prediction and Update

Require: {w_{k-1}^{(i)}, x_{k-1}^{n,(i)}, x̂_{k-1}^{l,(i)}, P_{k-1}^{(i)}, D_{k-1}^{(i)}}_{i=1}^N, where the phd is composed of
D_{k-1}^{(i)} = {η_{k-1}^{(i,j)}, μ_{k-1}^{(i,j)}, Σ_{k-1}^{(i,j)}}_{j=1}^{J_{k-1}^{(i)}}
 1: for i = 1 to N do
    Predict x^n
 2:   x_{k|k-1}^{n,(i)} ∼ p(x_k^n | x_{0:k-1}^{n,(i)}, x̂_{k-1}^{l,(i)})  {cf. (9a)}
    Predict x^l
 3:   S_{k-1} = F_{k-1}^n P_{k-1}^{(i)} (F_{k-1}^n)^T + Q_{k-1}^n
 4:   K_{k-1} = F_k^l P_{k-1}^{(i)} (F_{k-1}^n)^T S_{k-1}^{-1}
 5:   x̂_{k|k-1}^{l,(i)} = F_k^l x̂_{k-1}^{l,(i)} + f_{k-1}^l + K_{k-1} (x_{k|k-1}^{n,(i)} − f_{k-1}^n − F_{k-1}^n x̂_{k-1}^{l,(i)})
 6:   P_{k|k-1}^{(i)} = F_{k-1}^l P_{k-1}^{(i)} (F_{k-1}^l)^T + Q_{k-1}^l − K_{k-1} S_{k-1} K_{k-1}^T
    Predict PHD D
 7:   ℓ = 0
 8:   for j = 1 to J_{b,k}^{(i)} do
 9:     ℓ = ℓ + 1
10:     η_{k|k-1}^{(i,ℓ)} = η_{b,k}^{(i,j)},  μ_{k|k-1}^{(i,ℓ)} = μ_{b,k}^{(i,j)},  Σ_{k|k-1}^{(i,ℓ)} = Σ_{b,k}^{(i,j)}
11:   end for
12:   for j = 1 to J_{k-1}^{(i)} do
13:     ℓ = ℓ + 1
14:     η_{k|k-1}^{(i,ℓ)} = P_S η_{k-1}^{(i,j)},  μ_{k|k-1}^{(i,ℓ)} = μ_{k-1}^{(i,j)},  Σ_{k|k-1}^{(i,ℓ)} = Σ_{k-1}^{(i,j)} + Q_{k-1}^s
15:   end for
16:   J_{k|k-1}^{(i)} = ℓ,  ŝ_{k|k-1}^{(i)} = Σ_{j=1}^{J_{k|k-1}^{(i)}} η_{k|k-1}^{(i,j)}
    Update PHD D
17:   for j = 1 to J_{k|k-1}^{(i)} do
18:     ẑ_{k|k-1}^{(i,j)} = h_k(x_{k|k-1}^{n,(i)}, μ_{k|k-1}^{(i,j)}) + H_k(x_{k|k-1}^{n,(i)}) x̂_{k|k-1}^{l,(i)}  {cf. (9d)}
19:     ∇H_k = ∂/∂s h_k(x, s) evaluated at x = x_{k|k-1}^{n,(i)}, s = μ_{k|k-1}^{(i,j)}  {cf. (48)}
20:     S_k^{n,(i)} = H_k(x_{k|k-1}^{n,(i)}) P_{k|k-1}^{(i)} H_k^T(x_{k|k-1}^{n,(i)})
21:     S_k^{(i,j)} = ∇H_k Σ_{k|k-1}^{(i,j)} (∇H_k)^T + S_k^{n,(i)} + R
22:     K_k^{(i,j)} = Σ_{k|k-1}^{(i,j)} (∇H_k)^T (S_k^{(i,j)})^{-1}
23:   end for
24:   for missed detections j = 1 to J_{k|k-1}^{(i)} do
25:     η_k^{(i,j)} = (1 − P_D) η_{k|k-1}^{(i,j)},  μ_k^{(i,j)} = μ_{k|k-1}^{(i,j)},  Σ_k^{(i,j)} = Σ_{k|k-1}^{(i,j)}
26:   end for
27:   ℓ = 0
28:   for each detection z_k ∈ Z_k do
29:     ℓ = ℓ + 1
30:     for j = 1 to J_{k|k-1}^{(i)} do
31:       τ(j) = P_D η_{k|k-1}^{(i,j)} N(z_k; ẑ_{k|k-1}^{(i,j)}, S_k^{(i,j)})
32:       μ_k^{(i, ℓ J_{k|k-1}^{(i)} + j)} = μ_{k|k-1}^{(i,j)} + K_k^{(i,j)} (z_k − ẑ_{k|k-1}^{(i,j)})
33:       Σ_k^{(i, ℓ J_{k|k-1}^{(i)} + j)} = Σ_{k|k-1}^{(i,j)} − K_k^{(i,j)} S_k^{(i,j)} (K_k^{(i,j)})^T
34:     end for
35:     for j = 1 to J_{k|k-1}^{(i)} do
36:       η_k^{(i, ℓ J_{k|k-1}^{(i)} + j)} = τ(j) / (λ c(z_k) + Σ_{m=1}^{J_{k|k-1}^{(i)}} τ(m))
37:     end for
38:   end for
39:   J_k^{(i)} = (ℓ + 1) J_{k|k-1}^{(i)},  ŝ_k^{(i)} = Σ_{j=1}^{J_k^{(i)}} η_k^{(i,j)}
40:   merge and prune Gaussians
    Update x^n
41:   w̃_k^{(i)} = p(Z_k | Z_{0:k-1}, x_{0:k}^{n,(i)}) w_{k-1}^{(i)}  {cf. (31)}
42: end for
43: w_k^{(i)} = w̃_k^{(i)} / Σ_{j=1}^N w̃_k^{(j)}  {Normalize}
44: resample if necessary
    Update x^l
45: for i = 1 to N do
46:   extract mgps Ŝ_k^{(i)} = {ŝ_k^{(i,j)}}_{j=1}^{ŝ_k^{(i)}} from D_k^{(i)}
47:   compute association matrix A using e.g., NN
48:   S_k = H_k^l P_{k|k-1}^{(i)} (H_k^l)^T + R_k
49:   K_k = P_{k|k-1}^{(i)} (H_k^l)^T S_k^{-1}
50:   x̂_k^{l,(i)} = x̂_{k|k-1}^{l,(i)} + K_k (A Z_k − h_k(x_{k|k}^{n,(i)}, Ŝ_k^{(i)}) − H_k(x_{k|k}^{n,(i)}) x̂_{k|k-1}^{l,(i)})
51:   P_{k|k}^{(i)} = P_{k|k-1}^{(i)} − K_k S_k K_k^T
52: end for
53: return {w_k^{(i)}, x_k^{n,(i)}, x̂_k^{l,(i)}, P_k^{(i)}, D_k^{(i)}}_{i=1}^N and ŝ_k^{(i)}, with D_k^{(i)} = {η_k^{(i,j)}, μ_k^{(i,j)}, Σ_k^{(i,j)}}_{j=1}^{J_k^{(i)}}
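The PHD prediction step (Lines 7–16 of Algorithm 14) can be sketched for one particle as follows; the representation of the components as (eta, mu, Sigma) tuples and the function names are illustrative choices, not part of the paper.

```python
import numpy as np

def phd_predict(comps, birth, p_s, Q_s):
    """PHD prediction as in Lines 7-16 of Algorithm 14 (one particle).

    comps, birth : lists of (eta, mu, Sigma) tuples for the surviving and
                   birth components, respectively
    p_s          : survival probability P_S
    Q_s          : process noise Q^s added to each mgp covariance
    """
    # New birth components are copied unchanged (Lines 8-11)
    pred = [(eta, mu, Sigma) for (eta, mu, Sigma) in birth]
    # Surviving components are scaled by P_S, covariances inflated (Lines 12-15)
    for eta, mu, Sigma in comps:
        pred.append((p_s * eta, mu, Sigma + Q_s))
    # Sum of predicted weights, i.e. the expected number of mgps (Line 16)
    s_hat = sum(eta for eta, _, _ in pred)
    return pred, s_hat
```

The returned weight sum plays the role of ŝ_{k|k-1}^{(i)} in (42).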

5 Simulation Example

In this section, we are going to use a simple example to illustrate the filtering
solution we propose. In this example, we use the following specific target and
measurement models.

• Target State Model: The state vector of the target is given by

      x = [x  y  v  ψ  t]^T,    (44)

  where (x, y) is the planar Cartesian position of the target. It may be any
  point related to the target; in this example it is assumed to be the center
  position. Furthermore, v is the velocity and ψ is the heading angle. The
  shape and size of the target are described by the shape component t.
  Considering a simple but common shape, e.g., a rectangle, its size may be
  represented by a length l and a width w. In this example a simple
  coordinated turn motion model is used,

      x_{k+1} = x_k + T v_k cos ψ_k,    (45a)
      y_{k+1} = y_k + T v_k sin ψ_k,    (45b)
      v_{k+1} = v_k + w_v,    (45c)
      ψ_{k+1} = ψ_k + w_ψ,    (45d)
      t_{k+1} = t_k + w_t,    (45e)

  where T is the sample time. The heading angle ψ is clearly modeled as a
  nonlinear component.

• Measurements and Shape Model: Let the point measurement be a Cartesian
  position on the border of the target, i.e.,

      z = [x̄  ȳ]^T.    (46)

  Common point measurement sensors, such as radar and laser, typically
  measure range and bearing. The polar representation of the sensor data has
  here been converted to a Cartesian representation. This also means that the
  measurement noise covariance must be converted, and that it is not
  necessarily diagonal.

  A mgp is defined by a coordinate s along the border of the target, which
  has a spline representation of order d. In this case, the measurement
  model (9d) may be written according to

      ẑ = h(x^n, S) + [x  y]^T    (47a)

  with

      h(x^n, S) = Rot(ψ) [ Π B^σ G^σ        0      ] Γ,    (47b)
                         [     0        Π B^σ G^σ ]

      Π = [1  s  s²  ···  s^{d−1}],    (47c)

  where Rot is a rotation matrix, and B^σ and G^σ are a basis matrix and a
  placement matrix, respectively. The vector Γ includes the shape parameters
  t. Spline curves are well described in, e.g., Blake and Isard (1998).

A nice property of spline curves is that they are continuously differentiable,
which is important if the ekf is utilized to update the Gaussian components of
the phd, compare with Line 19 in Algorithm 14. The derivative of the
measurement model (47a) with respect to the mgps is

    ∇H_k = ∂/∂s h_k(x^n, s)    (48a)

         = Rot(ψ) [ ∇Π B^σ G^σ        0       ] Γ,    (48b)
                  [      0        ∇Π B^σ G^σ ]

    ∇Π = [0  1  2s  ···  (d−1)s^{d−2}].    (48c)
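The monomial basis (47c) and its derivative (48c) can be evaluated as in the following sketch, which also makes it easy to check the analytical derivative numerically; the function names are illustrative.

```python
import numpy as np

def spline_basis(s, d):
    """Monomial basis Pi = [1, s, s^2, ..., s^(d-1)], cf. (47c)."""
    return np.array([s**j for j in range(d)])

def spline_basis_deriv(s, d):
    """Derivative dPi/ds = [0, 1, 2s, ..., (d-1)s^(d-2)], cf. (48c)."""
    return np.array([j * s**(j - 1) if j > 0 else 0.0 for j in range(d)])
```

A central difference confirms that the analytical derivative agrees with the basis.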

In the models above, it is obvious that the center position (x, y) and the
velocity v of the target are linear in the motion and measurement models. The
heading ψ and the target shape state t are nonlinear, hence the state vector
may be partitioned as

    x^l = [x  y  v]^T,    (49a)
    x^n = [ψ  t]^T.    (49b)

Comparing (9d) and (47a), the linear measurement matrix is

    H = [1  0  0]    (50)
        [0  1  0]

Note that the mgps s are given by a one-dimensional position along the border,
and that they are highly nonlinear in the measurement model (47a), which
justifies the use of the particle filter.

A rectangular target is considered in this simulation. The shape state
consists of the length l and the width w. The vector Γ is

    Γ = [l  0  −l  −l  −l  0  l  l  l
         −w  −w  −w  0  w  w  w  0  −w]^T.    (51)
A spline of order d = 3 is considered. In the simulation the extended target
moves from right to left through the surveillance area. The orientation of the
target is 0 rad and the length and width are l = 5 m and w = 2.5 m,
respectively. The target trajectory and the surveillance region of the sensor
located at the origin are illustrated in Figure 2.

Figure 2: The trajectory used in the simulation. The target trajectory is
shown in black. The sensor is located at the origin; the surveillance area
boundary is shown with a dashed gray line.

As seen from the figure, the target starts far from the sensor and hence few
measurements are obtained initially. Neither the width nor the length of the
target is observable. As the target travels towards the sensor, the
observability of both the length and the width first increases, but when the
target gets close to the sensor the width observability is lost again, in
which case only the length of the target is visible. When the target passes by
the sensor the width becomes observable once more. As the distance between the
target and the sensor increases towards the end of the scenario, the
observability of both quantities decreases.

The position estimation results are shown in Figure 3. Clearly the algorithm
is able to follow the target along the x-axis. The shape estimates are
illustrated in Figure 4. As predicted, the shape estimates degrade as the
target gets further from the sensor. Though the variance of the estimates is
large at times, the algorithm is capable of keeping it at a smaller level than
the initial values. As the target approaches the sensor, the general trend is
that the variances decrease, though occasional increases also occur. Further
investigations with various initializations must be carried out, and the
stability and robustness must be evaluated more thoroughly in future work.

6 Conclusions

In this work a new approach to track extended targets has been presented.
Point measurements are produced by an unknown number of measurement generating
points (mgps) on the surface of the target. These mgps are modeled as a random
finite set (rfs) and used as a means to estimate the shape, the size and the
position of a target. The state of the target is propagated with a
Rao-Blackwellized particle filter (nonlinear part) and Kalman filters (linear
part); the measurements are processed with a gm-phd filter.
Figure 3: The position estimates (x and y) of the algorithm. The true
quantities are shown with grey lines. The mean x and y estimates are shown in
black, and their 4 standard deviation uncertainty, calculated from the
particles, is illustrated with grey clouds around the estimates.

Figure 4: The length and the width estimates of the algorithm. The true
quantities are shown with grey lines. The mean length and width estimates are
shown in black, and their 4 standard deviation uncertainty, calculated from
the particles, is illustrated with grey clouds around the estimates.

First simulation results show that this approach is promising. A simple
rectangular target is followed with rather stable performance, and it remains
for future work to estimate the shape of more complex targets. The next step
will be to validate the proposed method on more challenging real data
collected, e.g., on a road, where different types of shapes, such as
pedestrians and vehicles, are visible. The stability and robustness must also
be evaluated thoroughly with different initializations and settings.

Bibliography
D. Angelova and L. Mihaylova. Extended object tracking using Monte Carlo meth-
ods. IEEE Transactions on Signal Processing, 56(2):825–832, February 2008.
A. Blake and M. Isard. Active contours. Springer, London, UK, 1998.
Y. Boers, H. Driessen, J. Torstensson, M. Trieb, R. Karlsson, and F. Gustafsson.
Track-before-detect algorithm for tracking extended targets. IEE Proceedings
of Radar, Sonar and Navigation, 153(4):345–351, August 2006.
K. Gilholm and D. Salmond. Spatial distribution model for tracking extended
objects. IEE Proceedings of Radar, Sonar and Navigation, 152(5):364–371, Oc-
tober 2005.
K. Gilholm, S. Godsill, S. Maskell, and D. Salmond. Poisson models for extended
target and group tracking. In Proceedings of Signal and Data Processing of
Small Targets, volume 5913, pages 230–241, San Diego, CA, USA, August 2005.
SPIE.
K. Granström, C. Lundquist, and U. Orguner. A Gaussian mixture PHD filter for
extended target tracking. In Proceedings of the International Conference on
Information Fusion, Edinburgh, UK, July 2010.
M. Isard and A. Blake. CONDENSATION-conditional density propagation for
visual tracking. International Journal of Computer Vision, 29(1):5–28, August
1998.
M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. Inter-
national Journal of Computer Vision, 1(4):321–331, January 1988.
J. W. Koch. Bayesian approach to extended object and cluster tracking using
random matrices. IEEE Transactions on Aerospace and Electronic Systems, 44
(3):1042–1059, July 2008.
S. Lazarus, A. Tsourdos, B.A. White, P.M.G. Silson, and R. Zandbikowski. Air-
borne vehicle mapping of curvilinear objects using 2-D splinegon. IEEE Trans-
actions on Instrumentation and Measurement, 59(7):1941–1954, July 2010.
C. Lundquist, K. Granström, and U. Orguner. Estimating the shape of targets with
a PHD filter. In Proceedings of the International Conference on Information
Fusion, Chicago, IL, USA, July 2011.
R. P. S. Mahler. Multitarget Bayes filtering via first-order multitarget moments.
IEEE Transactions on Aerospace and Electronic Systems, 39(4):1152–1178, Oc-
tober 2003.
R. P. S. Mahler. Statistical Multisource-Multitarget Information Fusion. Artech
House, Boston, MA, USA, 2007.
R. P. S. Mahler. PHD filters for nonstandard targets, I: Extended targets. In
Proceedings of the International Conference on Information Fusion, pages 915–
921, Seattle, WA, USA, July 2009.
J. Mullane, B.-N. Vo, M. D. Adams, and B.-T. Vo. A random-finite-set approach
to Bayesian SLAM. IEEE Transactions on Robotics, PP(99):1–15, February 2011.
T. B. Schön, F. Gustafsson, and P.-J. Nordlund. Marginalized particle filters for
mixed linear/nonlinear state-space models. IEEE Transactions on Signal Pro-
cessing, 53(7):2279–2289, July 2005.
A. Srivastava and I. H. Jermyn. Looking for shapes in two-dimensional cluttered
point clouds. IEEE Transactions on Pattern Analysis and Machine Intelligence,
31:1616–1629, September 2009. ISSN 0162-8828.
J. Vermaak, N. Ikoma, and S. J. Godsill. Sequential Monte Carlo framework for
extended object tracking. IEE Proceedings of Radar, Sonar and Navigation, 152
(5):353–363, October 2005.
B.-N. Vo and W.-K. Ma. The Gaussian mixture probability hypothesis density
filter. IEEE Transactions on Signal Processing, 54(11):4091–4104, November
2006.
Paper G
Tire Radii and Vehicle Trajectory
Estimation Using a Marginalized
Particle Filter

Authors: Christian Lundquist, Emre Özkan and Fredrik Gustafsson

Edited version of the paper:


C. Lundquist, E. Özkan, and F. Gustafsson. Tire radii estimation using
a marginalized particle filter. IEEE Transactions on Intelligent Trans-
portation Systems, 2011d. Submitted.

The paper presents data that was previously published in:


E. Özkan, C. Lundquist, and F. Gustafsson. A Bayesian approach to
jointly estimate tire radii and vehicle trajectory. In Proceedings of the
IEEE Conference on Intelligent Transportation Systems, Washington
DC, USA, October 2011.

Preliminary version:
Technical Report LiTH-ISY-R-3029, Dept. of Electrical Engineering, Lin-
köping University, SE-581 83 Linköping, Sweden.
Tire Radii and Vehicle Trajectory Estimation
Using a Marginalized Particle Filter

Christian Lundquist, Emre Özkan and Fredrik Gustafsson

Dept. of Electrical Engineering,


Linköping University,
SE–581 83 Linköping, Sweden
[email protected], [email protected], [email protected]

Abstract
Measurements of individual wheel speeds and absolute position from
a global navigation satellite system (gnss) are used for high-precision
estimation of vehicle tire radii in this work. The radii deviation from
its nominal value is modeled as a Gaussian process and included as
noise components in a vehicle model. The novelty lies in a Bayesian
approach to estimate online both the state vector of the vehicle model
and noise parameters using a marginalized particle filter. No model
approximations are needed such as in previously proposed algorithms
based on the extended Kalman filter. The proposed approach out-
performs common methods used for joint state and parameter esti-
mation when compared with respect to accuracy and computational
time. Field tests show that the absolute radius can be estimated with
millimeter accuracy, while the relative wheel radius on one axle is
estimated with submillimeter accuracy.

1 Introduction
Tire pressure monitoring has become an integral part of today's automotive ac-
tive safety concept Velupillai and Guvenc (2007). With the announcement of the
us standard (fmvss 138) and the European standard (ece r-64), vehicle manufac-
turers must provide a robust solution to detect tire pressure loss early. A direct
way to measure the tire pressure is to equip the wheel with a pressure sensor and
transmit the information wirelessly to the vehicle body. In the harsh environment
that the tires are exposed to, e.g., water, salt, ice and heat, the sensors are error-
prone. Furthermore, the pressure sensors in each wheel are expensive and their
existence complicates wheel changes. Therefore, indirect solutions have been
introduced on the market lately, see e.g., Persson et al. (2001). In this paper an
indirect approach is presented where the tire radius is estimated simultaneously
with the vehicle trajectory. This is done under the assumption that there is a
relation between a reduction in tire radius and tire pressure.


The indirect approach presented in Persson et al. (2001) is only based on the
wheel speed sensors and it is shown how a tire pressure loss in one wheel leads
to a relative radii error between the wheels. In later publications gps measure-
ments have also been included to improve the radius estimation and even make it
possible to estimate the absolute radius of one tire. The effective tire radius is es-
timated using a simple least-squares regression technique in Miller et al. (2001).
A non-linear observer approach to estimate the tire radius is presented in Carl-
son and Gerdes (2005), and a second order sliding mode observer is used to esti-
mate the tire radius in M’sirdi et al. (2006); Shraim et al. (2006). A simultaneous
maximum likelihood calibration and sensor position and orientation estimation
approach for mobile robots is presented in Censi et al. (2008), where among other
parameters the wheel radii are estimated. An observer based fault detection al-
gorithm, which detects tire radius errors using yaw rate measurement and a bi-
cycle model of the vehicle, is described in Patwardhan et al. (1997). An extended
Kalman filter based approach is presented in Ersanilli et al. (2009), where the tire
radius is estimated via vertical chassis accelerations.

In the present contribution the difference between the nominal and the actual
value of the tire radius is modeled as a Gaussian stochastic process, where both
the mean and the covariance are unknown and time varying. The vehicle dynam-
ics and the measurements are described by a general state space model and the
noise statistics are treated as the unknown parameters of the model. Hence the
joint estimation of the state vector and the unknown model parameters based
on available measurements is required. Such a problem is hard to treat as both
the state estimation and the parameter estimation stages affect the performance
of the other. The structure of this nonlinear problem with biased and unknown
noise requires utilizing approximative algorithms. The particle filter (pf) pro-
vides one generic approach to non-linear non-Gaussian filtering. Probably the
most common way to handle a joint parameter and state estimation problem is
by augmenting the state vector with the unknown parameters and redefine the
problem as a filtering problem, see e.g., Liu and West (2001); Julier and Durrant-
Whyte (1995); Ersanilli et al. (2009). The approach has some major disadvantages
as it requires artificial dynamics for the static parameters and also leads to
an increase in the state dimension, which is not preferable for particle filters. This
is particularly important to stress in automotive applications, where the compu-
tational cost must be kept low. In this work, an efficient Bayesian method is pro-
posed for approximating the joint density of the unknown parameters and the
state based on the particle filters and the marginalization concept, introduced
in Saha et al. (2010). The statistics of the posterior distribution of the unknown
noise parameters is propagated recursively conditional on the nonlinear states of
the particle filter. An earlier version of this work was presented in Özkan et al.
(2011), and the current, significantly improved version also includes comparisons
with other methods.

The proposed method is implemented and tested on real vehicle data. The state
augmentation technique is also implemented and tested, both using the pf and
the extended Kalman filter (ekf) variants. Furthermore, a method where the
unknown noise enters through the measurement signal instead of, as proposed,
through the input signal is presented for comparison. The proposed method, the
two augmented state methods and the measurement noise method are compared with
respect to estimation accuracy and robustness. The proposed method outperforms
the competitors both in simulations and on real data.
The paper is outlined as follows. The problem is formulated together with the
vehicle model and with the description of the sensors in Section 2. The estimation
procedure and the filter approach is described in Section 3. The proposed method
is compared to common and similar methods described in the literature, and
these are summarized and put into the context in Section 4. Results based on real
data collected with a production type passenger car are presented in Section 5.
The work is concluded in Section 6.

2 Model
The aim of this work is to jointly estimate the vehicle trajectory and the unknown
tire radius errors based on the available measurements in a Bayesian framework.
Furthermore, the aim is to find appropriate models that can be recast into the
general state space model
xk+1 = f (xk , uk ) + g(xk , uk )wk , (1a)
yk = h(xk , uk ) + d(xk , uk )ek . (1b)
This model suits our noise parameter marginalized particle filter framework.
A four wheeled vehicle model, which is used to estimate the unknown variables
given the velocity and the position measurements, is introduced in this section.
More specifically, the angular velocities of the wheels and gps positions are con-
sidered as the inputs and the measurements, respectively. It is assumed that the
unknown wheel radii affect the vehicle state trajectory through the wheel speed
sensors. The state vector is defined as the planar position and the heading angle
of the vehicle,
    x = [x  y  ψ]^T.    (2)
The discrete time motion model for the evolution of the state is given as,
xk+1 = xk + T vk cos ψk (3a)
yk+1 = yk + T vk sin ψk (3b)
ψk+1 = ψk + T ψ̇k , (3c)
where vk is the vehicle longitudinal velocity, ψ̇k is the yaw rate of the vehicle and
T is the sampling time.
Furthermore, the angular velocities of the wheels ωi , i = 1 . . . 4, which are mea-
sured by the abs sensors at each wheel, can be converted to virtual measurements
of the absolute longitudinal velocity and yaw rate as described in Gustafsson et al.
(2001); Gustafsson (2010a). Assuming a front wheel driven vehicle with slip-free

Figure 1: The notation of the vehicle variables.

motion of the rear wheels, the virtual measurements become

    v^{virt} = (ω₃ r + ω₄ r) / 2,    (4a)
    ψ̇^{virt} = (ω₄ r − ω₃ r) / l_t,    (4b)
where r is the nominal value of the wheel radius and l_t is the wheel track;
see Figure 1 for the notation. In practice, the actual tire radii differ from
their nominal value. They are unknown and need to be estimated on the run. The
wheel radius errors are defined as the differences between the actual and the
nominal values of the rear left and right wheel radii, δ₃ ≜ r₃ − r and
δ₄ ≜ r₄ − r, respectively. The actual tire radii are

    r₃ = r + δ₃,    (5a)
    r₄ = r + δ₄.    (5b)
Multiplying r₃ with ω₃ and r₄ with ω₄ in equations (4) to derive the actual
longitudinal velocity v and yaw rate ψ̇ results in

    v = (ω₃ r₃ + ω₄ r₄) / 2 = v^{virt} + ω₃ δ₃ / 2 + ω₄ δ₄ / 2,    (6a)
    ψ̇ = (ω₄ r₄ − ω₃ r₃) / l_t = ψ̇^{virt} + ω₄ δ₄ / l_t − ω₃ δ₃ / l_t.    (6b)
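The decomposition in (6) can be verified numerically with a small sketch; the numerical values below are illustrative and not taken from the experiments.

```python
# Illustrative values only: nominal radius r [m], wheel track lt [m],
# rear wheel angular speeds [rad/s] and radius errors [m].
r, lt = 0.3, 1.5
w3, w4 = 50.0, 52.0
d3, d4 = -0.002, 0.001

# Virtual measurements (4), based on the nominal radius
v_virt = (w3 * r + w4 * r) / 2
psidot_virt = (w4 * r - w3 * r) / lt

# Actual quantities, using r3 = r + d3 and r4 = r + d4 from (5)
v = (w3 * (r + d3) + w4 * (r + d4)) / 2
psidot = (w4 * (r + d4) - w3 * (r + d3)) / lt

# The decomposition (6) holds exactly
assert abs(v - (v_virt + w3 * d3 / 2 + w4 * d4 / 2)) < 1e-12
assert abs(psidot - (psidot_virt + w4 * d4 / lt - w3 * d3 / lt)) < 1e-12
```

Note how even millimeter-level radius errors perturb both the velocity and the yaw rate, which is what makes them estimable from position measurements.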
Substituting these into the motion model (3) gives

    x_{k+1} = x_k + T (v_k^{virt} + ω₃ δ₃,k / 2 + ω₄ δ₄,k / 2) cos ψ_k,    (7a)
    y_{k+1} = y_k + T (v_k^{virt} + ω₃ δ₃,k / 2 + ω₄ δ₄,k / 2) sin ψ_k,    (7b)
    ψ_{k+1} = ψ_k + T (ψ̇_k^{virt} + ω₄ δ₄,k / l_t − ω₃ δ₃,k / l_t),    (7c)

where the input vector u only consists of the wheel speeds, i.e.,

    u = [ω₃  ω₄]^T,    (8)

since v_k^{virt} and ψ̇_k^{virt} are functions of the wheel speeds.

The motion model (7) can now be written in the form (1a), with

                  [ x_k + T v_k^{virt} cos ψ_k ]
    f(x_k, u_k) = [ y_k + T v_k^{virt} sin ψ_k ],    (9a)
                  [ ψ_k + T ψ̇_k^{virt}        ]

                  [ T ω₃ cos ψ_k / 2    T ω₄ cos ψ_k / 2 ]
    g(x_k, u_k) = [ T ω₃ sin ψ_k / 2    T ω₄ sin ψ_k / 2 ],    (9b)
                  [ −T ω₃ / l_t         T ω₄ / l_t       ]

where the process noise term w_k is defined as

    w_k = [δ₃,k]  ∼ N(μ_k, Σ_k) = N( [μ₃,k] , [Σ₃,k    0  ] ).    (9c)
          [δ₄,k]                     [μ₄,k]   [  0   Σ₄,k]

The process noise, or the tire radius error, is described by two parameters,
the mean value μ and the covariance Σ, for the left and the right wheel.
Intuitively, the mean value corresponds to the slow time variations from the
nominal tire radius, while the variance corresponds to the fast variations due
to tire vibrations. One interpretation is that the mean value μ models the
change in the wheel radii due to abrasion, tire pressure changes and effects
of cornering, and the covariance Σ can account for the vibrations arising from
an uneven road surface. An interesting special case is when Σ₃ = Σ₄, which
represents homogeneous road conditions, in comparison with a split road
surface, for which Σ₃ ≠ Σ₄.
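One prediction step of the model (1a) with f and g from (9) can be sketched as follows; the sample time, nominal radius and wheel track below are illustrative values, not the parameters of the test vehicle.

```python
import numpy as np

def predict_state(x, u, w, T=0.1, r=0.3, lt=1.5):
    """One step of the motion model (1a), with f and g from (9).

    x : state [x, y, psi]; u : wheel speeds (w3, w4); w : radius errors (d3, d4).
    T, r and lt are illustrative values of the sample time, nominal tire
    radius and wheel track.
    """
    w3, w4 = u
    v_virt = (w3 * r + w4 * r) / 2
    psidot_virt = (w4 * r - w3 * r) / lt
    px, py, psi = x
    f = np.array([px + T * v_virt * np.cos(psi),
                  py + T * v_virt * np.sin(psi),
                  psi + T * psidot_virt])
    g = np.array([[T * w3 * np.cos(psi) / 2, T * w4 * np.cos(psi) / 2],
                  [T * w3 * np.sin(psi) / 2, T * w4 * np.sin(psi) / 2],
                  [-T * w3 / lt,             T * w4 / lt]])
    return f + g @ np.asarray(w)
```

With equal wheel speeds, a common radius error on both rear wheels shifts the predicted position but leaves the heading unchanged, consistent with (7c).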
The measurement model (1b) defines the relation between the gps position and
the state variables as follows,

    y_k = [x_k^{GPS}  y_k^{GPS}]^T,    (10a)

    h(x_k, u_k) = h(x_k) = [1  0  0] x_k,    (10b)
                           [0  1  0]

    d(x_k, u_k) = I,    (10c)

where I is the identity matrix and the measurement noise is assumed to be Gaus-
sian with zero mean and known constant covariance R, i.e.,

    e_k = [e_x  e_y]^T ∼ N(0, R),    R = Σ^{GPS} I₂.    (10d)
Other sensor measurements are also plausible to include in the measurement
vector, at the cost of a more complex model. For instance, using a yaw rate
gyro to measure the third state also requires modeling the drifting offset of
the sensor. The steering angle can also be converted to a yaw rate, but it
suffers from dynamic lag and depends on other dynamic states of the vehicle.
The unknown parameters θ are the radii error biases and the covariances,

    θ_k ≜ {μ₃,k, μ₄,k, Σ₃,k, Σ₄,k}.    (11)
The unknown parameters are all subject to change in time. The underlying evo-
lution models for the mean and the covariance, i.e., p(μ_k | μ_{k-1}) and
p(Σ_k | Σ_{k-1}), are also unknown. We later apply a forgetting factor to the
statistics to account for this time variation.
In the following section the joint estimation of the unknown parameters and the
state is described in a Bayesian framework.

3 Parameter and State Estimation


This section focuses on the evaluation of the joint density p(xk , θk |y1:k ) of the state
variable xk defined in (2) and the parameters θk defined in (11), conditioned on
all measurements y1:k from time 1 to k. In order to simplify the calculations, the
target density is decomposed into conditional densities as follows
p(xk , θk |y1:k ) = p(θk |x0:k , y1:k )p(x0:k |y1:k ). (12)
The resulting two densities are estimated recursively. The implementation of the
estimation algorithm is described in three steps. In Section 3.1 we define the
approximations made to derive the state trajectory p(x0:k |y1:k ) and in Section 3.2
we describe the estimation of the sufficient statistics of the parameter distribution
p(θk |x0:k , y1:k ). The predicted trajectory is derived conditional on the parameter
estimates and the estimation of the joint density and the marginal density of the
states p(x0:k |y1:k ) is finalized in Section 3.3.

3.1 State Trajectory


The state trajectory p(x_{0:k} | y_{1:k}) is approximated by an empirical density of N par-
ticles as follows,

    p(x_{0:k} | y_{1:k}) ≈ \sum_{i=1}^N w_k^{(i)} δ(x_{0:k} − x_{0:k}^{(i)}),    (13)

where x_{0:k}^{(i)} is a trajectory sample and w_k^{(i)} is its related weight. The approximated
state trajectory (13) is propagated with a particle filter, using the sequential
importance sampling scheme. In this scheme, at any time k, first the samples,
which are also denoted by particles, are generated from a proposal distribution
q(x_k | x_{0:k-1}^{(i)}, y_{1:k}) by using the particles from time k − 1. Then the weight update
step follows the sampling step. The weights are updated according to

    w_k^{(i)} ∝ \frac{p(y_k | x_k) \, p(x_k | x_{0:k-1}, y_{1:k-1})}{q(x_k | x_{0:k-1}, y_{1:k})} \, w_{k-1}^{(i)}.    (14a)
In the sampling step one can use the state prediction density as the proposal den-
sity while generating the samples. The posterior distribution of the unknown pa-
rameters can be used in computing the state prediction density p(xk |x0:k−1 , y1:k−1 )
according to
    p(x_k | x_{0:k-1}, y_{1:k-1}) = \int p(x_k | x_{k-1}, θ) \, p(θ | x_{0:k-1}, y_{1:k-1}) \, dθ.    (15)

The estimation of the parameters is described in the next section, and we will
therefore return to this equation in Section 3.3 and give the final expression.

3.2 Parameter Estimation


In the factorization of the joint density given in equation (12), the distribution
of the unknown parameters (which corresponds to the first term) is computed
conditional on the realization of the state trajectory and the measurements. For
a specific realization of the trajectory, the posterior density can also be written
conditional on the realization of the noise terms
    p(θ_k | x_{0:k}^{(i)}, y_{1:k}) = p(θ_k | w_{0:k}^{(i)}).    (16)
It can be decomposed into a likelihood function and a prior according to Bayes'
rule,

    p(θ_k | w_{0:k}) ∝ p(w_k | θ_k) \, p(θ_k | w_{0:k-1}).    (17)
The likelihood function p(wk |θk ) is assumed to be multivariate Gaussian in this
work, as previously mentioned in (9c), and the mean µk and the covariance Σk
are considered unknown parameters θ. In this case a Normal-inverse-Wishart
distribution defines a conjugate prior¹ p(θ_k | w_{0:k-1}). The Normal-inverse-Wishart
distribution defines a hierarchical Bayesian model given as

    w_k | μ_k, Σ_k ∼ N(μ_k, Σ_k),    (18a)
    μ_k | Σ_k ∼ N(μ̂_{k|k}, γ_{k|k} Σ_k),    (18b)
    Σ_k ∼ iW(ν_{k|k}, Λ_{k|k})    (18c)
       ∝ |Σ_k|^{−(ν+d+1)/2} exp(−½ Tr(Λ_{k|k} Σ_k^{−1})),    (18d)
where iW( · ) denotes the inverse Wishart distribution and d denotes the dimen-
sion of the noise vector w_k. The statistics S_{w,k} ≜ {γ_k, μ̂_k, ν_k, Λ_k} can,
according to Peterka (1981); Kárný (2006), be recursively updated as follows.
The measurement update is

    γ_{k|k} = γ_{k|k-1} / (1 + γ_{k|k-1}),    (19a)
    μ̂_{k|k} = μ̂_{k|k-1} + γ_{k|k} (w_k − μ̂_{k|k-1}),    (19b)
    ν_{k|k} = ν_{k|k-1} + 1,    (19c)
    Λ_{k|k} = Λ_{k|k-1} + (μ̂_{k|k-1} − w_k)(μ̂_{k|k-1} − w_k)^T / (1 + γ_{k|k-1}),    (19d)

where the statistics of the predictive distributions are given by the time
update step according to

    γ_{k|k-1} = γ_{k-1|k-1} / λ,    (20a)
    μ̂_{k|k-1} = μ̂_{k-1|k-1},    (20b)
    ν_{k|k-1} = λ ν_{k-1|k-1},    (20c)
    Λ_{k|k-1} = λ Λ_{k-1|k-1}.    (20d)
1 A family of prior distributions is conjugate to a particular likelihood function if the posterior
distribution belongs to the same family as the prior.
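The recursions (19) and (20) can be sketched as follows; the tuple representation of the statistics is an implementation choice made here for illustration.

```python
import numpy as np

def niw_time_update(stats, lam):
    """Time update (20) of the NiW statistics with forgetting factor lam."""
    gamma, mu, nu, Lam = stats
    return gamma / lam, mu, lam * nu, lam * Lam

def niw_meas_update(stats, w):
    """Measurement update (19) with a process noise realization w."""
    gamma, mu, nu, Lam = stats
    gamma_new = gamma / (1.0 + gamma)                  # (19a)
    mu_new = mu + gamma_new * (w - mu)                 # (19b)
    nu_new = nu + 1.0                                  # (19c)
    diff = (mu - w).reshape(-1, 1)
    Lam_new = Lam + (diff @ diff.T) / (1.0 + gamma)    # (19d)
    return gamma_new, mu_new, nu_new, Lam_new
```

With λ = 1 (no forgetting) and a constant noise realization, μ̂ converges to that value, as expected for a conjugate update.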

The scalar real number 0 ≤ λ ≤ 1 is the forgetting factor, which helps in the
estimation of the time-varying parameters. The statistics rely roughly on the
measurements within the last h = 1/(1 − λ) time instances. This allows the
algorithm to adapt to changes in the noise statistics over time. Such an
approach is appropriate when the unknown parameters are slowly varying and
the underlying parameter evolution is unknown.

3.3 Noise Marginalization


Let us now return to the state prediction in (15). One important advantage of
using the conjugate priors reveals itself here as it is possible to integrate out the
unknown noise parameters, as they follow a Normal-inverse-Wishart distribution.
The integrand in (15) is the product of a Gaussian distribution and a NiW dis-
tribution and the result of the integral is a Student-t distribution which can be
evaluated analytically. The predictive distribution of wk becomes a multivariate
Student-t density, according to
    p(w_k | Λ_{k-1}, ν_{k-1}) = St(μ̂_{k|k-1}, Λ_{k|k-1}, ν_{k|k-1} − d + 1)    (21a)

                             ∝ \Big(1 + \frac{1}{ν_{k|k-1} − d + 1} w̃_k^T Λ_{k|k-1}^{−1} w̃_k\Big)^{−(ν_{k|k-1}+1)/2},    (21b)

with w̃_k ≜ w_k − μ̂_{k|k-1} and with ν_{k|k-1} − d + 1 degrees of freedom. The Student-t dis-
tribution is located at μ̂_{k|k-1} with the scale parameter Λ_{k|k-1}, and these statistics
are given in (20). Furthermore, if the state transition density p(x_k | x_{0:k-1}, y_{1:k-1})
is used as the proposal distribution q(x_k | x_{0:k-1}^{(i)}, y_{1:k}), then the weight update
equation (14) reduces to

    w_k^{(i)} = w_{k-1}^{(i)} p(y_k | x_k^{(i)}).    (22)
In the implementation, the noise is first sampled from (21) and used in (1a) in
order to create the samples xk^(i). The samples from (21) can be used directly in
the statistics update (19). The pseudo code of the algorithm used in the
simulations is given in Algorithm 15.
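A standard way to draw from the multivariate Student-t in (21) is to scale a Gaussian draw by a chi-square mixing variable. A minimal NumPy sketch, assuming the scale matrix Λ is positive definite (names illustrative):

```python
import numpy as np

def sample_student_t(mu, Lambda, dof, rng):
    """Draw one sample from the multivariate Student-t St(mu, Lambda, dof)
    used in (21): a Gaussian draw scaled by a chi-square mixing variable."""
    d = mu.shape[0]
    L = np.linalg.cholesky(Lambda)      # Lambda assumed positive definite
    z = rng.standard_normal(d)          # standard Gaussian direction
    u = rng.chisquare(dof)              # mixing variable
    return mu + (L @ z) * np.sqrt(dof / u)
```

Usage: `rng = np.random.default_rng(0); w = sample_student_t(np.zeros(2), np.eye(2), 5.0, rng)`.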

In the proposed method, each particle i keeps its own estimate of the parameters
θ^(i) of the unknown process noise. In the importance sampling step, the particles
use their own posterior distribution of the unknown parameters, and the weight
update of the particles is made according to the measurement likelihood. The
particles thus carry the unknown parameters, and those that best explain the
observed measurement sequence survive over time.

The marginal posterior density of the unknown parameters can be computed by
integrating out the states in the joint density

    p(θ | y1:k) = ∫ p(θ | x0:k, y1:k) p(x0:k | y1:k) dx0:k
                ≈ Σ_{i=1}^N ωk^(i) p(θ | x0:k^(i), y1:k).                          (23)
Then the estimate of the unknown parameters can be computed according to a
chosen criterion. As an example, according to the minimum mean square error
(mmse) criterion, the noise mean and covariance estimates at time k are
computed as

    µ̂k = Σ_{i=1}^N ωk^(i) µ̂k^(i)                                                 (24a)
    Σ̂k = Σ_{i=1}^N ωk^(i) (νk^(i) − d + 1)/(νk^(i) − d − 1) Λk^(i),               (24b)

where the weights are inherited from the particles.

Algorithm 15 Pseudo Code of the Algorithm

Initialization:
 1: for each particle i = 1, . . . , N do
 2:   Sample w0^(i) from (21)
 3:   Compute x0^(i) from (1a)
 4:   Set initial weights ω0^(i) = 1/N
 5:   Set initial noise statistics S_{w,0}^(i) corresponding to each particle
 6: end for
Iterations:
 7: for k = 1, 2, . . . do
 8:   for each particle i = 1, . . . , N do
 9:     Predict noise statistics S_{w,k|k−1}^(i) using (20)
10:     Sample w_{k|k−1}^(i) from (21)
11:     Compute x_{k|k−1}^(i) from (1a)
12:     Update the weights, ω̃k^(i) = ωk−1^(i) p(yk | xk^(i))
13:     Update noise statistics S_{w,k}^(i) using (19)
14:   end for
15:   Normalize the weights, ωk^(i) = ω̃k^(i) / Σ_{j=1}^N ω̃k^(j)
16:   Compute Neff = 1 / Σ_{i=1}^N (ωk^(i))^2
17:   If Neff ≤ η, resample the particles
18:   Compute the state estimate x̂k = Σ_{i=1}^N ωk^(i) xk^(i)
19:   Compute the parameter estimates using (24)
20: end for
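The estimates in (24) amount to weighted sums over the particle set. A minimal NumPy sketch (names illustrative, not from the authors' code):

```python
import numpy as np

def mmse_noise_estimates(weights, mus, nus, Lambdas, d):
    """Weighted mmse estimates (24) over N particles.
    weights: (N,), mus: (N, d), nus: (N,), Lambdas: (N, d, d)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mus = np.asarray(mus, dtype=float)
    nus = np.asarray(nus, dtype=float)
    Lambdas = np.asarray(Lambdas, dtype=float)
    mu_hat = np.einsum('i,ij->j', w, mus)                       # (24a)
    factors = (nus - d + 1.0) / (nus - d - 1.0)                 # Student-t covariance factor
    Sigma_hat = np.einsum('i,ijk->jk', w * factors, Lambdas)    # (24b)
    return mu_hat, Sigma_hat
```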

4 Models for Comparison

In order to evaluate the performance of the proposed method, it will be compared
with two other methods, which are described in this section, followed by the
results and discussion in Section 5. The three methods are based on the same
type of information and model. All of them can be described by the general state
space model (1); however, the order and the definition of the model components
and variables vary between them. The approach presented in the preceding
sections will be denoted marginalized particle filter process noise (mpf-pn)
estimation.

A model where the state vector is augmented with the parameters is described in
Section 4.1, followed by a model where the parameters appear in the measurement
noise, instead of in the process noise, in Section 4.2. The augmented state
space model in Section 4.1 can be implemented using both a pf and a Kalman
filter, i.e., in total four different filter implementations to compare. The
characteristics of the four methods are summarized in Section 4.3. Finally,
Section 4.4 discusses the velocity measurements, which are used as inputs to the
system.

4.1 Augmented State Vector

One common approach in joint state and parameter estimation is to augment the
state vector with the unknown parameters (Liu and West, 2001). In this case the
state vector is defined as

    x = [x  y  ψ  µ3  µ4  Σ3  Σ4]^T.                                               (25)

The motion model (7) contains the variables δ3 and δ4, and in order to express
the motion model using the state variables the relation in (9c) is considered,
i.e., δ3 = µ3 + √Σ3 N(0, 1), and analogously for δ4. The state space model can
again be written in the general form (1) with the components according to
    f(xk, uk) = [ xk + T (vk^virt + (ω3 µ3,k + ω4 µ4,k)/2) cos ψk
                  yk + T (vk^virt + (ω3 µ3,k + ω4 µ4,k)/2) sin ψk
                  ψk + T (ψ̇k^virt − (ω3 µ3,k − ω4 µ4,k)/lt)
                  µ3,k
                  µ4,k
                  0
                  0 ]                                                              (26a)

    g(xk, uk) = [ (T ω3 √Σ3,k / 2) cos ψk   (T ω4 √Σ4,k / 2) cos ψk   0    0
                  (T ω3 √Σ3,k / 2) sin ψk   (T ω4 √Σ4,k / 2) sin ψk   0    0
                  −T ω3 √Σ3,k / lt          T ω4 √Σ4,k / lt           0    0
                  0                         0                         I2   0
                  0                         0                         0    I2 ]    (26b)

    wk = [wn^T  wµ,k^T  wΣ,k^T]^T                                                  (26c)

where wn = N(0, I2). The process noise wµ,k is zero mean Gaussian noise, i.e.,

    wµ = N(0, Q).                                                                  (26d)
In order to preserve positive definiteness, the following Markovian model with
inverse-Gamma distributions is used to propagate the unknown variances

    p(Σ3,k | Σ3,k−1) = iΓ(α3,k, β3,k)                                              (27a)
    p(Σ4,k | Σ4,k−1) = iΓ(α4,k, β4,k).                                             (27b)

The parameters α and β are chosen such that the mean value is preserved and the
standard deviation is equal to 5 percent of the previous value of the parameter:

    E{Σ3,k | Σ3,k−1} = Σ3,k−1                                                      (28a)
    E{Σ4,k | Σ4,k−1} = Σ4,k−1                                                      (28b)
    Std{Σ3,k | Σ3,k−1} = 0.05 Σ3,k−1                                               (28c)
    Std{Σ4,k | Σ4,k−1} = 0.05 Σ4,k−1 .                                             (28d)
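The moment conditions (28) fix α and β in closed form: for iΓ(α, β), the mean is β/(α − 1) and the variance is mean²/(α − 2), so a relative standard deviation of 5 percent gives α = 2 + 1/0.05² = 402 and β = 401 times the previous value. A small sketch (names illustrative):

```python
def inverse_gamma_params(mean, rel_std=0.05):
    """Choose (alpha, beta) of iGamma so that E = mean and
    Std = rel_std * mean, matching (28).
    For iGamma(alpha, beta): E = beta/(alpha - 1), Var = E**2/(alpha - 2)."""
    alpha = 2.0 + 1.0 / rel_std ** 2   # alpha - 2 = 1/rel_std**2
    beta = mean * (alpha - 1.0)        # preserves the mean
    return alpha, beta
```

For example, with a previous value of 10⁻⁴ this gives α = 402 and β = 401 · 10⁻⁴.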
Finally, the components of the measurement model (1b) are

    yk = [xk^GPS  yk^GPS]^T                                                        (29a)
    h(xk, uk) = h(xk) = [ 1 0 0 0 0 0 0
                          0 1 0 0 0 0 0 ] xk                                       (29b)
    d(xk, uk) = d = I2                                                             (29c)

and the measurement noise is zero mean with constant covariance, i.e.,

    ek = [ex  ey]^T = N(0, R),   R = ΣGPS I2.                                      (29d)
The input signals are the same as defined in (8). Since the unknown parameters
are included in the state vector, there is no need to marginalize the noise
parameters, and a standard particle filter is used to estimate the state. This
method will be referred to as the augmented particle filter (aug-pf). The
filtering problem can also be solved with nonlinear Kalman filter (kf) based
algorithms, such as the extended kf (ekf) or the unscented kf (ukf), and this
method will be referred to as the augmented Kalman filter (aug-kf).

4.2 Measurement Noise Estimation
In the fourth method the bias terms are assumed to appear in the noise again,
but in this case they appear in the measurement noise instead of in the process
noise as in Section 2. Hence, this method is referred to as marginalized
particle filter measurement noise (mpf-mn). The motion model is written as in
(3), but since the radii error now appears in the measurement model, the state
vector must be augmented with the yaw rate and velocity, i.e.,

    x = [x  y  ψ  ψ̇  v]^T.                                                        (30)
The motion model in (3) is augmented with the two additional states, and the
components of the motion model (1a) become

    f(xk, uk) = f(xk) = [ xk + T vk cos ψk
                          yk + T vk sin ψk
                          ψk + T ψ̇k
                          ψ̇k
                          vk ]                                                     (31a)

    g(xk, uk) = g(xk) = [ 0  0
                          0  0
                          0  0
                          T  0
                          0  T ]                                                   (31b)

with the zero mean process noise w with constant covariance

    wk = [wψ̇  wv]^T = N(0, Q),   Q = diag(σδ3², σδ4²).                            (31c)
The measurement model is now nonlinear and the components in (1b) are

    yk = [x^gps  y^gps  ψ̇k^virt  vk^virt]^T                                       (32a)
    h(xk, uk) = h(xk) = [x  y  ψ̇  v]^T                                            (32b)
    d(xk, uk) = d(uk) = [ 1  0  0         0
                          0  1  0         0
                          0  0  −ω3/lt    ω4/lt
                          0  0  ω3/2      ω4/2 ]                                   (32c)

where the noise is

    ek = [ex  ey  δ3,k  δ4,k]^T = N(µk, Σk)                                        (32d)
    µk = [0  0  µ3,k  µ4,k]^T                                                      (32e)
    Σk = diag([Σx,k, Σy,k, Σ3,k, Σ4,k]).                                           (32f)
In this model the process noise parameters are known, but the measurement noise
parameters are to be estimated. The same approach described in Section 3 also

Table 1: Summary of Estimation Methods

    Method            ωi   ψ̇virt, vvirt   ψ̇, v   θ   nx   ny   nu
    mpf-pn            u    u              -      w   3    2    2
    aug-pf / aug-kf   u    u              -      x   7    2    2
    mpf-mn            u    y              x      e   5    4    0

applies to this model. At the sampling stage, the state prediction distribution
can be used as the importance distribution. Then the weight update equation
simplifies to

    ωk^(i) = ωk−1^(i) p(yk | x0:k^(i), y1:k−1),                                    (33)

analogously to equation (22). The likelihood can be computed by marginalizing
the unknown noise parameters, as was done for the process noise in equation (15),

    p(yk | x0:k, y1:k−1) = ∫ p(yk | xk, θ) p(θ | x0:k−1, y1:k−1) dθ.               (34)

The resulting distribution is again a Student-t distribution, whose parameters
can be computed by conditioning on the realization of the measurement noise
terms according to

    ek^(i) = d†(xk^(i), uk) (yk − h(xk^(i), uk)),                                  (35)

where † denotes the pseudo-inverse. The sufficient statistics update equations
follow the same pattern as before.
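The noise realization (35) and the predictive Student-t weight evaluation can be sketched as follows; the log-density is the standard multivariate-t form with location µ, scale Λ and the degrees of freedom as in (21), and the code is an illustration, not the authors' implementation.

```python
import math
import numpy as np

def noise_realization(d_mat, y, h_val):
    """Measurement-noise realization (35): e = d_dagger (y - h)."""
    return np.linalg.pinv(d_mat) @ (y - h_val)

def log_student_t(e, mu, Lambda, dof):
    """Log-density of a multivariate Student-t with location mu,
    scale Lambda and dof degrees of freedom (cf. (21))."""
    d = e.shape[0]
    diff = e - mu
    quad = float(diff @ np.linalg.solve(Lambda, diff))
    _, logdet = np.linalg.slogdet(Lambda)
    return (math.lgamma(0.5 * (dof + d)) - math.lgamma(0.5 * dof)
            - 0.5 * d * math.log(dof * math.pi) - 0.5 * logdet
            - 0.5 * (dof + d) * math.log1p(quad / dof))
```

As a sanity check, for d = 1, µ = 0, Λ = 1 and one degree of freedom this reduces to the Cauchy density 1/π at the origin.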

4.3 Summarizing the Four Methods
The four methods are summarized in Table 1, which describes how each variable is
treated in the model. A variable can be an input u, an output (measurement) y, a
state x, process noise w or measurement noise e. The dimensions of the state
vector, the input vector and the measurement vector are nx, nu and ny,
respectively. Note that the proposed method, mpf-pn, is the one with the
smallest state dimension.

4.4 Wheel Velocity
The principle of a rotational speed sensor is that a toothed wheel is attached
to the rotating shaft. The teeth are also referred to as cogs, and the number of
cogs is denoted Ncog. A magnet attached to one side causes a variation in the
magnetic field that can be sensed by a Hall sensor. The variation is converted
into a square wave signal with constant amplitude, where each edge corresponds
to one edge of the toothed wheel.
The time between two or more edges is then registered and converted into
angular speed as follows. The angle between two adjacent teeth is 2π/Ncog. The
time when tooth ℓ is passing is denoted τℓ, and the corresponding angle is
denoted ϕℓ = ℓ · 2π/Ncog. The

angular speed ω(kT) at time kT, where T is the sampling time and k is the time
step number, can now be approximated as

    ω̂(kT) = 2π (ℓk − ℓk−1) / (Ncog (τℓk − τℓk−1)).                                (36)

Here, τℓk denotes the time when the last cog ℓk passed before time τ = kT. Note
that the angle can be recovered by summing ω̂(kT) over time.
In the motion model (7) the angular speed ω is multiplied by the sample time T,
which gives an angle. However, since the sensor initially measures the number of
cogs passed in a certain time according to equation (36), there is no need to
transform it into an angular velocity. It is more efficient to instead use

    T ωk = 2π (ℓk − ℓk−1) / Ncog,                                                  (37)

where ℓk − ℓk−1 is the number of cogs passed during the sampling time T. More
details about sampling and quantization effects in wheel speed sensors are
described in Gustafsson (2010b).
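Equations (36) and (37) can be sketched as follows; the cog count Ncog = 48 is an illustrative value, not the one used in the test vehicle.

```python
import math

N_COG = 48  # illustrative cog count; the actual sensor value differs

def angular_speed(l_now, l_prev, tau_now, tau_prev, n_cog=N_COG):
    """Angular speed estimate (36) from cog indices and passage times."""
    return 2.0 * math.pi * (l_now - l_prev) / (n_cog * (tau_now - tau_prev))

def incremental_angle(l_now, l_prev, n_cog=N_COG):
    """Incremental angle T*omega_k (37): uses the cog count directly,
    avoiding the detour over an angular velocity."""
    return 2.0 * math.pi * (l_now - l_prev) / n_cog
```

For example, one full revolution (48 cogs) observed over 0.1 s gives ω ≈ 62.8 rad/s and an incremental angle of 2π.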

5 Results
In the experiments, the measurements were collected with a passenger car
equipped with standard vehicle sensors, such as wheel speed sensors, and a gps
receiver, see Figure 2. The vehicle is further equipped with an additional, more
accurate imu besides the standard imu already mounted in the car, and an optical
velocity sensor. These two additional sensors were used to calibrate the setup,
but were not used to produce the results presented.
In regions where the car moves at low velocities, the steering wheel angle
measurement was utilized as follows, in order to avoid quantization problems of
the wheel cogs:

    xk+1 = xk + T (vk^virt + ω3 δ3 / 2 + ω4 δ4 / 2) cos ψk                         (38a)
    yk+1 = yk + T (vk^virt + ω3 δ3 / 2 + ω4 δ4 / 2) sin ψk                         (38b)
    ψk+1 = ψk + T (ψ̇k^virt + ω4 δ4 / lt − ω3 δ3 / lt)          if v > γ
           ψk + T δF (vk^virt + ω3 δ3 / 2 + ω4 δ4 / 2) / lb    if v < γ            (38c)
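The switching yaw update (38c) can be sketched as below; the threshold γ, steering angle δF and the geometry parameters lt, lb are passed in as parameters, and the radius-error terms ω3δ3, ω4δ4 are collected in `corr`. This is a minimal illustration, not the paper's implementation.

```python
def yaw_update(psi, T, psi_dot_virt, v_virt, delta_F, corr, v, gamma, l_t, l_b):
    """Yaw propagation (38c): below the speed threshold gamma, the
    steering angle delta_F replaces the cog-based virtual yaw rate.
    corr = (omega3*delta3, omega4*delta4) are the radius-error terms."""
    c3, c4 = corr
    if v > gamma:
        # normal operation: virtual yaw rate from the wheel speeds
        return psi + T * (psi_dot_virt + c4 / l_t - c3 / l_t)
    # low speed: bicycle-model-like update from the steering angle
    return psi + T * delta_F * (v_virt + c3 / 2.0 + c4 / 2.0) / l_b
```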

The gps measurements of the 12 km test round are shown as a red solid line in
Figure 3, overlaid by the estimated trajectory, which is black-white dashed. The
photo is a very accurate aerial photo (obtained from the Swedish mapping,
cadastral and land registration authority), which can be used as ground truth to
visualize the quality of the trajectory estimate. The round took about 18 min to
drive and it starts and ends in the urban area of Linköping, in the upper right
corner of Figure 3. The test vehicle drives clockwise, first on a small rural

Figure 2: The test vehicle of Linköping University is logging standard can
data. The vehicle is in addition equipped with a gps receiver, an imu and an
optical velocity sensor.

road, and then, on the left side of the figure, entering a straight national
highway, before driving back to the urban area at the top of the figure. The
test was performed twice, first with balanced tire pressure and thereafter with
unbalanced tire pressure, where the pressure of the rear left tire was released
to approximately 50% of the right tire's pressure.
For the first round the pressure of the rear wheel tires was adjusted to be
equal, 2.8 bar on both tires. The estimated parameters, i.e., the mean and the
covariance for the left and the right wheel, are shown in Figures 4a and 4b,
respectively. It is clearly visible that the radius error is similar for the
left and the right wheel, and all the methods perform well in estimating the
mean values µ3 and µ4 except the mpf-mn method, which makes a jump around time
t = 540. The performance of the methods differs in estimating the radius error
difference between the left and right wheels, which is shown in Figure 5. The
proposed method, mpf-pn, performs best among the four estimators. The aug-pf
and mpf-mn methods produce more erratic estimates than the mpf-pn and the
aug-kf. The peak value of the estimation error of mpf-pn is smaller than that
of aug-kf.
For the second round the pressure of the rear left tire was released to 1.5 bar.
Comparing Figure 6a with Figure 6b, it is visible that the pressure reduction
leads to a smaller µ3 than µ4 value. The behavior of the algorithms is similar
to the balanced case. The differences among the methods can be observed better
in Figure 7, where the difference between the left and the right tire radii
errors, µ3 − µ4, is shown. Here again, the aug-pf and mpf-mn methods produce
more erratic estimates. The mpf-pn and the aug-kf produce smoother estimates,
and the
mpf-pn results are better than aug-kf. All four methods reach a value of a rela-

[Figure: the driven trajectory in the x–y plane [m], overlaid on an aerial photo.]

Figure 3: The red line is gps position measurements and the black-white
dashed line is the estimated driven trajectory. The experiment starts and
ends at a roundabout in the upper right corner. (©Lantmäteriet Medgivande
I2011/1405, reprinted with permission)

[Figure: panels (a) Left Wheel and (b) Right Wheel; each shows µ3 resp. µ4 [m]
(top) and Σ3 resp. Σ4 on a logarithmic scale (bottom) versus time [s], for
mpf-pn, aug-pf, aug-kf and mpf-mn.]

Figure 4: Tire radius error of the left and right rear wheels, in Figures (a)
and (b), respectively. The upper plot in each subfigure shows the mean values µ
and the lower plot the covariance estimates Σ. These plots show the situation
with balanced wheel pressure, and the radius errors in Figures (a) and (b) are
very similar. The black solid line is the mpf-pn estimate, the gray solid line
is the aug-pf estimate, the black dashed line is the aug-kf estimate and the
gray dashed line is the mpf-mn estimate.

tive difference of approximately 1.5 mm.

In order to analyze the methods numerically, an artificial data set, which
simulates a pressure drop, has been created. The data set with balanced wheels
is used as a basis for that purpose, and an artificial wheel speed is computed
according to

    ω̄4 = ω4 r / (r + δ̄4),                                                        (39)

where the artificial tire radius error is given by

    δ̄4 = { 0       k < K2
            δmax    k ≥ K2.                                                        (40)
The virtual measurements (4) are recalculated based on the artificial wheel
speed as follows:

    v̄virt = (ω3 r + ω̄4 r) / 2                                                    (41a)
    ψ̇̄virt = (ω̄4 r − ω3 r) / lt.                                                 (41b)
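The construction (39)–(41) of the artificial data set can be sketched as follows; the step location k2 and amplitude delta_max are passed in as parameters, and the variable names are illustrative.

```python
import numpy as np

def artificial_pressure_drop(omega3, omega4, r, l_t, delta_max, k2):
    """Recompute the wheel speed (39) and the virtual measurements (41)
    for a simulated radius error (40) switched on at sample k2."""
    k = np.arange(omega4.size)
    delta4_bar = np.where(k < k2, 0.0, delta_max)         # (40)
    omega4_bar = omega4 * r / (r + delta4_bar)            # (39)
    v_virt = (omega3 * r + omega4_bar * r) / 2.0          # (41a)
    psi_dot_virt = (omega4_bar * r - omega3 * r) / l_t    # (41b)
    return omega4_bar, v_virt, psi_dot_virt
```

With δmax < 0 (a pressure drop shrinks the radius), the artificial wheel speed increases after the switch, which shows up as a bias in the virtual yaw rate.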
For the example presented here, the values δmax = −2.5 · 10−3 m and K2 = 5808
were chosen. The simulated tire radii difference versus time, and its
estimates, are plotted in Figure 8. Here again the mpf-pn produces the minimum
root mean square error (rmse) among all methods. Further, we compare the
average rmse of the algorithms over 100 Monte Carlo (mc) runs and present the
change in the rmse errors while varying the number of particles.

[Figure: two stacked plots of µ3 − µ4 [m] versus time [s]; top panel: mpf-pn
and mpf-mn, bottom panel: aug-pf and aug-kf.]

Figure 5: The figure shows the tire radius error difference between the left
and the right rear wheels, for the situation with balanced wheel pressures.
Since the pressure is equal in the left and the right wheel, the radius
difference between them is zero, as expected.
The next results are based on 100 mc runs for all methods (except for aug-kf,
which is deterministic). The effect of changing the number of particles is
examined in Figure 9a. The mc runs are repeated for 100, 500, 1000, and 5000
particles and the average rmse of the difference between µ3 and µ4 is compared.
This is performed under the assumption that µ3 − µ4 is equal to the
artificially added error according to (40), which serves as the true value. In
Figure 9a the average rmse is plotted against the number of particles. The
proposed mpf-pn estimate has the smallest rmse; for instance, its rmse for 100
particles corresponds to the rmse of the aug-pf using 500 particles. The
aug-kf approach produces the worst rmse here.
Some of the methods are sensitive to divergence when the number of particles
becomes too small. For this reason the divergence rate is analyzed for the
three pf based methods and plotted with respect to the number of particles in
Figure 9b. The algorithms are run on the complete 18 min data set, and if the
simulation diverges during one mc run it is counted and compared to the total
number of 100 mc runs. The figure shows that the proposed mpf-pn approach and
the aug-kf approach always converge, whereas the aug-pf and the mpf-mn
approaches converge in all 100 mc runs only when using at least 1000 particles.
Finally, the average runtime per sample of a single mc run is given in Table 2.
Here it is obvious that the proposed mpf-pn method and the aug-pf

[Figure: panels (a) Left Wheel and (b) Right Wheel; each shows µ3 resp. µ4 [m]
(top) and Σ3 resp. Σ4 on a logarithmic scale (bottom) versus time [s], for
mpf-pn, aug-pf, aug-kf and mpf-mn.]

Figure 6: Tire radius error of the left and right rear wheels, in Figures (a)
and (b), respectively. The upper plot in each subfigure shows the mean values µ
and the lower plot the covariance estimates Σ. These plots show the situation
with unbalanced wheel pressure, and as expected the radius error of the left
wheel in Figure (a) is larger than the error of the right wheel in (b). The
black solid line is the mpf-pn estimate, the gray solid line is the aug-pf
estimate, the black dashed line the aug-kf estimate and the gray dashed line
the mpf-mn estimate.

Table 2: Computation Time [ms]

                  Number of particles
    Method     100     500     1000    5000
    mpf-pn     0.39    0.70    1.1     5.9
    aug-pf     0.41    0.77    1.2     7.4
    aug-kf     0.35 (independent of the number of particles)
    mpf-mn     6.2     29      56      290

estimate are in the same order of complexity, since the computation times are
similar. The aug-kf approach is in the same order of magnitude as the aug-
pf with 100 particles, but as seen in Figure 9a, with a much worse rmse. The
mpf-mn is by far the slowest method.

6 Conclusion
In this study, we address the problem of jointly estimating unknown tire radii
and the trajectory of a four-wheeled vehicle based on gps and wheel angular
velocity measurements. The problem is defined in a Bayesian framework, and an
efficient method that utilizes marginalized particle filters is proposed in
order to
accomplish the difficult task of joint parameter and state estimation. The algo-

[Figure: two stacked plots of µ3 − µ4 [m] versus time [s]; top: mpf-pn and
mpf-mn, bottom: aug-pf and aug-kf.]

Figure 7: The figure shows the tire radius error difference between the left
and the right rear wheels, for the situation with unbalanced wheel pressures.
The pressure of the left wheel is reduced by 50%, which leads to a radius
reduction of about 1.5 mm.

rithm is tested on real data experiments. The results show that it is possible to
estimate relative tire radius difference within sub-millimeter accuracy.

[Figure: two stacked plots of µ3 − µ4 [m] versus time [s], each including the
true value; top: mpf-pn and mpf-mn, bottom: aug-pf and aug-kf.]

Figure 8: The figure shows the tire radius error difference between the left
and the right rear wheels, for the situation with artificial wheel radii error of
−2.5 mm. The example is aimed at simulating a slow puncture. The resulting
estimates are close to the expected values.

[Figure: (a) rmse and (b) divergence rate [%], both versus the number of
particles, for mpf-pn, aug-pf, aug-kf and mpf-mn.]

Figure 9: Figure (a) shows the rmse versus the number of particles. Note that
the proposed mpf-pn approach has the same rmse value for 100 particles as the
aug-pf has for 500 particles. Figure (b) shows the divergence rate versus the
number of particles. The proposed mpf-pn approach did not diverge in any of the
mc runs, whereas the aug-pf converges only if more than 1000 particles are
used.

Bibliography
C. R. Carlson and J. C. Gerdes. Consistent nonlinear estimation of longitudinal
tire stiffness and effective radius. IEEE Transactions on Control Systems Tech-
nology, 13(6):1010–1020, November 2005.
A. Censi, L. Marchionni, and G. Oriolo. Simultaneous maximum-likelihood cali-
bration of odometry and sensor parameters. In Proceedings of the IEEE Inter-
national Conference on Robotics and Automation, pages 2098–2103, Pasadena,
CA, USA, May 2008.
V. E. Ersanilli, P. J. Reeve, K. J. Burnham, and P. J. King. A continuous-time
model-based tyre fault detection algorithm utilising a Kalman state estimator
approach. In Proceedings of the 7th Workshop on Advanced Control and Di-
agnosis, Zielona Góra, Poland, November 2009.
F. Gustafsson. Statistical Sensor Fusion. Studentlitteratur, Lund, Sweden, 2010a.
F. Gustafsson. Rotational speed sensors: Limitations, pre-processing and auto-
motive applications. IEEE Instrumentation & Measurement Magazine, 13(2):
16–23, March 2010b.
F. Gustafsson, S. Ahlqvist, U. Forssell, and N. Persson. Sensor fusion for accu-
rate computation of yaw rate and absolute velocity. In Proceedings of the SAE
World Congress, SAE paper 2001-01-1064, Detroit, MI, USA, April 2001.
S. Julier and H. Durrant-Whyte. Process models for the high-speed navigation of
road vehicles. In Proceedings of the IEEE International Conference on Robotics
and Automation, volume 1, pages 101–105, May 1995.
M. Kárný. Optimized Bayesian Dynamic Advising: Theory and Algorithms.
Springer, London, 2006. ISBN 978-1-84628-254-6.
J. Liu and M. West. Combined parameter and state estimation in simulation-
based filtering. In A. Doucet, N. De Freitas, and N. Gordon, editors, Sequential
Monte Carlo Methods in Practice. Springer, 2001.
C. Lundquist, E. Özkan, and F. Gustafsson. Tire radii estimation using a marginal-
ized particle filter. IEEE Transactions on Intelligent Transportation Systems,
2011. Submitted.
S. L. Miller, B. Youngberg, A. Millie, P. Schweizer, and J. C. Gerdes. Calculating
longitudinal wheel slip and tire parameters using gps velocity. In IEEE Amer-
ican Control Conference, volume 3, pages 1800–1805, June 2001.
N.K. M’sirdi, A. Rabhi, L. Fridman, J. Davila, and Y. Delanne. Second order slid-
ing mode observer for estimation of velocities, wheel sleep, radius and stiffness.
In IEEE American Control Conference, pages 3316–3321, June 2006.
E. Özkan, C. Lundquist, and F. Gustafsson. A Bayesian approach to jointly esti-
mate tire radii and vehicle trajectory. In Proceedings of the IEEE Conference
on Intelligent Transportation Systems, Washington DC, USA, October 2011.

S. Patwardhan, H.-S. Tan, and M. Tomizuka. Experimental results of a tire-burst
controller for ahs. Control Engineering Practice, 5(11):1615–1622, 1997.
N. Persson, S. Ahlqvist, U. Forssell, and F. Gustafsson. Low tire pressure warning
system using sensor fusion. In Proceedings of the Automotive and Transporta-
tion Technology Congress, SAE paper 2001-01-3337, Barcelona, Spain, October
2001.
V. Peterka. Bayesian system identification. Automatica, 17(1):41–53, January
1981.
S. Saha, E. Özkan, F. Gustafsson, and V. Smidl. Marginalized particle filters for
Bayesian estimation of Gaussian noise. In Proceedings of the International
Conference on Information Fusion, Edinburgh, Scotland, July 2010.
H. Shraim, A. Rabhi, M. Ouladsine, N.K. M’Sirdi, and L. Fridman. Estimation and
analysis of the tire pressure effects on the comportment of the vehicle center
of gravity. In International Workshop on Variable Structure Systems, pages
268–273, June 2006.
S. Velupillai and L. Guvenc. Tire pressure monitoring. IEEE Control Systems
Magazine, 27(6):22–25, December 2007.
Index

active contours, 168
adaptive cruise control, 99
averaging curvature model, 108
Bayes filter equations, 35
Bayes' theorem, 32
belief-mass, 59
bicycle model, 22, 141
bin-occupancy density, 123, 198
can, 5, 26
car to car communication, 100
clutter, 47
confirmed track, 50
coordinated turn model, 25
  exact discretization, 27
data association, 47, 177
deleted track, 50
emergency lane assist, 99
errors in output
  errors in variables, 173
  measurement model, 172
errors in variables, 168
  measurement model, 172
extended Kalman filter, 38
  algorithm, 38
extended target
  definition, 52, 227
  overview, 51, 167
extended target tracking, 232, 267
exteroceptive sensors, 5
feature based map, 103
finite set statistics, 58
gating, 47, 177
Gaussian mixture phd filter, 70, 201, 213
  algorithm, 73
  assumptions, 230
  extended target, 56, 229, 270
  merging, 74, 206
  spawn process, 213
global nearest neighbor, 49
Harris corner, 104
hidden Markov model, 30
Hough transform, 111
inertial measurement unit, 8
intensity based mapping, 198
intensity filter, 70
inverse depth parameterization, 104
K-means, 203
Kalman filter, 36
  algorithm, 37
  phd filter, 72
landmark, 103
lane keeping assistance, 99
lane tracking, 110
least squares, 33
  algorithm, 34
likelihood function, 30
likelihood ratio test, 56, 179
location based map, 101
M/N logic, 51
mapping, 100
marginalized particle filter, 292
Markov property, 29
maximum a posteriori, 35
maximum likelihood, 35, 237
mean absolute error, 183
measurement generating point, 53, 56, 262
measurement model, 8
Monte Carlo, 41, 174, 300
motion model, 8
nearest neighbor, 49
normalized pinhole projection, 105
observation space, 46
occupancy grid map, 118, 215
optical lane recognition, 152
optimal subpattern assignment metric, 243
particle filter, 41, 288
  algorithm, 43
  noise marginalization, 290
  Rao-Blackwellized, 270
partitioning, 233
  distance partitioning, 233
  K-means, 236
  K-means++, 236
  sub-partitioning, 237
phd
  cphd, 68
  definition, 63
  filter, 64
  Gaussian mixture, 70
  smoother, 70
pixel coordinates, 106
probability density function, 29, 138
probability hypothesis density filter, 64
  Gaussian mixture, 70
  sequential Monte Carlo, 70
process model, 8
proprioceptive sensors, 5
random finite set, 57
  approximation, 63
  definition, 59
  extended target, 227, 263
regression clustering, 203
rmse, 152, 174, 299
road map, 107
  mapping the edges, 114
  mapping the lanes, 110
road model, 107
sensor model, 8
sequential Monte Carlo
  particle filter, 41
  phd filter, 70
sequential probability ratio test, 51
simple clothoid model, 108
single track model, 22, 141
  linearized, 28
slam, 5
  phd filter, 262
spline, 274
state space, 46
tentative track, 50
track management, 50
transition density, 29
unscented Kalman filter, 39
  algorithm, 40
  errors in variables, 174
  gm-phd filter, 74
virtual measurement, 285
weighted least squares, 33
  algorithm, 34
  errors in variables, 173
wheel speed sensor, 295
PhD Dissertations
Division of Automatic Control
Linköping University

M. Millnert: Identification and control of systems subject to abrupt changes. Thesis
No. 82, 1982. ISBN 91-7372-542-0.
A. J. M. van Overbeek: On-line structure selection for the identification of multivariable
systems. Thesis No. 86, 1982. ISBN 91-7372-586-2.
B. Bengtsson: On some control problems for queues. Thesis No. 87, 1982. ISBN 91-7372-
593-5.
S. Ljung: Fast algorithms for integral equations and least squares identification problems.
Thesis No. 93, 1983. ISBN 91-7372-641-9.
H. Jonson: A Newton method for solving non-linear optimal control problems with gen-
eral constraints. Thesis No. 104, 1983. ISBN 91-7372-718-0.
E. Trulsson: Adaptive control based on explicit criterion minimization. Thesis No. 106,
1983. ISBN 91-7372-728-8.
K. Nordström: Uncertainty, robustness and sensitivity reduction in the design of single
input control systems. Thesis No. 162, 1987. ISBN 91-7870-170-8.
B. Wahlberg: On the identification and approximation of linear systems. Thesis No. 163,
1987. ISBN 91-7870-175-9.
S. Gunnarsson: Frequency domain aspects of modeling and control in adaptive systems.
Thesis No. 194, 1988. ISBN 91-7870-380-8.
A. Isaksson: On system identification in one and two dimensions with signal processing
applications. Thesis No. 196, 1988. ISBN 91-7870-383-2.
M. Viberg: Subspace fitting concepts in sensor array processing. Thesis No. 217, 1989.
ISBN 91-7870-529-0.
K. Forsman: Constructive commutative algebra in nonlinear control theory. Thesis
No. 261, 1991. ISBN 91-7870-827-3.
F. Gustafsson: Estimation of discrete parameters in linear systems. Thesis No. 271, 1992.
ISBN 91-7870-876-1.
P. Nagy: Tools for knowledge-based signal processing with applications to system identi-
fication. Thesis No. 280, 1992. ISBN 91-7870-962-8.
T. Svensson: Mathematical tools and software for analysis and design of nonlinear control
systems. Thesis No. 285, 1992. ISBN 91-7870-989-X.
S. Andersson: On dimension reduction in sensor array signal processing. Thesis No. 290,
1992. ISBN 91-7871-015-4.
H. Hjalmarsson: Aspects on incomplete modeling in system identification. Thesis No. 298,
1993. ISBN 91-7871-070-7.
I. Klein: Automatic synthesis of sequential control schemes. Thesis No. 305, 1993.
ISBN 91-7871-090-1.
J.-E. Strömberg: A mode switching modelling philosophy. Thesis No. 353, 1994. ISBN 91-
7871-430-3.
K. Wang Chen: Transformation and symbolic calculations in filtering and control. Thesis
No. 361, 1994. ISBN 91-7871-467-2.
T. McKelvey: Identification of state-space models from time and frequency data. Thesis
No. 380, 1995. ISBN 91-7871-531-8.
J. Sjöberg: Non-linear system identification with neural networks. Thesis No. 381, 1995.
ISBN 91-7871-534-2.
R. Germundsson: Symbolic systems – theory, computation and applications. Thesis
No. 389, 1995. ISBN 91-7871-578-4.
P. Pucar: Modeling and segmentation using multiple models. Thesis No. 405, 1995.
ISBN 91-7871-627-6.
H. Fortell: Algebraic approaches to normal forms and zero dynamics. Thesis No. 407,
1995. ISBN 91-7871-629-2.
A. Helmersson: Methods for robust gain scheduling. Thesis No. 406, 1995. ISBN 91-7871-
628-4.
P. Lindskog: Methods, algorithms and tools for system identification based on prior
knowledge. Thesis No. 436, 1996. ISBN 91-7871-424-8.
J. Gunnarsson: Symbolic methods and tools for discrete event dynamic systems. Thesis
No. 477, 1997. ISBN 91-7871-917-8.
M. Jirstrand: Constructive methods for inequality constraints in control. Thesis No. 527,
1998. ISBN 91-7219-187-2.
U. Forssell: Closed-loop identification: Methods, theory, and applications. Thesis No. 566,
1999. ISBN 91-7219-432-4.
A. Stenman: Model on demand: Algorithms, analysis and applications. Thesis No. 571,
1999. ISBN 91-7219-450-2.
N. Bergman: Recursive Bayesian estimation: Navigation and tracking applications. Thesis
No. 579, 1999. ISBN 91-7219-473-1.
K. Edström: Switched bond graphs: Simulation and analysis. Thesis No. 586, 1999.
ISBN 91-7219-493-6.
M. Larsson: Behavioral and structural model based approaches to discrete diagnosis. The-
sis No. 608, 1999. ISBN 91-7219-615-5.
F. Gunnarsson: Power control in cellular radio systems: Analysis, design and estimation.
Thesis No. 623, 2000. ISBN 91-7219-689-0.
V. Einarsson: Model checking methods for mode switching systems. Thesis No. 652, 2000.
ISBN 91-7219-836-2.
M. Norrlöf: Iterative learning control: Analysis, design, and experiments. Thesis No. 653,
2000. ISBN 91-7219-837-0.
F. Tjärnström: Variance expressions and model reduction in system identification. Thesis
No. 730, 2002. ISBN 91-7373-253-2.
J. Löfberg: Minimax approaches to robust model predictive control. Thesis No. 812, 2003.
ISBN 91-7373-622-8.
J. Roll: Local and piecewise affine approaches to system identification. Thesis No. 802,
2003. ISBN 91-7373-608-2.
J. Elbornsson: Analysis, estimation and compensation of mismatch effects in A/D convert-
ers. Thesis No. 811, 2003. ISBN 91-7373-621-X.
O. Härkegård: Backstepping and control allocation with applications to flight control.
Thesis No. 820, 2003. ISBN 91-7373-647-3.
R. Wallin: Optimization algorithms for system analysis and identification. Thesis No. 919,
2004. ISBN 91-85297-19-4.
D. Lindgren: Projection methods for classification and identification. Thesis No. 915,
2005. ISBN 91-85297-06-2.
R. Karlsson: Particle Filtering for Positioning and Tracking Applications. Thesis No. 924,
2005. ISBN 91-85297-34-8.
J. Jansson: Collision Avoidance Theory with Applications to Automotive Collision Mitiga-
tion. Thesis No. 950, 2005. ISBN 91-85299-45-6.
E. Geijer Lundin: Uplink Load in CDMA Cellular Radio Systems. Thesis No. 977, 2005.
ISBN 91-85457-49-3.
M. Enqvist: Linear Models of Nonlinear Systems. Thesis No. 985, 2005. ISBN 91-85457-
64-7.
T. B. Schön: Estimation of Nonlinear Dynamic Systems — Theory and Applications. The-
sis No. 998, 2006. ISBN 91-85497-03-7.
I. Lind: Regressor and Structure Selection — Uses of ANOVA in System Identification.
Thesis No. 1012, 2006. ISBN 91-85523-98-4.
J. Gillberg: Frequency Domain Identification of Continuous-Time Systems: Reconstruc-
tion and Robustness. Thesis No. 1031, 2006. ISBN 91-85523-34-8.
M. Gerdin: Identification and Estimation for Models Described by Differential-Algebraic
Equations. Thesis No. 1046, 2006. ISBN 91-85643-87-4.
C. Grönwall: Ground Object Recognition using Laser Radar Data – Geometric Fitting,
Performance Analysis, and Applications. Thesis No. 1055, 2006. ISBN 91-85643-53-X.
A. Eidehall: Tracking and threat assessment for automotive collision avoidance. Thesis
No. 1066, 2007. ISBN 91-85643-10-6.
F. Eng: Non-Uniform Sampling in Statistical Signal Processing. Thesis No. 1082, 2007.
ISBN 978-91-85715-49-7.
E. Wernholt: Multivariable Frequency-Domain Identification of Industrial Robots. Thesis
No. 1138, 2007. ISBN 978-91-85895-72-4.
D. Axehill: Integer Quadratic Programming for Control and Communication. Thesis
No. 1158, 2008. ISBN 978-91-85523-03-0.
G. Hendeby: Performance and Implementation Aspects of Nonlinear Filtering. Thesis
No. 1161, 2008. ISBN 978-91-7393-979-9.
J. Sjöberg: Optimal Control and Model Reduction of Nonlinear DAE Models. Thesis
No. 1166, 2008. ISBN 978-91-7393-964-5.
D. Törnqvist: Estimation and Detection with Applications to Navigation. Thesis No. 1216,
2008. ISBN 978-91-7393-785-6.
P-J. Nordlund: Efficient Estimation and Detection Methods for Airborne Applications.
Thesis No. 1231, 2008. ISBN 978-91-7393-720-7.
H. Tidefelt: Differential-algebraic equations and matrix-valued singular perturbation.
Thesis No. 1292, 2009. ISBN 978-91-7393-479-4.
H. Ohlsson: Regularization for Sparseness and Smoothness — Applications in System
Identification and Signal Processing. Thesis No. 1351, 2010. ISBN 978-91-7393-287-5.
S. Moberg: Modeling and Control of Flexible Manipulators. Thesis No. 1349, 2010.
ISBN 978-91-7393-289-9.
J. Wallén: Estimation-based iterative learning control. Thesis No. 1358, 2011. ISBN 978-
91-7393-255-4.
J. Hol: Sensor Fusion and Calibration of Inertial Sensors, Vision, Ultra-Wideband and GPS.
Thesis No. 1368, 2011. ISBN 978-91-7393-197-7.
D. Ankelhed: On the Design of Low Order H-infinity Controllers. Thesis No. 1371, 2011.
ISBN 978-91-7393-157-1.