

IEEE 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, Hawaii, USA, November 2018

The highD Dataset: A Drone Dataset of Naturalistic Vehicle Trajectories on German Highways for Validation of Highly Automated Driving Systems

Robert Krajewski, Julian Bock, Laurent Kloeker and Lutz Eckstein

The authors are with the Automated Driving Department, Institute for Automotive Engineering, RWTH Aachen University, Aachen, Germany. (E-mail: {krajewski, bock, kloeker, eckstein}@ika.rwth-aachen.de)

© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Figure 1. Example of a recorded highway including bounding boxes and labels of detected vehicles. The color of the bounding boxes indicates the class of
the detected object (car: yellow, truck: green). Every vehicle is assigned a unique id for tracking and its speed is estimated over time.

Abstract— Scenario-based testing for the safety validation of highly automated vehicles is a promising approach that is being examined in research and industry. This approach heavily relies on data from real-world scenarios to derive the necessary scenario information for testing. Measurement data should be collected with reasonable effort, contain naturalistic behavior of road users and include all data relevant for a description of the identified scenarios in sufficient quality. However, current measurement methods fail to meet at least one of these requirements. Thus, we propose a novel method to measure data from an aerial perspective for scenario-based validation that fulfills the mentioned requirements. Furthermore, we provide a large-scale naturalistic vehicle trajectory dataset from German highways called highD. We evaluate the data in terms of quantity, variety and contained scenarios. Our dataset consists of 16.5 hours of measurements from six locations with 110 000 vehicles, a total driven distance of 45 000 km and 5600 recorded complete lane changes. The highD dataset is available online at: http://www.highD-dataset.com

I. INTRODUCTION

A technical proof of concept for highly automated driving (HAD) has already been shown in many demonstrations and test drives. However, existing methods and tools for the safety validation process are not suitable for the complexity of these systems and would be inefficient with regard to costs and time resources [1]. Projects for safety validation and assurance of highly automated vehicles such as PEGASUS [2] and ENABLE-S3 [3] aim to develop a suitable process based on scenarios. A scenario-based approach is also used when conducting impact assessment of automated driving systems [4]. These approaches heavily rely on measurement data from real-world traffic for extracting, describing and analyzing scenarios. In order to cope with the complexity and the level of detail necessary to describe scenarios, five scenario description layers, which are shown in Fig. 2, were defined to structure the description of scenarios on German highways [5] within the PEGASUS project.

Common data sources for safety validation are driving tests, naturalistic driving studies (NDS), field operational tests (FOT) and pilot studies [1]. Test vehicles or series-production vehicles equipped with sensors are used to measure the vehicle's environment and record the CAN-bus data. A newer approach is the use of infrastructure sensors installed at dedicated roadside masts [6] or at street lights that permanently monitor a certain road segment. However, those measurement methods come with several weaknesses. The necessary quality of the dynamic scenario description and the naturalistic behavior of other road users are not always given because of the sensors' physical limitations and the visibility of the sensors.

Thus, we propose to use camera-equipped drones to measure every vehicle's position and movements from an aerial perspective for scenario-based validation. Drones with high-resolution cameras have the advantage of capturing the traffic from a so-called bird's eye view with high longitudinal and lateral accuracy. From this perspective, information about object heights is lost, but vehicles cannot be occluded by other vehicles. However, an object's height has only limited relevance for safety validation and can be estimated from the object type. At altitudes from 100 m up to several hundred meters, the drone is hardly visible from a passing vehicle, which results in completely uninfluenced, naturalistic driving behavior. In our case, a drone was hovering next to German highways and the recordings cover a road segment of about 420 m, as displayed in Fig. 3. We use the common term drone throughout the paper for Unmanned Aerial Vehicle, which in our case is a multicopter.

Within this paper, we show the feasibility of this approach and analyze the used methods. Furthermore, we provide a large-scale dataset of naturalistic vehicle trajectories on German highways called highD, which stands for highway drone dataset.

We compare the highD dataset with other datasets that are used in research. Though the dataset is originally intended for safety validation and impact assessment, we also want to foster research on traffic simulation models, traffic analysis, driver models, road user prediction models and further topics which rely on naturalistic traffic trajectory data.

Figure 2. 5-layer model for the description of scenarios as in [5]:
• Street level (L1): geometry and topology; condition, boundaries
• Traffic infrastructure (L2): construction barriers; signs, traffic guidance
• Temporal modifications to L1 and L2 (L3): geometry and topology overlay; time dependent > 1 day
• Movable objects (L4): dynamic, movable; interactions, maneuvers
• Environment conditions (L5): influence on properties of other levels

II. PREVIOUS WORK

Within this section, we first analyze previous work on the use of drones as sensors for traffic monitoring. Subsequently, we provide an overview of existing datasets for automated driving with a focus on their use for safety validation. Because of its closeness to highD, we analyze one of the datasets in detail.

A. Drones for Recording Road Users

In 2005, the use of video data from camera-equipped drones for traffic monitoring was already examined [7, 8]. However, most of the work had the goal to extract macroscopic data such as traffic density, traffic flow and traffic speed [9–12]. As the positions of road users were not extracted with decimeter-accuracy, the resulting trajectories are not suitable for the safety validation of highly automated vehicles. With the Stanford Drone Dataset [13], a first public dataset with trajectories of multiple road users was created from drone video data. The dataset is intended for the development of pedestrian behavior and interaction models. Recordings were made at eight locations on the Stanford campus and do not contain public roads. All recordings sum up to a total duration of about 17 hours. Although the dataset contains cars and buses, they account for less than five percent of all road users at seven of the eight locations. At one location, cars account for about 30 percent of the road users, but most of them are parked. Thus, the dataset is not appropriate for safety validation.

To the best of our knowledge, the applicability of aerial video data for safety validation of highly automated vehicles has not been shown yet. Furthermore, no public trajectory dataset of vehicles on highways created from drone video data currently exists.

Figure 3. The recording setup includes a drone that hovers next to German highways and captures traffic from a bird's eye view on a road section with a length of about 420 m.

B. Datasets for Automated Driving

There have been several projects dealing with the collection of driving data recorded with onboard sensors within the last ten years [14–17]. In Europe, the project EuroFOT, funded by the European Commission, was one of the first large-scale FOTs and ended in 2012. Data covering more than 35 million driven kilometers were collected by around 1200 drivers [17]. The data contain recordings of the onboard CAN-bus, raw video, GPS position, front-facing radar and camera. The data are still used in research on impact assessment and safety validation [4]. In the United States, a naturalistic driving study was performed within the second Strategic Highway Research Program (SHRP 2). 3150 volunteers used their vehicles to record 79.7 million kilometers between 2010 and 2012. The recordings contain data of front-facing radar, raw video, vehicle bus data and video of the driver [15]. However, both datasets are not freely available to the public.

In the last few years, several public datasets such as the Next Generation SIMulation (NGSIM) dataset [18, 19], KITTI [20, 21] and Cityscapes [22] were published to foster research on automated driving. Although NGSIM was not originally intended for automated driving but for traffic simulation [18, 19], the dataset is now used for automated driving research [23]. As the KITTI and Cityscapes datasets contain single annotated images from vehicle onboard cameras, these datasets are mainly utilized for the development of computer vision algorithms such as object detection and scene understanding.

Besides the image sets, the KITTI dataset also includes data from laser scanners and object tracks. However, Cityscapes and KITTI mainly focus on urban traffic scenes, while KITTI also contains a few highway traffic scenes. Thus, both datasets have very little relevance for highway scenarios. NGSIM, in contrast, focuses on vehicle trajectories on highways and urban roads captured from tall buildings, resulting in a bird's eye view. NGSIM is the dataset most similar to highD. Thus, we analyze NGSIM in greater detail and compare highD to NGSIM.

C. NGSIM Dataset

NGSIM is today's largest dataset of naturalistic vehicle trajectories and is widely used for research on traffic flow and driver models [24].
The U.S. Department of Transportation Intelligent Transportation Systems Joint Program Office (JPO) collected video data of traffic in a period from 2005 to 2006. The dataset includes four different recording sites: the Interstate 80 (I-80) in Emeryville, CA, the US Highway 101 (US 101) in Los Angeles, CA, the Lankershim Boulevard (LB) in Los Angeles, CA and the Peachtree Street (PS) in Atlanta, GA. While the highways at the recording sites I-80 and US 101 are comparable to the German Autobahn, the recordings at LB and PS contain urban scenes. Therefore, we only consider I-80 and US 101 in the following. At each site, multiple synchronized video cameras were located on top of an adjacent multistory building, recording different overlapping road segments covering between 500 m and 640 m. The recordings have a total duration of 90 minutes. Next to I-80, seven cameras were installed on top of a multistory building at 97 m height, whereas at the US 101 study area, eight cameras were installed on top of an adjacent multistory building with a height of 154 m [25]. Tilted video camera alignments were needed to cover the whole study area.

As shown in previous works [24, 26], raw NGSIM trajectories cannot be used for further analyses. False positive trajectory collisions and physically implausible vehicle speeds and accelerations occur in the dataset. To eliminate erroneous trajectory behavior, [26] refines the longitudinal vehicle movements for a part of the dataset using the trajectories themselves. Apart from this, [24] shows that this method is not sufficient for every case and that vehicles first have to be manually re-extracted from the recordings to get improved longitudinal trajectories.

III. ANALYSIS OF MEASUREMENT METHODS FOR SCENARIO-BASED SAFETY VALIDATION

A. Requirements on Measurement Methods

In order to collect data suitable for use in scenario-based safety validation, an appropriate measurement method must be used. In general, the procedure must make it possible to capture all relevant facets of traffic with sufficient accuracy. Although the specific requirements depend on the desired application, we derived the following five general requirements:

• Naturalistic behavior: The behavior of all road users must be naturalistic and uninfluenced by the measurement. Ideally, every road user is unaware of the measurement and thus uninfluenced in its behavior.

• Static scenario description: Information belonging to the first three layers of the 5-layer scenario description model [5], including e.g. number of lanes, lane width, speed limits and road curvature, must be captured.

• Dynamic scenario description: Information belonging to the fourth layer of the 5-layer scenario description model [5], describing the road users' movements, must be included in the data. Road users must not be left out because of occlusion, but their positions and movements must be measured accurately. Finally, the data should also contain all information regarding the fifth layer of the 5-layer model, which represents environmental conditions.

• Effort effectiveness: The total effort consists of the initial effort for setting up the measurement method and the permanent effort for operation. Effort effectiveness is the ratio of measured scenarios over combined permanent and initial effort.

• Flexibility: Measurements should ideally cover every variance of traffic. Thus, the data should not be limited to a certain road segment but should capture data at every time of the day and under every environmental condition.

B. Comparison of Measurement Methods

In the following, we compare the drone-based approach with existing measurement methods in terms of the five requirements. The comparison is displayed as a radar chart in Fig. 4. As there exist several measurement campaign setups for vehicle onboard measurement, which might change in the future, we make the following assumptions. First, we consider an NDS setup for vehicles with series-production sensors, and we assume that a fused environment model exists only for the front side of the vehicle. Second, for the vehicles equipped with HAD sensors, we consider a pilot study and the availability of a 360-degree environment model based on cameras, laser scanners and radar sensors. For the pilot study, we further assume that the driver is permanently aware of the measurements and that the test vehicle is recognized as such from the outside by other road users, e.g. due to additionally mounted sensors.

Figure 4. Comparison of measurement methods regarding the use for safety validation of highly automated driving. (Radar chart over the five requirements; compared methods: drone, vehicle with series-production sensors, vehicle with HAD sensors, infrastructure sensors.)

Naturalistic behavior is preserved best from the aerial perspective, as no road user is aware of the measurement. For the NDS study, it can be assumed that the surrounding road users are not aware of the measurement and behave uninfluenced. However, the recorded behavior might not be completely naturalistic, since the driver is aware of the recording while driving and the drivers might not truly represent the real driver population. In the pilot study, the behavior of all road users might be influenced, as the test vehicles can typically be recognized as such. The untypical appearance of e.g. externally mounted sensors might influence the behavior of the drivers around the measurement vehicle.
Roadside infrastructure sensors can generate an accurate overview of traffic in the observed area. However, the sensors are perceived by drivers and might be confused with traffic enforcement cameras, leading to atypical driving behavior.

The static scenario description can be derived from digital map data when infrastructure sensors or drones are used, since both methods are used at a limited number of fixed locations. Additionally, static scenario information can be extracted from the aerial perspective. HAD sensors provide highly accurate localization and comprehensive detections of e.g. lane markings for the static scenario description. Finally, an NDS might only include inaccurate ego-localization or only simple information, such as the lane markings of the current lane perceived by sensors.

A high-quality dynamic scenario description can be achieved from the aerial perspective. All vehicles on every lane in both driving directions can be perceived with constant high accuracy. For onboard measurement, the vehicles must be equipped with appropriate sensors for every sensing direction. The data quality of current series-production sensors is typically not sufficient, and their data are typically hard to access. Furthermore, they can only capture the environment to a comparably limited extent, and thus the scenario cannot be fully described. Vehicles with HAD sensors have 360-degree environmental sensing, but measurement ranges are limited and accuracy still decreases with distance. The perception of several sensors must be fused using sensor fusion algorithms. Infrastructure sensors can accurately measure the positions and movements of every object on a certain road segment. However, objects passing by close to the sensors can still occlude other objects.

An NDS has a very high effort effectiveness, as only minor vehicle modifications might be needed and the vehicle is operated as if there was no measurement. Roadside infrastructure sensors are also highly effort effective in operation but require a high initial effort for installation. After the initial effort for flight approval, a drone must be operated by an experienced drone pilot, who must also drive to the desired measurement location. However, a driven distance of more than 1000 km can be recorded by a drone. The pilot study with HAD sensors comes with a high initial effort for setting up the vehicle and selecting the drivers. Typically, the vehicle must be maintained and checked regularly. In the future, measurement data from vehicles with HAD sensors will become more effort effective when generated unobtrusively with regular series-production vehicles.

In comparison, the flexibility of measurement vehicles is the highest, as they can typically drive on any road and under almost any environmental condition. Drones are basically flexible with regard to the measurement location, but legal flight restrictions and environmental conditions limit the measurements to daytime and calm weather. Infrastructure sensors can operate in most environmental conditions and at most locations, but the installation must be approved and coordinated in close cooperation with the road operators. Furthermore, the location cannot easily be changed once installed.

In addition to the advantages and limitations described above, the aerial perspective has drawbacks in terms of online processing. Accurate measurements demand high video resolutions, which require complex algorithms and more processing power than is currently available for online processing. However, online processing is not necessary for our purpose of creating a dataset. Finally, the aerial perspective has advantages in terms of data privacy protection. Neither the trajectory data nor the raw video data taken by drones are critical in terms of privacy and data protection, as no road user can be identified from high altitudes. Data recorded with vehicle onboard sensors are sensitive in terms of privacy and data protection, as private information such as the location or movement patterns can be inferred from the data over time. If cameras are used as infrastructure sensors, they might recognize license plates or even faces. Thus, the raw video data of those cameras are problematic from a data protection point of view.

In summary, the aerial perspective has several strengths in terms of naturalistic driving behavior, static and dynamic scenario description as well as data privacy protection. Weaknesses are the flexibility compared to vehicle onboard measurement and the effort effectiveness compared to onboard measurement with series-production sensors.

IV. THE HIGHD DATASET COLLECTION PIPELINE

A. highD at a Glance

The dataset includes post-processed trajectories of 110 000 vehicles, including cars and trucks, extracted from drone video recordings at German highways around Cologne (see Fig. 5) during 2017 and 2018. At six different locations, 60 recordings were made with an average length of 17 minutes (16.5 h in total), covering a road segment of about 420 m length. Each vehicle is visible for a median duration of 13.6 s. From these recordings, we automatically extracted the vehicles using computer vision algorithms and annotated the infrastructure manually.

The dataset can be downloaded from http://www.highD-dataset.com, while Matlab and Python source code to handle the data, create visualizations and extract maneuvers is provided at https://www.github.com/RobertKrajewski/highD-dataset.

B. Video Recordings

The videos were recorded in 4K (4096x2160) resolution at 25 fps and were saved at the highest possible quality using the consumer quadcopter DJI Phantom 4 Pro Plus. The drone hovered directly next to German highways to minimize perspective distortions and to record as little of the vehicle sidewalls as possible. The size of a single pixel on the road surface is about 10x10 cm. The recordings only took place during sunny and windless weather from 8 AM to 5 PM to maximize the quality of the recordings and to minimize the need for stabilization caused by movements. Although the quadcopter uses flight stabilization and a gimbal-based camera stabilization, translations and rotations could not be avoided completely. Therefore, videos were stabilized using OpenCV by estimating transformations that map the background in each frame to the background in the first frame of the corresponding recording.
Furthermore, the first frame was rotated so that the lane markings are horizontal. Because of these transformations, the actual length of the recorded highway section varies slightly in each frame.
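
For illustration, a minimal sketch of this kind of background-based stabilization with OpenCV is shown below. It is not the authors' exact pipeline: the choice of tracked features, the similarity-transform model and all parameters are assumptions made for this example.

import cv2
import numpy as np

def stabilize_to_first_frame(frames):
    """Warp each frame so its background aligns with the first frame.

    Illustrative sketch: tracks corners between the first frame and
    every later frame, fits a similarity transform, and warps.
    """
    ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    # Corners on the (mostly static) background of the reference frame;
    # a vehicle mask could be used to exclude moving objects.
    ref_pts = cv2.goodFeaturesToTrack(ref, maxCorners=500,
                                      qualityLevel=0.01, minDistance=10)
    stabilized = [frames[0]]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts, status, _ = cv2.calcOpticalFlowPyrLK(ref, gray, ref_pts, None)
        good_ref = ref_pts[status.flatten() == 1]
        good_cur = pts[status.flatten() == 1]
        # Rotation + translation (+ uniform scale) is enough for hover drift.
        m, _ = cv2.estimateAffinePartial2D(good_cur, good_ref,
                                           method=cv2.RANSAC)
        h, w = ref.shape
        stabilized.append(cv2.warpAffine(frame, m, (w, h)))
    return stabilized
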
C. Static and Dynamic Object Annotation

As more than 110 000 vehicles are included in this dataset, manual annotation was not feasible. Thus, an algorithmic approach was chosen, which is based on state-of-the-art computer vision algorithms. We decided to use an adaptation of the U-Net [27], which is a common neural network architecture for semantic segmentation. The network estimates, for every pixel of each frame, whether it belongs to a vehicle or to the background. The resulting segmentation map is used to create bounding boxes by detecting pixel clusters belonging to vehicles. Static objects such as lane markings, traffic signs and speed limits were annotated manually, as the effort is negligible in comparison to the annotation of the vehicles.
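
The paper does not detail how pixel clusters are converted into boxes; connected-component analysis is one common way to do it. A minimal sketch, assuming a binary vehicle mask produced by the network:

import cv2

def boxes_from_segmentation(mask, min_area=50):
    """Extract one axis-aligned bounding box per vehicle pixel cluster.

    mask: uint8 array, 1 where the network predicts "vehicle", 0 elsewhere.
    min_area: clusters smaller than this are treated as noise (placeholder).
    """
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, num):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes
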
D. Track Postprocessing

As the detection runs on each frame independently, a tracking algorithm was necessary to connect detections in consecutive frames to tracks. During this process, detections in two frames were matched by their distances or discarded if no feasible match was found. By doing so, false positive detections could be completely removed. If a vehicle was not detected in a few consecutive frames, e.g. due to occlusion by a traffic sign, the movements were predicted until a new detection matched the vehicle's track.

Additional postprocessing was applied to retrieve smooth positions, speeds and accelerations in both x- and y-direction. Using Rauch-Tung-Striebel (RTS) smoothing [28] and a constant acceleration model, the trajectory of each vehicle was refined taking into account all detections. This reduced the positioning error to pixel-size level.
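
A minimal sketch of RTS smoothing with a constant acceleration model for a single track coordinate (applied independently to x and y) is given below. The noise parameters and the initialization are illustrative placeholders, not the values used for highD.

import numpy as np

def rts_smooth_1d(z, dt=1 / 25, meas_var=0.1**2, accel_var=0.5**2):
    """Forward Kalman filter plus backward RTS pass on position measurements.

    State: [position, speed, acceleration]; constant acceleration model.
    """
    F = np.array([[1, dt, 0.5 * dt**2],
                  [0, 1, dt],
                  [0, 0, 1]])
    H = np.array([[1.0, 0.0, 0.0]])                    # only position is measured
    Q = accel_var * np.diag([dt**4 / 4, dt**2, 1.0])   # crude process noise
    R = np.array([[meas_var]])

    n = len(z)
    x = np.zeros((n, 3))
    P = np.zeros((n, 3, 3))
    xp = np.zeros((n, 3))
    Pp = np.zeros((n, 3, 3))
    xk, Pk = np.array([z[0], 0.0, 0.0]), np.eye(3)

    for k in range(n):                                 # forward filter
        xp[k] = F @ xk if k else xk
        Pp[k] = F @ Pk @ F.T + Q if k else Pk
        K = Pp[k] @ H.T @ np.linalg.inv(H @ Pp[k] @ H.T + R)
        xk = xp[k] + (K @ (z[k] - H @ xp[k])).ravel()
        Pk = (np.eye(3) - K @ H) @ Pp[k]
        x[k], P[k] = xk, Pk

    xs = x.copy()                                      # backward RTS pass
    for k in range(n - 2, -1, -1):
        C = P[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = x[k] + C @ (xs[k + 1] - xp[k + 1])
    return xs  # smoothed position, speed and acceleration per frame
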
E. Maneuver Classification

In addition to the raw vehicle trajectories, we have extracted a set of predefined maneuvers for each vehicle to ease work with the dataset, e.g. for analysis. As, to our knowledge, there is no established list of maneuvers on highways, we use our own list of maneuvers. Each maneuver is detected by a predefined set of rules and thresholds. The maneuvers are not mutually exclusive, except for free driving and vehicle following. We use the definitions of [29] to decide whether a vehicle is influenced by the preceding vehicle or not, using a default driver. Critical maneuvers are detected by the rules defined in [30]. The full list of detected maneuvers is:

• Free Driving (longitudinally uninfluenced driving): driving without being influenced by a preceding vehicle
• Vehicle Following (longitudinally influenced driving): actively following another vehicle
• Critical Maneuver: low Time to Collision (TTC) or Time Headway (THW) to a preceding vehicle
• Lane Change: crossing lane markings and staying on a new lane

Also, the ID, the Distance Headway (DHW), THW and TTC of preceding and following vehicles on the own and adjacent lanes are derived for each vehicle. We provide the scripts for the extraction of these scenarios from the dataset to ease the adjustment of the parameters or the maneuvers.
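
These quantities follow standard definitions, sketched below for a single vehicle pair; the conventions (e.g. bumper-to-bumper gap) and the critical-maneuver thresholds are placeholders for illustration, while the actual rules follow [29] and [30].

import numpy as np

def headway_metrics(x_ego, v_ego, x_lead, v_lead, lead_length):
    """DHW, THW and TTC of a vehicle with respect to its preceding vehicle.

    Positions are front-bumper x-coordinates along the road; this is a
    sketch of common conventions, not the exact highD extraction code.
    """
    dhw = x_lead - x_ego - lead_length              # bumper-to-bumper gap [m]
    thw = dhw / v_ego if v_ego > 0 else np.inf      # time headway [s]
    closing = v_ego - v_lead                        # closing speed [m/s]
    ttc = dhw / closing if closing > 0 else np.inf  # time to collision [s]
    return dhw, thw, ttc

def is_critical(thw, ttc, thw_min=1.0, ttc_min=3.0):
    """Flag a critical maneuver; the thresholds here are arbitrary placeholders."""
    return thw < thw_min or ttc < ttc_min
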
F. Dataset Format

The dataset includes four files for each recording: an aerial shot of the specific highway area and three CSV files containing information about the site, the vehicles and the extracted trajectories. The first file includes the location of the site, driving lanes, traffic signs and speed limits on each lane. A summary of every track, including the vehicle dimensions, vehicle class, driving direction and mean speed, is given by the second file. Detailed information like speeds, accelerations, lane positions and a description of surrounding vehicles in every frame is stored for each track in the last file.
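
A minimal loading sketch is shown below. The file names and column names are assumptions made for illustration; the dataset's documented schema and the provided Matlab/Python tooling are authoritative.

import pandas as pd

recording = "01"
# Three CSV files per recording, as described above; names are assumed.
site = pd.read_csv(f"{recording}_recordingMeta.csv")      # site, lanes, speed limits
tracks_meta = pd.read_csv(f"{recording}_tracksMeta.csv")  # per-vehicle summary
tracks = pd.read_csv(f"{recording}_tracks.csv")           # per-frame trajectories

# Example query, assuming 'class' and 'meanSpeed' columns in the summary file.
trucks = tracks_meta[tracks_meta["class"] == "Truck"]
print(len(trucks), "trucks, mean speed:", trucks["meanSpeed"].mean())
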
Figure 5. Locations of recordings included in highD. Highways near Cologne were selected by typical traffic density and number of lanes.

V. DATASET STATISTICS AND EVALUATION

A. General and Size Comparison of the Datasets

Table I gives a comparison of the amounts of data available in the NGSIM and the highD dataset. While NGSIM provides data for a recording duration of about 90 minutes at two different sites (45 minutes each), highD includes data from more than 16.5 hours of recordings, which were collected at six different sites. In between the recordings, the battery of the drone was exchanged and the drone was landed and started by the pilot. While highD includes typical German highways with two or three driving lanes per direction, the NGSIM recording sites are highways with five or six driving lanes per direction.

Comparing the number of recorded vehicles, highD contains nearly twelve times as many vehicles as NGSIM. While both datasets contain a negligible number of motorcycles (as most recordings for highD took place during winter), the ratio between cars and trucks differs. Only 3 % of the vehicles in NGSIM are trucks, which makes NGSIM strongly focused on cars compared to highD with a truck share of 23 %. While the highD dataset contains a traveled distance that is nine times as large, the total travel time of all vehicles is only almost three times as long, because a lot of dense traffic occurs in the NGSIM dataset.

TABLE I. COMPARISON OF DATA AMOUNTS IN NGSIM AND HIGHD

Attribute                     NGSIM     highD
Recording duration [hours]    1.5       16.5
Lanes (per direction)         5-6       2-3
Recorded distance [m]         500-640   400-420
Vehicles                      9206      110 000
  of which cars               8860      90 000
  of which trucks             278       20 000
Driven distance [km]          5071      45 000
Driven time [h]               174       447

Figure 6. Histogram of a) mean track speeds and b) truck ratio over time in NGSIM and highD.

B. Variety of Included Data

The highD dataset not only includes more data than NGSIM, but the data also have a higher variety. The main reasons for this are the higher number of recordings and the inclusion of different times of the day and more recording sites. As the histogram of mean track speeds in Fig. 6a shows, highD offers a much broader range of mean speeds. The peaks at 80 km/h and 120 km/h are typical speeds for trucks and cars at the recording sites.

Despite an imposed speed limit of 105 km/h at the NGSIM recording sites, tracks with a mean speed above 75 km/h are completely missing. The composition of vehicle types, measured by the truck ratio over time, varies from 0 % to more than 50 % in highD, while it stays below 10 % over time in the NGSIM dataset (see Fig. 6b).

C. Quality Evaluation and Comparison

The initial training set of the semantic segmentation neural network used for the detection consists of about 3000 image patches. The patches include vehicles extracted from ten recordings at different locations with varying light conditions. Augmentation, including flipping, adding Gaussian noise and changing the contrast, increased the size of the dataset to 12 000 vehicles. The detection thresholds are chosen in favor of a low false negative rate to detect most of the vehicles. Corner cases like unique-looking vehicles in the dataset were identified by a strongly changing detected bounding box size in adjacent frames. Afterwards, they were labeled and added to the training set for a second training iteration. Testing on a validation set of images, the trained model detected about 99 % of the vehicles while keeping the false positive detection rate at 2 %. The resulting mean positional errors of the vehicle midpoint in longitudinal and lateral directions are below 3 cm each in comparison to the manually created labels. The tracking algorithm in the next step removes all false positive detections by simple consistency checks and predicts vehicle locations if vehicles were not correctly detected, e.g. due to occlusion.
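
A minimal sketch of the named augmentations; the parameter ranges are illustrative and not the ones used for training.

import numpy as np

def augment(patch, rng=None):
    """Return simple augmented variants of a training image patch.

    Mirrors the augmentations named in the text (flipping, Gaussian
    noise, contrast changes); all parameters are placeholders.
    """
    rng = rng or np.random.default_rng()
    out = [np.fliplr(patch), np.flipud(patch)]
    noisy = patch.astype(float) + rng.normal(0.0, 5.0, patch.shape)
    out.append(np.clip(noisy, 0, 255).astype(patch.dtype))
    gain = rng.uniform(0.8, 1.2)                     # contrast change
    mean = patch.mean()
    contrast = (patch.astype(float) - mean) * gain + mean
    out.append(np.clip(contrast, 0, 255).astype(patch.dtype))
    return out
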
In comparison to that, an algorithm tracking vehicle fronts was used for the creation of NGSIM. In Fig. 7 the resulting quality of the original NGSIM dataset and highD can be compared. The bounding boxes of the NGSIM dataset rarely match the vehicle shapes, and several outliers almost exclusively contain the road surface. This matches the analysis in [24, 26], stating that the original results contain numerous errors. These especially occur at transitions between cameras recording different segments of the sites and are caused by the image stitching. Also, due to the necessary rectification and unavoidable occlusions of the tracked vehicle fronts, the tracks have a varying quality. Consequently, unrealistic speeds and accelerations often occur. Also, parallel moving vehicles are sometimes assigned to the same lane, resulting in false positive collisions instead of overtaking maneuvers due to errors in the lateral positions. Thus, the original dataset should not be used without preprocessing, and [26] released an updated version without infeasible tracks and with smoothed longitudinal trajectories. But [24] states that many errors caused by the tracking system cannot be fixed by filtering alone. Instead, the tracks need to be re-extracted from the non-public original recordings using better algorithms.

Thus, the highD dataset has several advantages due to the use of a single high-resolution camera, a frame rate more than twice as high and a state-of-the-art detection system. In contrast to NGSIM, no further post-processing of the tracks is needed, since multiple post-processing steps remove all false positive detections and smooth the extracted trajectories.

Figure 7. a) NGSIM: Bounding boxes rarely match vehicle shapes. Some bounding boxes almost exclusively contain the road surface (marked red). b) highD: Bounding boxes completely match vehicle shapes.

D. Maneuver Statistics

Finally, we analyze the occurrences of lane changes and critical maneuvers defined in Section IV.
The highD dataset includes more than 11 000 lane changes, of which only 5600 were completely performed in the observed area. This is twice as many as NGSIM includes, while the rate of lane changes per vehicle is lower (0.10 for highD vs. 0.45 for NGSIM). One reason for this is that the lower average traffic density and the smaller number of lanes result in fewer lane changes. Critical maneuvers also occur in the highD dataset. Analyses show that these are mainly caused by tailgating and risky lane change maneuvers.

VI. ANALYSIS OF EXTRACTED LANE CHANGES

As an example of how the highD dataset can be used for a system-level validation of highly automated driving systems, an analysis of extracted lane change maneuvers was performed. The maneuvers and the surrounding vehicles were parametrized, and statistics were calculated. The frequency distribution of the parameters and parameter combinations can be used as an indication of what kind of lane changes occur under what circumstances. These are necessary statistics for the efficient selection and weighting of test scenarios in simulation or on test tracks.

A. Lane Change Trajectory Model

Lane changes are typically modeled using sine curves, splines or polynomials [31]. For simplicity, we use a symmetrical model with two separate polynomials for the longitudinal and lateral movement. While a quadratic polynomial is chosen for the longitudinal movement, a polynomial of degree five is used for the lateral movement, as the lateral movement is more relevant for lane changes. It is assumed that the vehicle changing the lane has neither a lateral nor a longitudinal acceleration nor a lateral speed at the beginning and the end of the lane change. Thus, a lane change has five remaining degrees of freedom, for which we selected intuitive parameters, which are shown in Fig. 8a. These parameters include the lateral distance to the crossed lane marking and the longitudinal speed at the beginning and end of the lane change. The fifth parameter is the duration of the lane change.

After detecting the lane changes by lane crossings as described in Section IV, the lateral movement determines the beginning and the end of the lane change maneuver. To identify the values for the parameters of the model that describe a trajectory best, an optimization problem is formulated and solved.
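
With these boundary conditions, the degree-five lateral polynomial has a well-known closed form, sketched below. The parameterization by total lateral offset d and duration T is a simplification of the paper's parameter set, used here only for illustration.

import numpy as np

def lateral_lane_change(d, T, t):
    """Quintic lateral profile of a lane change.

    With zero lateral speed and acceleration at the start and end, the
    degree-five polynomial collapses to the closed form
    y(t) = d * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T,
    where d is the total lateral offset and T the duration.
    """
    s = np.clip(t / T, 0.0, 1.0)
    return d * (10 * s**3 - 15 * s**4 + 6 * s**5)

# Example: a 3.5 m lane change lasting 5 s, sampled at the 25 Hz frame rate.
t = np.arange(0.0, 5.0, 1 / 25)
y = lateral_lane_change(3.5, 5.0, t)
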

B. Description of Lane Change Surroundings

The surrounding vehicles of a lane-changing vehicle induce and influence the lane change and are thus included in the statistics. We choose the preceding vehicle on the initial lane and the directly preceding and tailing vehicles on the new lane as the most relevant surrounding vehicles during a lane change. The extracted parameters include the minimal DHW, THW, TTC and the gap size (see Fig. 8b). As shown in [4], these parameters allow an analysis of the inducing conditions and an assessment of the criticality of the performed lane change.

C. Statistics

As an example of relevant statistics for the validation of highly automated driving systems, we analyze lane changes from the perspective of the tailing vehicle on the new lane. This vehicle is assumed to be automated and perceives the lane change as a cut-in to which it may have to react. From the 5600 parameterized lane changes in highD, we extracted 850 cut-in scenarios from the right-hand side. For these, we show the distribution of the THW at the moment the lane-changing vehicle enters the lane, and its linear dependency on the ego vehicle speed, in Fig. 9.
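
A sketch of how such cut-in statistics can be computed from extracted scenarios is shown below; the helper names, inputs and speed bins are illustrative assumptions, not the paper's analysis code.

import numpy as np

def thw_at_cut_in(gap_m, tail_speed_ms):
    """THW of the tailing vehicle when the cut-in vehicle enters its lane."""
    return gap_m / tail_speed_ms if tail_speed_ms > 0 else np.inf

def decile_bands(speeds, thws, bins):
    """Median and 10/90 % deciles of THW per speed bin (cf. Fig. 9b).

    speeds, thws: numpy arrays over the extracted cut-in scenarios.
    """
    idx = np.digitize(speeds, bins)
    return {b: np.percentile(thws[idx == b], [10, 50, 90])
            for b in np.unique(idx)}
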
VII. CONCLUSION

We propose a new method for collecting data for the safety validation of highly automated driving systems and present highD, a new dataset of naturalistic vehicle trajectories on German highways. Using drone-captured video data and computer vision algorithms, we automatically extract more than 45 000 km of naturalistic driving behavior from 16.5 h of video recordings. After post-processing the vehicle trajectories, a set of four maneuvers and traffic statistics are extracted from the tracks. We demonstrate that highD is appropriate as a data source for safety validation, as typical maneuvers and inter- and intra-maneuver probabilities can be extracted. We will publish the dataset upon the release of our paper. Our plan is to increase the size of the dataset and enhance it with additional detected maneuvers for use in the safety validation of highly automated driving.

Figure 8. a) Lateral polynomial and extracted parameters of a lane change. b) Overview of extracted parameters of surrounding vehicles during a lane change. The blue vehicle symbolizes the ego vehicle.

Figure 9. a) Distribution of the tailing vehicle's THW to a lane-changing vehicle at the time it enters the tailing vehicle's lane. b) Dependency of this THW on the tailing vehicle's speed. The dashed line shows the median, while the shades indicate deciles.

REFERENCES

[1] A. Pütz, A. Zlocki, J. Küfen, J. Bock, and L. Eckstein, "Database Approach for the Sign-Off Process of Highly Automated Vehicles," in 25th International Technical Conference on the Enhanced Safety of Vehicles (ESV), National Highway Traffic Safety Administration, 2017.
[2] W. Wachenfeld, P. Junietz, H. Winner, P. Themann, and A. Pütz, "Safety Assurance Based on an Objective Identification of Scenarios," San Francisco, CA, USA, 2016.
[3] ENABLE-S3, About the project. [Online] Available: https://www.enable-s3.eu/about-project/. Accessed on: Mar. 21 2018.
[4] C. Roesener et al., "A Comprehensive Evaluation Approach for Highly Automated Driving," in 25th International Technical Conference on the Enhanced Safety of Vehicles (ESV), National Highway Traffic Safety Administration, 2017.
[5] G. Bagschick, T. Menzel, and M. Maurer, "Ontology based Scene Creation for the Development of Automated Vehicles," in 29th IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 2018.
[6] F. Köster, "Automatisiert und vernetzt im Testfeld Niedersachsen," DLR magazin, no. 155, http://elib.dlr.de/117281/1/DLRmagazin-155-DE.pdf, 2017.
[7] A. Puri, "A survey of unmanned aerial vehicles (UAV) for traffic surveillance," University of South Florida, Florida, 2005.
[8] A. Puri, K. P. Valavanis, and M. Kontitsis, "Statistical profile generation for traffic monitoring using real-time UAV based video data," in Mediterranean Conference on Control & Automation, Athens, 2007.
[9] M. A. Khan, W. Ectors, T. Bellemans, D. Janssens, and G. Wets, "UAV-Based Traffic Analysis: A Universal Guiding Framework Based on Literature Survey," Transportation Research Procedia, vol. 22, pp. 541–550, 2017.
[10] K. Kanistras, G. Martins, M. J. Rutherford, and K. P. Valavanis, "Survey of unmanned aerial vehicles (UAVs) for traffic monitoring," in Handbook of Unmanned Aerial Vehicles, Springer, 2015, pp. 2643–2666.
[11] M. A. Khan, W. Ectors, T. Bellemans, D. Janssens, and G. Wets, "Unmanned Aerial Vehicle-Based Traffic Analysis: Methodological Framework for Automated Multivehicle Trajectory Extraction," Transportation Research Record: Journal of the Transportation Research Board, no. 2626, pp. 25–33, 2017.
[12] B. Coifman, M. McCord, R. G. Mishalani, M. Iswalt, and Y. Ji, "Roadway traffic monitoring from an unmanned aerial vehicle," in IEE Proceedings - Intelligent Transport Systems, 2006, pp. 11–20.
[13] A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese, "Learning social etiquette: Human trajectory understanding in crowded scenes," in European Conference on Computer Vision (ECCV), 2016, pp. 549–565.
[14] R. Eenink, Y. Barnard, M. Baumann, X. Augros, and F. Utesch, "UDRIVE: the European naturalistic driving study," in Proceedings of Transport Research Arena, 2014.
[15] K. L. Campbell, "The SHRP 2 naturalistic driving study: Addressing driver performance and behavior in traffic safety," TR News, no. 282, 2012.
[16] V. L. Neale, T. A. Dingus, S. G. Klauer, J. Sudweeks, and M. Goodman, "An overview of the 100-car naturalistic study and findings," National Highway Traffic Safety Administration, 2005.
[17] C. Kessler et al., "SP1 D11.3 Final Report," vol. 3, EuroFOT Deliverable, 2012.
[18] J. Colyar and J. Halkias, NGSIM - US Highway 101 Dataset. [Online] Available: https://www.fhwa.dot.gov/publications/research/operations/07030/07030.pdf. Accessed on: Mar. 21 2018.
[19] J. Halkias and J. Colyar, NGSIM - Interstate 80 Freeway Dataset. [Online] Available: https://www.fhwa.dot.gov/publications/research/operations/06137/06137.pdf. Accessed on: Mar. 21 2018.
[20] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
[21] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 2012, pp. 3354–3361.
[22] M. Cordts et al., "The Cityscapes Dataset for Semantic Urban Scene Understanding," in 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, 2016, pp. 3213–3223.
[23] Y. Rahmati and A. Talebpour, "Towards a collaborative connected, automated driving environment: A game theory based decision framework for unprotected left turn maneuvers," in 28th IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 2017, pp. 1316–1321.
[24] B. Coifman and L. Li, "A critical evaluation of the Next Generation Simulation (NGSIM) vehicle trajectory dataset," Transportation Research Part B: Methodological, vol. 105, pp. 362–377, 2017.
[25] C. Thiemann, M. Treiber, and A. Kesting, "Estimating Acceleration and Lane-Changing Dynamics Based on NGSIM Trajectory Data," Transportation Research Record: Journal of the Transportation Research Board, vol. 2088, pp. 90–101, 2008.
[26] M. Montanino and V. Punzo, "Trajectory data reconstruction and simulation-based validation against macroscopic traffic patterns," Transportation Research Part B: Methodological, vol. 80, pp. 82–106, 2015.
[27] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Lecture Notes in Computer Science, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, Eds., Cham: Springer International Publishing, 2015, pp. 234–241.
[28] H. Rauch, C. Striebel, and F. Tung, "Maximum likelihood estimates of linear dynamic systems," AIAA Journal, vol. 3, no. 8, pp. 1445–1450, 1965.
[29] R. Wiedemann, "Simulation des Straßenverkehrsflusses," Karlsruhe, Germany, 1974.
[30] M. Benmimoun, F. Fahrenkrog, A. Zlocki, and L. Eckstein, "Incident Detection Based on Vehicle CAN-Data within the Large Scale Field Operational Test 'euroFOT'," in 22nd International Technical Conference on the Enhanced Safety of Vehicles (ESV), Washington, DC, USA, 2011.
[31] W. Yao, H. Zhao, F. Davoine, and H. Zha, "Learning lane change trajectories from on-road driving data," in IEEE Intelligent Vehicles Symposium (IV), Alcalá de Henares, Madrid, Spain, 2012, pp. 885–890.
