Article
Scenario Generation for Autonomous Vehicles with
Deep-Learning-Based Heterogeneous Driver Models:
Implementation and Verification
Li Gao 1,2, Rui Zhou 3,4,* and Kai Zhang 1,2,*
Abstract: Virtual testing requires hazardous scenarios to effectively test autonomous vehicles (AVs).
Existing studies have obtained rarer events by sampling methods in a fixed scenario space. In reality,
heterogeneous drivers behave differently when facing the same situation. To generate more realistic
and efficient scenarios, we propose a two-stage heterogeneous driver model to change the number
of dangerous scenarios in the scenario space. We trained the driver model using the HighD dataset,
and generated scenarios through simulation. Simulations were conducted in 20 experimental groups
with heterogeneous driver models and 5 control groups with the original driver model. The results
show that, by adjusting the number and position of aggressive drivers, the percentage of dangerous
scenarios was significantly higher compared to that of models not accounting for driver heterogeneity.
To further verify the effectiveness of our method, we evaluated two driving strategies in car-following and cut-in scenarios. Cumulatively, the results indicate that our approach could accelerate the testing of AVs.
boundaries [11]: finding high-risk scenarios [12], boundary scenarios [13], and collision
scenarios [14].
However, the sampling space of these acceleration approaches is fixed because all environmental vehicles use the same model and cannot be adjusted. Aggressive drivers are more likely to behave in ways that endanger others [15–17]. Therefore, we could treat the environmental vehicles as variables and, by controlling these variables, adjust the proportion of hazardous situations in the scenario space. Ge et al. tried to describe
driving behavior with utility functions [18]. Different drivers’ driving strategies can be
represented by different utility functions. Modifying the driving strategies of surrounding
vehicles (SVs) can result in more challenging events for the AV. A utility function defines the
selection of a particular behavior, such as the likelihood of lane changing or the following
distance [19]. A control model is also needed to calculate the speed of SVs.
Designing a driver model that simulates human behavior for environmental vehicles is necessary to capture the uncertainty of human behavior. Imitation learning is a common
approach, but it requires the manual definition of cost functions and is computationally
expensive [20]. Aksjonov et al. used an artificial neural network (ANN) to predict human
behavior and achieved good performance [21]. We could equate driver modeling to the
trajectory prediction problem. Xing et al. used a Gaussian mixture model to identify the driving style and proposed a personalized joint time-series modeling method for trajectory
prediction [22]. Such methods are deterministic predictions that cannot handle the multiple
possibilities of human behavior. To explore uncertainty about future states, some methods
predicted multiple possible paths. Zhao et al. estimated the endpoint candidates with
high probability on the basis of the environmental context and generated trajectories [23].
Tian et al. proposed a joint learning architecture to incorporate the lane orientation, vehicle
interaction, and driving intention in multi-modal vehicle trajectory forecasting [24]. GAN-based methods incorporate latent variables into network learning and optimize the generated trajectories [25]. However, these stochastic prediction methods do not
represent driver heterogeneity well. For scenario sampling, the prediction target is not the
driver’s optimal trajectory [26,27], but the probability distribution of the next action. Deo
et al. utilized the information of surrounding vehicles to predict multimodal trajectory
distributions [28]. Building on these insights, we propose heterogeneous driver models
with integrated decision and control implemented with deep learning. Heterogeneity is
reflected in separately training the models with different driver data types. Scenarios
are generated through real-time interaction between SVs with our driver model and AVs.
Taking five common initialization scenarios as examples, we changed the model style of
the SVs to obtain more dangerous events. To demonstrate that our method works, we
evaluated the two driving strategies with the generated scenarios. Compared with the
above methods, our method can better accelerate the evaluation. An overview of our work is depicted in Figure 1.
The contributions of this paper are as follows:
(1) Uncertainty in driver behavior is learned using deep-learning methods for the
dynamic generation of stochastic scenarios.
(2) Driver heterogeneity is shown to generate more realistic and complex scenarios and, in some cases, to increase the proportion of critical scenarios.
(3) Autonomous vehicles with scenarios generated by our method were tested, and
the safety and efficiency of two driving strategies were evaluated.
The rest of this paper is structured as follows: Section 2 introduces our scenario
generation method. Section 3 analyzes our experimental results, and Section 4 shows
the conclusions.
Sensors 2023, 23, 4570 3 of 14
where Pθ (Y |mi , I ) denotes the probability of future endpoints. The parameter θ is obtained
via model learning. The coordinate system must always have self-location at time t − 1 as
its origin for the model to be valid at any location. If the original coordinate sequence is
(dt− p , ..., dt−1 ), the coordinate conversion calculation formula is defined as follows:
d′ₜ₋ᵢ = dₜ₋ᵢ − dₜ₋₁  (2)
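As a concrete illustration, the conversion in (2) amounts to subtracting the most recent position from every point of the history; a minimal NumPy sketch (the toy trajectory is illustrative):

```python
import numpy as np

def to_ego_frame(history):
    """Shift a coordinate history (d_{t-p}, ..., d_{t-1}) so that the
    most recent position d_{t-1} becomes the origin, as in (2)."""
    history = np.asarray(history, dtype=float)
    return history - history[-1]

# Toy (x, y) history of three past frames.
history = np.array([[10.0, 2.0], [12.0, 2.1], [14.0, 2.0]])
rel = to_ego_frame(history)  # last row becomes (0, 0)
```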
2.2. Datasets
We trained the model using the public highD dataset [30], which contains UAV data recorded on German highways, including the trajectory information of more than 110,500 vehicles sampled at 25 Hz. We down-sampled the trajectory data to 5 Hz to improve the training speed. Each track's data contain coordinates, speed, acceleration, and the surrounding vehicle IDs.
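Down-sampling from 25 Hz to 5 Hz simply keeps every fifth frame; a minimal sketch (the array layout is illustrative):

```python
import numpy as np

def downsample(track, src_hz=25, dst_hz=5):
    """Keep every (src_hz // dst_hz)-th frame of a trajectory."""
    step = src_hz // dst_hz
    return track[::step]

track = np.arange(50).reshape(25, 2)  # one second of 25 Hz (x, y) samples
low = downsample(track)               # 5 frames remain
```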
To train the heterogeneous driver models, we had to classify the dataset. Drivers could
be divided into three categories according to style: aggressive, normal, and conservative.
We used the k-means algorithm to cluster all drivers into these categories on the basis of
the mean, variance, and maximal values of velocity and acceleration. Figure 2 visualizes
some of the features of each cluster. Aggressive drivers perform more lane changes and exhibit a wider range of longitudinal acceleration, while conservative drivers tend to keep their lane and change speed smoothly. Our models learn these properties separately.
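The clustering step can be sketched with scikit-learn's KMeans; the synthetic driver features below and the feature standardization are illustrative assumptions, not details from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def style_features(v, a):
    """Per-driver features: mean, variance, and maximum of speed and acceleration."""
    return [v.mean(), v.var(), np.abs(v).max(),
            a.mean(), a.var(), np.abs(a).max()]

rng = np.random.default_rng(0)
# Toy drivers: smooth/slow vs. fast/jerky longitudinal profiles.
feats = np.array(
    [style_features(rng.normal(25, 1, 100), rng.normal(0, 0.2, 100)) for _ in range(20)]
    + [style_features(rng.normal(38, 4, 100), rng.normal(0, 1.5, 100)) for _ in range(20)])

# Three clusters, loosely corresponding to aggressive/normal/conservative styles.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(feats))
```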
context vectors. When decoding, the context vector is concatenated with the selected maneuver, and a five-dimensional vector representing the parameters of the Gaussian endpoint distribution is output. An endpoint sampled from this distribution serves as the intention feature; together with the historical trajectory information, it is fed to the MLP to generate the acceleration of the next frame, which better reflects the randomness of behavior. We experimentally confirmed that the stochasticity of actions was well reflected.
The AM consists of a classic LSTM encoder–decoder [33] and a multilayer perceptron
(MLP) [34]. The encoder–decoder framework estimates the sampling space for the short-
term endpoint region. An MLP is a fully connected feedforward artificial neural network (ANN). The encoder is the same as that in the MM, extracting the displacement information and the positions relative to the eight surrounding vehicles.
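A minimal PyTorch sketch of this architecture follows; the layer widths, the six-way maneuver one-hot, and the way the correlation parameter is handled during sampling are our illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

class DriverModel(nn.Module):
    """Sketch: an LSTM encoder produces a context vector; concatenated with the
    maneuver one-hot, a decoder emits a 5-parameter Gaussian endpoint; an MLP
    maps a sampled endpoint plus the context to the next-frame acceleration."""
    def __init__(self, hidden=128, n_maneuvers=6):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.decoder = nn.LSTM(input_size=hidden + n_maneuvers,
                               hidden_size=hidden, batch_first=True)
        self.endpoint_head = nn.Linear(hidden, 5)  # mu_x, mu_y, sigma_x, sigma_y, rho
        self.accel_mlp = nn.Sequential(
            nn.Linear(2 + hidden, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, history, maneuver):
        _, (h, _) = self.encoder(history)            # context from past positions
        ctx = h[-1]
        dec_in = torch.cat([ctx, maneuver], dim=-1).unsqueeze(1)
        out, _ = self.decoder(dec_in)
        params = self.endpoint_head(out[:, -1])
        mu, raw_sigma = params[:, :2], params[:, 2:4]
        rho = torch.tanh(params[:, 4:])
        sigma = torch.exp(raw_sigma)                 # keep std-devs positive
        # Sample an endpoint as the intention feature (correlation term
        # omitted in this sketch for simplicity).
        endpoint = mu + sigma * torch.randn_like(sigma)
        accel = self.accel_mlp(torch.cat([endpoint, ctx], dim=-1))
        return accel, (mu, sigma, rho)

model = DriverModel()
hist = torch.randn(4, 10, 2)              # batch of 4, 10 past (x, y) frames
man = torch.zeros(4, 6); man[:, 0] = 1.0  # one-hot maneuver
acc, (mu, sigma, rho) = model(hist, man)
```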
lₙₗₗ = 0.5 · (log(max(var, eps)) + (y − ŷ)² / max(var, eps)).  (4)
We used the mean squared error [36] for the sampled endpoint and the next-frame acceleration as follows:

lₘₛₑ = (1/n) Σᵢ (Yᵢ − Ŷᵢ)².  (5)
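The two losses in (4) and (5) can be written directly; `eps` guards against a vanishing variance, as in standard Gaussian-NLL implementations:

```python
import numpy as np

def gaussian_nll(y, y_hat, var, eps=1e-6):
    """Per-sample negative log-likelihood of y under N(y_hat, var), eq. (4)."""
    var = np.maximum(var, eps)
    return 0.5 * (np.log(var) + (y - y_hat) ** 2 / var)

def mse(y, y_hat):
    """Mean squared error over n samples, eq. (5)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.mean((y - y_hat) ** 2)
```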
Therefore, the overall objective function combined these loss terms.
We used LSTMs with 128 units and an MLP with 3 hidden layers. Heterogeneous models were obtained by training separately on each style. All models were trained using Adam with a learning rate of 0.001 and implemented in PyTorch. As a
where amax is the maximal acceleration, v is the ego car (EC) speed, ṽ is the EC’s desired
speed, β is the acceleration exponent, s is the relative distance between EC and front car
(FC), s̃ is the desired relative distance as defined in (8), s0 is the minimal gap at standstill, T
is the desired time headway, ∆v is the speed difference between EC and FC, and b is the
comfortable deceleration.
v̇ is the desired acceleration of the vehicle. In this equation, the second term in parentheses measures the gap between the current and desired speeds to drive vehicle acceleration, and the third term measures the gap between the actual and desired distances to drive vehicle braking. The desired vehicle distance is defined as follows:
s̃ = s0 + max(0, vT + v∆v / (2√(amax b)))  (8)
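The IDM acceleration rule described by these parameters, with the desired gap s̃ from (8), can be sketched as follows; the numeric parameter values are common IDM defaults, not necessarily those used in the paper:

```python
import math

def idm_accel(v, v_front, s, v_des=33.3, a_max=2.0, b=3.0, beta=4, s0=2.0, T=1.5):
    """IDM acceleration for the ego car given gap s and front-car speed v_front."""
    dv = v - v_front                                  # closing speed
    # Desired gap, eq. (8).
    s_des = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v_des) ** beta - (s_des / s) ** 2)

# Ego far behind a same-speed leader: accelerates toward the desired speed.
a_free = idm_accel(v=20.0, v_front=20.0, s=200.0)
# Ego closing fast on a slow leader at short range: brakes.
a_brake = idm_accel(v=30.0, v_front=10.0, s=15.0)
```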
srss = vρ + (1/2)amax ρ² + (v + ρamax)² / (2amin,brake) − vFC² / (2amax),  (9)
where srss is the safety distance, v is the EC speed, vFC is the FC speed, ρ is the response time, and amin,brake is the minimal braking deceleration until standstill. Table 2 lists the RSS parameter settings.
sn = vρ + v² / (2abrake) − vFC² / (2amax).  (10)
The formula removes the unreasonable acceleration term during the reaction and
redefines the braking deceleration as follows:
abrake = amin,brake + (amax − amin,brake)(v / vmax)  (11)
To evaluate these two strategies, we embedded two safety distances into the IDM
model by substituting srss and sn for s̃.
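Both safety distances can then be computed and swapped into the IDM in place of s̃; the parameter values below are illustrative:

```python
def rss_distance(v, v_fc, rho=0.5, a_max=3.0, a_min_brake=4.0):
    """RSS longitudinal safety distance, eq. (9)."""
    return (v * rho + 0.5 * a_max * rho ** 2
            + (v + rho * a_max) ** 2 / (2 * a_min_brake)
            - v_fc ** 2 / (2 * a_max))

def negotiation_distance(v, v_fc, rho=0.5, a_max=3.0, a_min_brake=4.0, v_max=40.0):
    """Negotiation safety distance, eqs. (10)-(11): no acceleration during the
    reaction time and a speed-dependent braking deceleration."""
    a_brake = a_min_brake + (a_max - a_min_brake) * (v / v_max)
    return v * rho + v ** 2 / (2 * a_brake) - v_fc ** 2 / (2 * a_max)
```

With these parameters and a stationary front car, the negotiation distance comes out slightly smaller than the RSS distance, matching the claim that it is the less conservative of the two.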
TTC(t) = (xFC(t) − xEC(t) − L) / (vEC(t) − vFC(t)),  (12)

where L is the length of the car, xFC(t) is the position of FC, xEC(t) is the position of EC, vFC(t) is the velocity of FC, and vEC(t) is the velocity of EC.
TTC targets emergency situations in which the distance between vehicles is relatively small and the speed difference is large, such as the sudden braking of the vehicle in front; this is a dangerous and urgent situation.
To evaluate the driving strategies, we observed the changes in the following distance and safety indicators during tests in randomly selected car-following and cut-in scenarios. Because the safety distance grows during the test, it was more appropriate to use another safety indicator, time headway (THW) [41]. THW is defined as the time difference between EC and FC
passing the same place, and it was calculated by dividing the distance between the two
vehicles by EC speed.
THW(t) = |xFC(t) − xEC(t)| / vEC(t)  (13)
THW mainly raises an alarm when the distance between vehicles is small, and it can help drivers to develop the standardized habit of maintaining an adequate gap. We defined this as a dangerous but not urgent situation. We set the dangerous thresholds at a TTC of less than 5 s and a THW of less than 2 s [42,43].
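The two indicators and the thresholds above can be combined into a simple danger check; treating the two thresholds as alternatives (either one flags danger) is our reading of the text, and the vehicle length `L` and the numeric values are illustrative:

```python
def ttc(x_fc, x_ec, v_ec, v_fc, L=5.0):
    """Time to collision; defined only when the ego car closes on the front car."""
    if v_ec <= v_fc:
        return float("inf")
    return (x_fc - x_ec - L) / (v_ec - v_fc)

def thw(x_fc, x_ec, v_ec):
    """Time headway, eq. (13)."""
    return abs(x_fc - x_ec) / v_ec

def is_dangerous(x_fc, x_ec, v_ec, v_fc, ttc_thr=5.0, thw_thr=2.0):
    """Flag danger when either indicator falls below its threshold."""
    return ttc(x_fc, x_ec, v_ec, v_fc) < ttc_thr or thw(x_fc, x_ec, v_ec) < thw_thr

# Ego at 30 m/s, 20 m behind a front car doing 20 m/s: TTC = 1.5 s -> dangerous.
```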
Figure 6 exemplifies some generated scenarios for a simple situation. The generated trajectories were smooth, and the vehicles could change lanes at any possible moment. If a lane change is not completed due to time constraints, the test time can be extended as needed. Sampling
as much as possible enables coverage-oriented test automation. Furthermore, we could
achieve accelerated evaluation in two ways. One is the manual control of dangerous
maneuvers. For example, it is dangerous to change lanes directly at the beginning, as
shown in Figure 6. In Algorithm 1, the initial lateral maneuver could be set as a lane change.
The other is to use special sampling methods such as importance sampling methods when
sampling endpoints. Combining the two approaches can achieve spatially oriented test
automation. Our method is able to generate realistic and plausible scenarios.
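The second route, biased endpoint sampling, can be sketched as one importance-sampling step over the Gaussian endpoint distribution; the model's endpoint Gaussian and the lane-shifted proposal below are entirely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = np.array([30.0, 0.0]), np.array([5.0, 0.5])  # model's endpoint Gaussian
mu_q = np.array([30.0, 1.0])                             # proposal shifted laterally

n = 1000
samples = rng.normal(mu_q, sigma, size=(n, 2))           # draw from the proposal q

def log_pdf(x, m, s):
    """Log-density of an axis-aligned Gaussian at each row of x."""
    return -0.5 * np.sum(((x - m) / s) ** 2 + np.log(2 * np.pi * s ** 2), axis=1)

weights = np.exp(log_pdf(samples, mu, sigma) - log_pdf(samples, mu_q, sigma))
# Each sampled endpoint keeps weight p(x)/q(x), so statistics under the original
# distribution remain recoverable while lane-change endpoints appear more often.
```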
According to the experimental design, we tested an AV with the IDM model in 25
groups. Table 4 shows the percentages of challenging scenarios. Compared to the original
driver model, the heterogeneous driver models could change the number of challenge
scenarios in the scenario space. All SVs set to be aggressive bring more dangerous scenarios.
In some situations, conservative vehicles placed at the front of the traffic also increased the number of dangerous scenarios. The column-wise comparison shows that an increase in the number of vehicles is another reason for the increase in dangerous situations.
The results indicate that we could generate more dangerous and complex scenarios by
adjusting the location and number of aggressive drivers.
Table 4. Percentages of challenging scenarios in each group.

Initial Scenario | All SVs Aggressive | All SVs Conservative | Front SVs Aggressive | Back SVs Aggressive | Original Driver Model
1 | 7% | 0% | 0% | 5% | 0%
2 | 1% | 0% | 0% | 5% | 0%
3 | 11% | 0% | 1% | 6% | 1%
4 | 10% | 1% | 0% | 13% | 2%
5 | 9% | 1% | 0% | 6% | 2%
3.2. Verification
To demonstrate that the scenarios generated using our driver model are usable, we applied them to evaluate driving strategies. Given a car-following scenario with an initial THW of less than 2 s, the AV with the original IDM model remained continuously dangerous during the test owing to the close following distance. Figure 7 shows that, under both driving strategies, the THW converged from the dangerous to the safe range in car-following scenarios, but the convergence value under RSS was larger. This is attributable to the more reasonable safety distance of the negotiation strategy.
As shown in Figure 8, if a vehicle suddenly cuts in, both strategies could respond in time and brake at a safe distance. The convergence process of THW is similar to that in car following. During deceleration, the slope of the speed curve indicates that the negotiation strategy had a shorter deceleration time and smoother braking.
Table 5 summarizes the value ranges of THW and the safety distance. RSS can guarantee absolute safety, whereas the negotiation policy achieves higher traffic efficiency. This is consistent with the design goals of the two strategies and supports the validity of our driver model and method.
4. Conclusions
In this paper, we proposed a scenario generation method that considers driver heterogeneity. The method increases the number of challenging events in the scenario space by changing the driver-model style of the environmental vehicles. Our model quantifies
different drivers’ preferences by learning the probability of their behavior. Simulations were
implemented in multiple initialization scenarios to demonstrate the role of heterogeneity.
The results show that adjusting the number and location of aggressive drivers could lead to more dangerous scenarios and thus improve the efficiency of testing, while the method ensures realism and diversity. Then, we used the generated scenarios to evaluate a conservative strategy and a negotiation strategy. The evaluation results show that the conservative strategy was safer and that the negotiation strategy was more efficient, which verified the effectiveness
of our approach. The choice of driving strategy depends on the trade-off between safety
and efficiency. Cumulatively, our approach could accelerate the testing of AVs. In future work, we could delineate more fine-grained driver styles or consider heterogeneity from other perspectives; as driver models become more diverse, the generated scenarios will become more complex and more dangerous.
Author Contributions: Conceptualization, L.G. and R.Z.; methodology, L.G.; validation, L.G. and
R.Z.; investigation, L.G.; writing—original draft preparation, L.G.; writing—review and editing, L.G.,
R.Z. and K.Z. All authors have read and agreed to the published version of the manuscript.
Funding: This research was supported by the Key-Area Research and Development Program of
Guangdong Province (2020B0909050003), and the Science and Technology Innovation Committee of
Shenzhen (CJGJZD20200617102801005).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The original contributions presented in the study are included in the article, and further inquiries can be directed to the corresponding author.
Acknowledgments: The authors would like to thank the reviewers and editors for improving this manuscript, the Key-Area Research and Development Program of Guangdong Province (2020B0909050003), and the Science and Technology Innovation Committee of Shenzhen (CJGJZD20200617102801005).
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Liu, S.; Capretz, L.F. An analysis of testing scenarios for automated driving systems. In Proceedings of the 2021 IEEE International
Conference on Software Analysis, Evolution and Reengineering (SANER), Honolulu, HI, USA, 9–12 March 2021; pp. 622–629.
2. Li, L.; Huang, W.-L.; Liu, Y.; Zheng, N.-N.; Wang, F.-Y. Intelligence testing for autonomous vehicles: A new approach. IEEE Trans.
Intell. Veh. 2016, 1, 158–166. [CrossRef]
3. Li, L.; Wang, X.; Wang, K.; Lin, Y.; Xin, J.; Chen, L.; Xu, L.; Tian, B.; Ai, Y.; Wang, J.; et al. Parallel testing of vehicle intelligence via
virtual-real interaction. Sci. Robot. 2019, 4, eaaw4106. [CrossRef]
4. Ma, Y.; Sun, C.; Chen, J.; Cao, D.; Xiong, L. Verification and validation methods for decision-making and planning of automated
vehicles: A review. IEEE Trans. Intell. Veh. 2022, 7, 480–498. [CrossRef]
5. Wang, F.-Y.; Song, R.; Zhou, R.; Wang, X.; Chen, L.; Li, L.; Zeng, L.; Zhou, J.; Teng, S.; Zhu, X. Verification and validation of
intelligent vehicles: Objectives and efforts from china. IEEE Trans. Intell. Veh. 2022, 7, 164–169. [CrossRef]
6. Zhou, R.; Liu, Y.; Zhang, K.; Yang, O. Genetic algorithm-based challenging scenarios generation for autonomous vehicle testing.
IEEE J. Radio Freq. Identif. 2022, 6, 928–933. [CrossRef]
7. Li, L.; Zheng, N.; Wang, F.-Y. A theoretical foundation of intelligence testing and its application for intelligent vehicles. IEEE
Trans. Intell. Transp. Syst. 2020, 22, 6297–6306. [CrossRef]
8. Zhao, D.; Lam, H.; Peng, H.; Bao, S.; LeBlanc, D.J.; Nobukawa, K.; Pan, C.S. Accelerated evaluation of automated vehicles safety
in lane-change scenarios based on importance sampling techniques. IEEE Trans. Intell. Transp. Syst. 2016, 18, 595–607. [CrossRef]
[PubMed]
9. Huang, Z.; Lam, H.; LeBlanc, D.J.; Zhao, D. Accelerated evaluation of automated vehicles using piecewise mixture models. IEEE
Trans. Intell. Transp. Syst. 2017, 19, 2845–2855. [CrossRef]
10. Althoff, M.; Lutz, S. Automatic generation of safety-critical test scenarios for collision avoidance of road vehicles. In Proceedings
of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1326–1333.
11. Sun, J.; Zhang, H.; Zhou, H.; Yu, R.; Tian, Y. Scenario-based test automation for highly automated vehicles: A review and paving
the way for systematic safety assurance. IEEE Trans. Intell. Transp. Syst. 2021, 23, 14088–14103. [CrossRef]
12. Zhang, X.; Li, F.; Wu, X. CSG: Critical scenario generation from real traffic accidents. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1330–1336.
13. Tuncali, C.E.; Fainekos, G.; Ito, H.; Kapinski, J. Sim-ATAV: Simulation-based adversarial testing framework for autonomous
vehicles. In Proceedings of the 21st International Conference on Hybrid Systems: Computation and Control (Part of CPS Week),
Porto, Portugal, 11–13 April 2018; pp. 283–284.
14. Najm, W.G.; Toma, S.; Brewer, J. Depiction of Priority Light-Vehicle Pre-Crash Scenarios for Safety Applications Based on Vehicle-to-Vehicle
Communications; Tech Report; National Highway Traffic Safety Administration: Washington, DC, USA, 2013.
15. Antić, B.; Čabarkapa, M.; Čubranić-Dobrodolac, M.; Čičević, S. The Influence of Aggressive Driving Behavior and Impulsiveness
on Traffic Accidents. 2018. Available online: https://rosap.ntl.bts.gov/view/dot/36298 (accessed on 5 April 2023).
16. Bıçaksız, P.; Özkan, T. Impulsivity and driver behaviors, offences and accident involvement: A systematic review. Transp. Res. Part F Traffic Psychol. Behav. 2016, 38, 194–223. [CrossRef]
17. Berdoulat, E.; Vavassori, D.; Sastre, M.T.M. Driving anger, emotional and instrumental aggressiveness, and impulsiveness in the
prediction of aggressive and transgressive driving. Accid. Anal. Prev. 2013, 50, 758–767. [CrossRef]
18. Ge, J.; Xu, H.; Zhang, J.; Zhang, Y.; Yao, D.; Li, L. Heterogeneous driver modeling and corner scenarios sampling for automated
vehicles testing. J. Adv. Transp. 2022, 2022, 8655514. [CrossRef]
19. Arslan, G.; Marden, J.R.; Shamma, J.S. Autonomous vehicle-target assignment: A game-theoretical formulation. J. Dyn. Syst.
Meas. Control. Trans. ASME 2007, 129, 584–596. [CrossRef]
20. Bhattacharyya, R.; Wulfe, B.; Phillips, D.J.; Kuefler, A.; Morton, J.; Senanayake, R.; Kochenderfer, M.J. Modeling human driving
behavior through generative adversarial imitation learning. IEEE Trans. Intell. Transp. Syst. 2022, 24, 2874–2887. [CrossRef]
21. Aksjonov, A.; Nedoma, P.; Vodovozov, V.; Petlenkov, E.; Herrmann, M. A novel driver performance model based on machine
learning. IFAC-PapersOnLine 2018, 51, 267–272. [CrossRef]
22. Xing, Y.; Lv, C.; Cao, D. Personalized vehicle trajectory prediction based on joint time-series modeling for connected vehicles.
IEEE Trans. Veh. Technol. 2020, 69, 1341–1352. [CrossRef]
23. Zhao, H.; Gao, J.; Lan, T.; Sun, C.; Sapp, B.; Varadarajan, B.; Shen, Y.; Shen, Y.; Schmid, C.; Li, C.; et al. TNT: Target-driven trajectory
prediction. In Proceedings of the 5th Annual Conference on Robot Learning, London, UK, 8–11 November 2021; pp. 895–904.
24. Tian, W.; Wang, S.; Wang, Z.; Wu, M.; Zhou, S.; Bi, X. Multi-modal vehicle trajectory prediction by collaborative learning of lane
orientation, vehicle interaction, and intention. Sensors 2022, 22, 4295. [CrossRef] [PubMed]
25. Gupta, A.; Johnson, J.; Fei-Fei, L.; Savarese, S.; Alahi, A. Social GAN: Socially acceptable trajectories with generative adversarial
networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23
June 2018; pp. 2255–2264.
26. Fang, L.; Jiang, Q.; Shi, J.; Zhou, B. TPNet: Trajectory proposal network for motion prediction. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6797–6806.
27. Zhang, Y.; Sun, H.; Zhou, J.; Pan, J.; Hu, J.; Miao, J. Optimal vehicle path planning using quadratic optimization for Baidu Apollo open platform. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 978–984.
28. Deo, N.; Trivedi, M.M. Convolutional social pooling for vehicle trajectory prediction. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1468–1476.
29. Alahi, A.; Goel, K.; Ramanathan, V.; Robicquet, A.; Fei-Fei, L.; Savarese, S. Social LSTM: Human trajectory prediction in crowded
spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June
2016; pp. 961–971.
30. Krajewski, R.; Bock, J.; Kloeker, L.; Eckstein, L. The highD dataset: A drone dataset of naturalistic vehicle trajectories on German highways for validation of highly automated driving systems. In Proceedings of the 2018 21st International Conference on
Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2118–2125.
31. Bellmund, J.L.; Gärdenfors, P.; Moser, E.I.; Doeller, C.F. Navigating cognition: Spatial codes for human thinking. Science 2018, 362,
eaat6766. [CrossRef]
32. Deo, N.; Trivedi, M.M. Multi-modal trajectory prediction of surrounding vehicles with maneuver based lstms. In Proceedings of
the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1179–1184.
33. Malhotra, P.; Ramakrishnan, A.; Anand, G.; Vig, L.; Agarwal, P.; Shroff, G. LSTM-based encoder–decoder for multi-sensor anomaly
detection. arXiv 2016, arXiv:1607.00148.
34. Gardner, M.W.; Dorling, S. Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric
sciences. Atmos. Environ. 1998, 32, 2627–2636. [CrossRef]
35. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International
Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
36. Mehta, P.; Bukov, M.; Wang, C.-H.; Day, A.G.; Richardson, C.; Fisher, C.K.; Schwab, D.J. A high-bias, low-variance introduction to
machine learning for physicists. Phys. Rep. 2019, 810, 1–124. [CrossRef] [PubMed]
37. Treiber, M.; Hennecke, A.; Helbing, D. Congested traffic states in empirical observations and microscopic simulations. Phys. Rev.
E 2000, 62, 1805. [CrossRef] [PubMed]
38. Shalev-Shwartz, S.; Shammah, S.; Shashua, A. On a formal model of safe and scalable self-driving cars. arXiv 2017,
arXiv:1708.06374.
39. Zhao, C.; Li, L.; Pei, X.; Li, Z.; Wang, F.-Y.; Wu, X. A comparative study of state-of-the-art driving strategies for autonomous vehicles. Accid. Anal. Prev. 2021, 150, 105937. [CrossRef] [PubMed]
40. Minderhoud, M.M.; Bovy, P.H. Extended time-to-collision measures for road traffic safety assessment. Accid. Anal. Prev. 2001, 33,
89–97. [CrossRef]
41. Hayward, J.C. Near miss determination through use of a scale of danger. Highw. Res. Rec. 1972, 384, 24–34.
42. Borsos, A.; Farah, H.; Laureshyn, A.; Hagenzieker, M. Are collision and crossing course surrogate safety indicators transferable? a
probability based approach using extreme value theory. Accid. Anal. Prev. 2020, 143, 105517. [CrossRef]
43. Shoaeb, A.; El-Badawy, S.; Shawly, S.; Shahdah, U.E. Time headway distributions for two-lane two-way roads: Case study from Dakahliya Governorate, Egypt. Innov. Infrastruct. Solut. 2021, 6, 1–18. [CrossRef]
44. Dendorfer, P.; Osep, A.; Leal-Taixé, L. Goal-GAN: Multimodal trajectory prediction based on goal position estimation. In
Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.