Mind The Gaps. Logical English, Prolog, and Multi-Agent Systems For Autonomous Vehicles
Galileo Sartor
Department of Computer Science, Swansea University, United Kingdom
[email protected]
Adam Wyner
Department of Computer Science, Swansea University, United Kingdom
[email protected]
Giuseppe Contissa
Department of Legal Studies, University of Bologna, Italy
[email protected]
In this paper, we present a modular system for representing and reasoning with legal aspects of traffic
rules for autonomous vehicles. We focus on a subset of the United Kingdom’s Highway Code (HC)
related to junctions. As human drivers and automated vehicles (AVs) will interact on the roads,
especially in urban environments, we claim that an accessible, unitary, high-level computational
model should exist and be applicable to both users. Autonomous vehicles introduce a shift in liability
that should not bring disadvantages or increased burden on human drivers. We develop a system “in
silico” of the model. The proposed system is built of three main components: a natural language
interface, using Logical English, which encodes the rules; an internal representation of the rules in
Prolog; and a multi-agent-based simulation environment, built in NetLogo. The three components
interact: Logical English is translated into and out of Prolog (along with some support code); Prolog
and NetLogo interface via predicates. Such a modular approach enables the different components to
carry different “burdens” in the overall system; it also allows swapping of modules. Given NetLogo,
we can visualize the effect of the modeled rules as well as validate the system with a simple dynamic
running scenario. Designated agents monitor the behaviour of the vehicles for compliance and record
potential violations where they occur. The information on potential violations is then utilized by
Validators, to determine whether the violation is punishable, differentiating between exceptions and
cases.
1 Introduction
Research in autonomous vehicles is progressing at a steady pace. In a possible future, AVs and human
agents will share a common space, such as city streets, where the AVs will have to interact with other
road users in a predictable and understandable way.
Among the problems that need to be addressed specifically for this shared scenario is that of making
the behaviour of AVs conform to the traffic laws [14], in such a way as not to increase the burden on the
other road users.
For this reason we propose and present the development of rules from the UK Highway Code (traffic
rules) that are modelled for both human and autonomous drivers.
The hypothesis we put forward is that with a combination of logic modelling and natural language we
can obtain a representation of norms that can be directly used by both humans and autonomous agents,
thus simplifying the inclusion of AVs on mixed roads.
The system we present does not make use of machine learning (e.g., image recognition) as input to
the rule base. Rather, we assume that the readings of the vehicle sensors are correct and fed into the rule
base, so that the vehicle can reason on the input and determine which action to take accordingly. In future
work, we might consider an integration between the rule base and machine learning.
The system is designed to simulate multiple agents on the road, both human and autonomous, and to
detect violations in their behaviour. The violations are recorded by specific agents (monitors). The logs,
together with the information coming from the offending vehicle, can be used at a later stage to perform
legal reasoning on the violation and assign penalties when necessary.
The paper presents the background, outlining the main goals and issues, in Section 2, with particular
attention to the issue of liability in Section 3, and to existing projects and papers that explore the same or
a similar space, outlined in Section 4. The modular structure of the proposed system is then presented in
Section 5, followed by the methodology for the design of the different components in Section 6, with examples
of the code. The legal issues analyzed in the paper are further explored in the context of the proposed system
in Section 7, specifically looking at how such a system could reason about the violations that occur, taking
into account both what the monitors logged and the reasoning the vehicles performed at the time of the
violation. That section also looks at what it means to violate a rule in the context of driving, what types of
violations could occur, and their differing legal outcomes. We summarise and discuss future work in Section 8.
2 Background
Autonomous vehicles are going to share the road with human drivers and other non-autonomous agents.
It is therefore crucial to ensure the behaviour of AVs is consistent with that of a good driver, i.e. a human
driver who follows the rules in a predictable manner.
This is even more important if we consider that, if well developed, these vehicles could be aware of
their decision making, and could provide understandable explanations of the reasoning behind a certain
action. As we will cover in the following sections, we consider this to be a crucial point, both to reduce the
burden on human agents who interact with the vehicle and to reason about violations and reparations.
In developing the system, we should also consider that, as humans, we make decisions based in part
on the possibility of incurring violations and fines. There may be multiple reasons for this, and
some may be legally valid, such as in the case of rules with exceptions, as will be described later. In the
case of AVs, however, the general behavioural rules have to be determined at build time, given that there
is no human agent involved in making decisions while driving. This question is made more complicated
by the issue of liability.
3 Liability
While autonomous vehicles are expected to drive more consistently with respect to the law and reduce
accidents, violations of the law and accidents may still happen in certain situations. The question that
arises is who should be held responsible for such violations and accidents - the driver, manufacturer, or
the algorithm developer? Such issues are relevant for the general context of development of autonomous
vehicles and guide how a system might be modeled. We develop issues below and specifically tie the
concept of “lawful reasonable agent” to our implementation.
At the highest levels of automation, that is levels 4 and 5, according to the SAE definition [16], the
autonomous vehicle takes full control of all dynamic driving tasks. In these two levels of automation,
the user is not expected to intervene when the automated driving system is on. Therefore, the user's liability
is to be considered only when the ODD (operational design domain) limits are exceeded (namely, the limits
within which the driving automation system is designed to operate), or when users request the system to
disengage the automated driving system.1
It should be noted that vehicles at levels 4 and 5 may even be designed to be operated exclusively
by technology, that is, without user interfaces such as braking, accelerating, steering, etc. Therefore, such
categories of vehicles do not contemplate the role of the human driver at all. Whenever user interfaces
and controls are missing, the user will not be able to intervene in any of the dynamic driving tasks, and
therefore cannot be considered in any way responsible for driving and subject to the related liabilities
[7].
Therefore, under levels 4 and 5, the burden of liability is mostly on manufacturers and/or program-
mers. They would be liable (1) when providing a defective or non-standards-compliant tool that had a role
in the causation of the accident, and (2) whenever the system fails to carry out the assigned task with a
level of performance that is (at least) comparable to that reached by a human adopting due care under the
same conditions.
This is linked to the idea that there would be a reasonable expectation that the autonomous vehicle
performs an assigned task in a way that ensures the same level of safety that would be expected from a
human performing the same task. Thus, it would seem appropriate to compare the autonomous vehicle’s
behaviour in carrying out an assigned task, with the behaviour that would be otherwise expected by the
human driver [13].
In this perspective, the concept of negligence would be central in evaluating the behaviour of the
autonomous vehicle and assessing liabilities for manufacturers and programmers.
Negligence is ‘the omission to do something which a reasonable man, guided upon those consid-
erations which ordinarily regulate human affairs, would do, or doing something which a prudent and
reasonable man would not do’ (Blythe v Birmingham Waterworks (1856) 11 Exch 781, at p 784).
The tort of negligence usually requires the following elements: the existence of an injury, harm or
damage; that the injurer owes a duty of care to the victim; that the injurer has broken this duty of care
(fault); that the damage (or injury) is a reasonably relevant consequence of the injurer’s carelessness
(causal connection) [5]. In the legal discourse, “negligence” denotes carelessness, neglect, or inattention,
which are mental stances that can be ascribed only to human minds.
A driver of the vehicle has an asymmetrical duty of care toward pedestrians or other individuals in the
vicinity. Rules 170-183 of the Highway Code in part characterise how to meet this duty of care; broadly speaking, the
driver should proceed defensively and cautiously, anticipating behaviours of others which might create
circumstances in which the likelihood of an accident increases.
Clearly, the concept of negligence is linked to the idea of a human fault. In contrast, liability for tech-
nological failures is usually evaluated and allocated on the grounds of product liability, which requires
evidence of the following: an injury, harm or damage; a defective technology; and a causal connection
between the damage (or injury) and the defect, namely that the former must be a reasonably relevant con-
sequence of the latter. A technology may be considered defective if there is evidence of a design defect,
a manufacturing defect, or a warning defect. Design defect, where the design is unreasonably unsafe,
is the most relevant, and is usually determined by courts taking into account one of the following tests:
the state of the art, the evidence of alternative design, or the reasonable expectations of users/consumers
with regard to the function of the technology [17]. Key for our purposes is that negligence and product
1 This is consistent with the 2022 update to the Highway Code, which states: “While a self-driving vehicle is driving itself in a
valid situation, you are not responsible for how it drives. You may turn your attention away from the road and you may also
view content through the vehicle’s built-in infotainment apparatus, if available.”
5 Structure
The system presented is split into different, mostly independent modules, each of which deals with one of
the requirements and interacts with the others through minimal translation layers.2
The controlled natural language (CNL) module is written in Logical English [12], syntactic sugar
over Prolog that makes it possible to write rules and interact with the system in natural language. Using Logical
English we can represent logic rules in natural language that can be directly queried with a Prolog
interpreter, or translated to Prolog for use by the autonomous vehicle.
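To make this concrete, a minimal sketch (not the paper's actual rulebase, with illustrative predicate names) of
the Prolog form that a Logical English sentence such as "a vehicle can enter the junction if the vehicle has a
green light" might take after translation is:

% Hypothetical Prolog counterpart of an LE rule; predicate names are
% illustrative assumptions, not the generated code.
% LE: "a vehicle can enter the junction if the vehicle has a green light."
can_enter_junction(Vehicle) :-
    has_light(Vehicle, green).

% Example fact, so the clause can be queried directly:
has_light(car_1, green).

% ?- can_enter_junction(car_1).   % succeeds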
The logic rules module is written in Prolog and is mostly derived from the Logical English representation.
The autonomous vehicle can reason with a Prolog interpreter and use the result in determining
its driving behaviour. The Prolog output can be logged or converted back into natural language, saved in
a human-readable format, and used to check instances after the fact (scenarios and queries). This
could be used in case of accidents or violations to determine why the vehicle took certain actions.
The simulation module uses NetLogo, a multi-agent programmable modeling environment, where
vehicles with different properties are spawned and can move around on a predefined road grid. In addition
to the basic movement, the vehicles can query the LE/Prolog rulebase to determine whether they are
allowed to perform a certain action or, conversely, whether they are prohibited from doing so.
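On the Prolog side, such a check could reduce to refreshing the facts that mirror the vehicle's sensor readings
and then posing a single query; the cycle below is a sketch under assumed predicate names, not the actual
interface between NetLogo and the rulebase:

% Sketch of the query cycle the simulation could drive through the
% Prolog bridge (assumed names, not the system's actual API).
:- dynamic has_light/2.

% 1. Refresh the facts mirroring the vehicle's current sensor readings.
update_readings(Vehicle, LightColour) :-
    retractall(has_light(Vehicle, _)),
    assertz(has_light(Vehicle, LightColour)).

% 2. Ask whether the intended action is permitted in the current state.
permitted(Vehicle, enter_junction) :-
    has_light(Vehicle, green).

% ?- update_readings(car_1, green), permitted(car_1, enter_junction).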
The following section makes clear the division of labour that was chosen between the different components
of the system.
6 Methodology
The development of the system started with the representation of norms in Logical English [Cite Mind
the Gap]. Given the need for a simulation system, and the availability of different potential candidates,
the Gap]. Given the need for a simulation system, and the availability of different potential candidates,
the idea was to keep the system modular, with the possibility of swapping different components. The current
simulation uses NetLogo, but there is limited overlap between the components, mainly what is needed to
convert data and I/O. The rules themselves can still be queried via LE/Prolog, and combined with other
models. The NetLogo simulation is derived from one of the examples made available in NetLogo3 , and
is then expanded through the use of a bridge to Prolog4 that had previously been developed, and has
been updated for the purpose of this project. Vehicles in NetLogo are assigned different properties, and
2 The full code of the system is available at https://github.com/LyzardKing/mind-the-gap/tree/ICLP2024
3 https://ccl.northwestern.edu/netlogo/models/TrafficIntersection
4 https://github.com/LyzardKing/NetProLogo
are spawned at one of the edges of the screen. They follow the road and decide randomly whether to turn at an
intersection. At the moment the system is not responsible for route planning, although this might be added in the
future. There are multiple types of vehicles, each with particular properties. Some
of the properties are reflected in Prolog, while others are limited to the NetLogo environment. Most
properties and data come from sensors in the vehicle that perceive the surrounding environment, as
well as from properties inherent to the vehicle itself. The vehicles can see their surroundings and avoid other
objects/agents without accessing the legal norms. If we disable the Prolog section, the vehicles can still
drive, acting more like a swarm (Cite). This behaviour may be very efficient on roads used only by
autonomous vehicles, where the vehicles can communicate and organize their actions accordingly. This
is not the case on mixed roads, with both human and autonomous agents, pedestrians, bikes, emergency
vehicles, and other potentially unpredictable agents. Human drivers, while sharing the road with AVs,
will have to be able to trust and understand the actions of the surrounding vehicles, while not necessarily
knowing (or caring) whether they are human or autonomous. In these circumstances, AVs will have to respect
the same rules as their human counterparts, even if the resulting actions are less optimized.
all the steps that have been used to derive a certain conclusion (trace). We are interested, however, in a
more dynamic use of the rulebase, and for that we can rely on the previously mentioned agent simulation.
Environment The simulated environment is very simple, consisting of three roads with two intersec-
tions. One of the intersections has a traffic light, while the other has stop signs. The intersections are
spawned with their specific properties, and agents are generated independently starting from random
road sections.
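For concreteness, the environment could be mirrored on the Prolog side by a handful of facts; the encoding
below is a sketch with assumed predicate names, not the project's actual representation:

% Illustrative facts describing the simulated environment (assumed
% names): three roads and two intersections, one controlled by a
% traffic light and the other by stop signs.
road(road_1).
road(road_2).
road(road_3).
intersection(junction_1).
intersection(junction_2).
controlled_by(junction_1, traffic_light).
controlled_by(junction_2, stop_sign).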
6.2.1 Agents
In the simulation there are different agents, with different properties and goals:
Vehicles Vehicles are divided into two categories, human-driven and autonomous, and into two types,
cars and emergency vehicles (ambulances). This is because rules may apply differently to different vehicle
types.
The different types of vehicles can be expanded, and they share certain rules. In particular, the main
difference between ambulances and cars is that ambulances may violate certain rules of the HC,
such as crossing on a red light, so long as doing so does not directly cause an accident. To check this, the vehicle
uses the information about its surroundings to determine whether there is another vehicle close enough.
The cars are split into human- and machine-driven, with the main difference for now being the introduction
of a number of delays and variations in the human behaviour. For example, a human driver may decide
to go faster than the speed limit.
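The actual rules of the road are given in Listing 1, which is not reproduced in this excerpt; as a rough sketch
under assumed predicate names, the car/ambulance distinction at a red light could be captured along the
following lines:

% Sketch of the car/ambulance distinction at a red light (assumed
% names; the paper's actual rules are in Listing 1).
:- dynamic has_light/2, is_of_type/2, vehicle_close_by/1.

% Any vehicle may enter the junction on a green light.
can_enter_junction(Vehicle) :-
    has_light(Vehicle, green).

% An ambulance may also enter on a red light, provided no other
% vehicle is close enough for that to risk an accident.
can_enter_junction(Vehicle) :-
    has_light(Vehicle, red),
    is_of_type(Vehicle, ambulance),
    \+ vehicle_close_by(Vehicle).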
Pedestrians In the current simulation the pedestrians have a very basic behaviour, simply crossing when
they reach a road. The only thing pedestrians check is whether a car is immediately approaching.
This behaviour will be expanded upon in future development.
Monitors Monitors are the final agents that are active in the simulation. They focus on one section of
road to see if they detect any vehicles which may have violated a traffic rule. The monitors have a narrow
scope of vision and only access visible properties in the environment, e.g., cameras that recognize the
license plate, speed and position of the vehicles; they only react with respect to that information. As
such, monitors are purely reactive, rather than interactive. Moreover, they do not do any legal reasoning
per se, which is why we only say they identify whether a vehicle within their scope of coverage may have
violated a traffic rule.
The rules that pertain to the monitors are modelled as in Listing 2. In the simulation, the monitor
has vision of the traffic light, the vehicle position, and speed. When a vehicle enters the cone of vision
of the monitor, the monitor gathers information about the traffic light and the vehicle speed; the monitor
can detect whether the vehicle is moving or not, passing the predicate “vehicle is stopped” and the traffic
light colour to Prolog rules. The Prolog rules used by the monitors then determine if a potential violation
occurred, i.e., if the vehicle is not stopped and the light is red.
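Listing 2 itself is not reproduced in this excerpt; a minimal sketch of the kind of monitor-side rule described
above, with assumed predicate names, is:

% Sketch of a monitor rule (assumed names; see Listing 2 in the paper
% for the actual encoding): a potential violation is recorded when the
% observed vehicle is not stopped while its traffic light is red.
:- dynamic observed_light/2, vehicle_is_stopped/1.

potential_violation(Vehicle, red_light) :-
    observed_light(Vehicle, red),
    \+ vehicle_is_stopped(Vehicle).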
As discussed later, information on potential violations is passed to a validator, which may have
additional information about the properties of a vehicle which can be taken (or not) to mitigate against
issuance of a reparation. We say that if there is a potential violation and no mitigating circumstances,
then there is a punishable violation, which leads to a reparation.
7.1 Design
Figure 2 is a graphic outline of the flow of information and reasoning. We start with Vehicle Scenario
which is the state of the world within the scope of vision of the vehicle; it is the context in which the
vehicle would execute an action (Vehicle Action). The Monitor is a reactive agent which is in charge of
detecting a violation within its scope of vision, which is the Monitor Scenario; it stands in for cameras
or the police. As a reactive agent, it records a Potential Violation, which remains to be validated with
respect to the laws as indicated below. The Validator Scenario is a hypothetical state of the world, one
in which the Vehicle Scenario has been modified were the goal of the Vehicle Action to be attained. The
Validator Scenario is used by the Validator to scope consideration of the Lawful Actions, which are those
actions which are compliant with the laws in that Validator Scenario; in effect, we are given all those
actions which, were they executed in the given Validator Scenario, would be lawful. The Validator is
triggered by an instance of a Potential Violation; it is used to evaluate whether the Potential Violation
is indeed illegal or whether there might be mitigating circumstances. To move to this next step (VA in
LA wrt PV), we consider whether the action that the vehicle executes (Vehicle Action) is amongst the
Lawful Actions relative to the relevant Potential Violation, that is, whether the action has been caught by
a monitor for a possible legal violation. If it is, then the violation is a Mitigated Violation; if not, then it
is a Punishable Violation, which could require a penalty payment.
For example, in Figures 3 and 4, we have vehicles which enter the intersection against the red light.
This introduces a Potential Violation in that whether the vehicle is penalised depends on whether or not it
has mitigating circumstances relative to the law. An ordinary vehicle would raise a Punishable Violation
in Figure 3, from which we would infer a correlated reparation (not shown). However, an ambulance
would raise a Mitigated Violation, as in Figure 4, since as an ambulance it has a legitimate reason not to
abide by the rule; consequently, no reparation can be inferred.
7.2 Implementation
Here we outline the implementation for each component of the design.
Scenario and Vehicle Action Two possible Scenarios are given in Listings 3 and 4, both of which contain the goal of
entering the junction. The vehicle would execute an action (Vehicle Action), applying the rules in Listing
1 to the Scenario, which provides rules for both a car and an ambulance. Note that an ambulance does
not need to abide by red lights, while a normal vehicle does.
scenario:
    173 is of type car.
    173 has red light.
goal:
    173 can enter the junction.

Listing 3: “Normal vehicle Scenario”

scenario:
    253 is of type ambulance.
    253 has red light.
goal:
    253 can enter the junction.

Listing 4: “Ambulance Scenario”
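Read as Prolog (a sketch with assumed predicate names, not the code actually generated from Logical
English), the two scenarios amount to a small set of facts, against which the goal is posed as a query:

% Illustrative Prolog counterpart of Listings 3 and 4 (assumed names).
is_of_type(173, car).
has_light(173, red).
is_of_type(253, ambulance).
has_light(253, red).

% Goals, one per scenario:
% ?- can_enter_junction(173).   % expected to fail: red light, ordinary car
% ?- can_enter_junction(253).   % expected to succeed: ambulance exception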
Monitor The monitor can detect a violation, as discussed in relation to Listing 2, and records it.
Validator Scenario is a hypothetical state of the world, one in which the Vehicle Scenario has possibly
been modified where the goal would be realised by the Vehicle Action. In this instance, since the Potential
Violation is related to the goal that the vehicles had in the Scenario and Vehicle Action above, the
Validator Scenario is equivalent to the Vehicle Scenario. These need not be equivalent; for example, if
the vehicle were caught speeding, though its goal were entering the junction.
Validator The Validator uses the rules in Listing 5 together with the Rules of the Road in Listing 1 to determine
the possible lawful actions, and compares them to the action which gives rise to the Potential Violation.
Comparing the Vehicle Actions, Legal Actions, and Potential Violations Listing 5 uses infor-
mation about the recorded Potential Violation and whether the vehicle can execute the action (Listing 1).
Where the vehicle can cross the red light and it is an ambulance, there are mitigating circumstances, so a
violation is mitigated; where the vehicle cannot cross the red light, the violation is punishable.
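Listing 5 is likewise not reproduced here; a hedged sketch of the validator's distinction, with assumed
predicate names and reusing the rules of the earlier sketches, could read:

% Sketch of validator rules (assumed names; the actual rules are in
% Listing 5): a recorded potential violation is mitigated if the
% vehicle's action was nevertheless lawful for it, punishable otherwise.
mitigated_violation(Vehicle, red_light) :-
    potential_violation(Vehicle, red_light),
    can_enter_junction(Vehicle).

punishable_violation(Vehicle, red_light) :-
    potential_violation(Vehicle, red_light),
    \+ can_enter_junction(Vehicle).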
Acknowledgements
The authors wish to thank Prof. Bob Kowalski, for his active leadership of the Logical English project,
and for his encouragement to explore its applications to many different domains. We also thank Prof.
Giovanni Sartor for supporting this work in the context of the H2020 ERC Project “CompuLaw” (G.A.
833647).
References
[1] Trevor Bench-Capon, Michał Araszkiewicz, Kevin Ashley, Katie Atkinson, Floris Bex, Filipe Borges,
Daniele Bourcier, Paul Bourgine, Jack G. Conrad, Enrico Francesconi, Thomas F. Gordon, Guido Governatori,
Jochen L. Leidner, David D. Lewis, Ronald P. Loui, L. Thorne McCarty, Henry Prakken, Frank
Schilder, Erich Schweighofer, Paul Thompson, Alex Tyrrell, Bart Verheij, Douglas N. Walton & Adam Z.
Wyner (2012): A history of AI and Law in 50 papers: 25 years of the international conference on AI and
Law. Artificial Intelligence and Law 20(3), pp. 215–319, doi:10.1007/s10506-012-9131-x.
[2] Trevor Bench-Capon & Pepijn R. S. Visser (1997): Open Texture and Ontologies in Legal Information Sys-
tems. In Roland R. Wagner, editor: Eighth International Workshop on Database and Expert Systems Applications,
DEXA, IEEE Computer Society, pp. 192–197, doi:10.1109/DEXA.1997.617268.
[3] Hanif Bhuiyan, Guido Governatori, Andy Bond & Andry Rakotonirainy (2024): Traffic rules compliance
checking of automated vehicle maneuvers. Artif. Intell. Law 32(1), pp. 1–56, doi:10.1007/S10506-022-
09340-9.
[4] Brian H. Bix (2012): Defeasibility and Open Texture. In Jordi Ferrer Beltrán & Giovanni Battista
Ratti, editors: The Logic of Legal Requirements: Essays on Defeasibility, Oxford University Press,
doi:10.1093/acprof:oso/9780199661640.003.0011.
[5] Gert Brüggemeier (2006): Common principles of tort law: a pre-statement of law. British Institute of International
and Comparative Law.
[6] Joe Collenette, Louise A. Dennis & Michael Fisher (2022): Advising Autonomous Cars about the Rules of
the Road. In Matt Luckcuck & Marie Farrell, editors: Proceedings Fourth International Workshop on Formal
Methods for Autonomous Systems (FMAS), 371, pp. 62–76, doi:10.4204/EPTCS.371.5.
[7] Giuseppe Contissa, Francesca Lagioia & Giovanni Sartor (2018): Liability and automation: legal issues in
autonomous cars. Network Industries Quarterly 20(2), pp. 21–26.
[8] Guido Governatori, Trevor Bench-Capon, Bart Verheij, Michal Araszkiewicz, Enrico Francesconi &
Matthias Grabmair (2022): Thirty Years of Artificial Intelligence and Law: The First Decade. Artificial
Intelligence and Law 30(4), pp. 481–519, doi:10.1007/s10506-022-09329-4.
[9] Ahmad Hammoud, Azzam Mourad, Hadi Otrok & Zbigniew Dziong (2022): Data-Driven Federated Autonomous
Driving. In Irfan Awan, Muhammad Younas & Aneta Poniszewska-Marańda, editors: Mobile Web
and Intelligent Information Systems, Lecture Notes in Computer Science, Springer International Publishing,
Cham, pp. 79–90, doi:10.1007/978-3-031-14391-5.
[10] Patrick Irvine, Antonio A. Bruto Da Costa, Xizhe Zhang, Siddartha Khastgir & Paul Jennings (2023):
Structured Natural Language for Expressing Rules of the Road for Automated Driving Systems. In: 2023
IEEE Intelligent Vehicles Symposium (IV), pp. 1–8, doi:10.1109/IV55152.2023.10186664. Available at
https://ieeexplore.ieee.org/document/10186664.
[11] Suraj Kothawade, Vinaya Khandelwal, Kinjal Basu, Huaduo Wang & Gopal Gupta (2021): AUTO-DISCERN:
Autonomous Driving Using Common Sense Reasoning. In Joaquin Arias, Fabio Aurelio D’Asaro, Abeer
Dyoub, Gopal Gupta, Markus Hecher, Emily LeBlanc, Rafael Peñaloza, Elmer Salazar, Ari Saptawijaya,
Felix Weitkämper & Jessica Zangari, editors: International Conference on Logic Programming 2021 Workshops,
CEUR Workshop Proceedings 2970, CEUR. Available at https://ceur-ws.org/Vol-2970/#gdepaper7.
[12] Robert Kowalski, Jacinto Dávila & Miguel Calejo (2021): Logical English for legal applications. Conference:
XAIF, Virtual Workshop on XAI in Finance.
[13] Francesco Paolo Patti et al. (2019): Autonomous vehicles’ liability: need for change? In: Digital revolution-
new challenges for law: data protection, artificial intelligence, smart products, blockchain technology and
virtual currencies, CH Beck, pp. 190–213.
[14] Henry Prakken (2017): On the problem of making autonomous vehicles conform to traffic law. Artif. Intell.
Law 25(3), pp. 341–363, doi:10.1007/s10506-017-9210-0.
[15] Albert Rizaldi, Jonas Keinholz, Monika Huber, Jochen Feldle, Fabian Immler, Matthias Althoff, Eric Hilgendorf
& Tobias Nipkow (2017): Formalising and Monitoring Traffic Rules for Autonomous Vehicles in Isabelle/HOL.
In Nadia Polikarpova & Steve Schneider, editors: Integrated Formal Methods, Cham, pp. 50–66,
doi:10.1007/978-3-319-66845-1.
[16] SAE On-Road Automated Vehicle Standards Committee (2021): Taxonomy and definitions for terms related
to driving automation systems for on-road motor vehicles. SAE international.
[17] Hanna Schebesta (2017): Risk Regulation Through Liability Allocation: Transnational Product Liability and
the Role of Certification. Air and Space Law 42(2), pp. 107–136, doi:10.54648/AILA2017011.
[18] Jiawei Wang, Yang Zheng, Qing Xu & Keqiang Li (2022): Data-Driven Predictive Control for Connected
and Autonomous Vehicles in Mixed Traffic. In: 2022 American Control Conference (ACC), pp. 4739–4745,
doi:10.23919/ACC53348.2022.9867378.