

Technical University of Munich
Munich Center for Technology in Society
Institute for Ethics in Artificial Intelligence

Research Brief – February 2022

Towards an Accountability Framework for AI: Ethical and Legal Considerations
By Auxane Boch, Ellen Hohma, Rainer Trauth

With the growth of what were once smaller AI applications into highly
complex systems, the issue of who is responsible for the predictions or
decisions made by these systems has become pressing. Using the
example of autonomous driving, this Brief highlights major
accountability problems for AI systems from a legal and ethical
perspective. Implications for accountability, including explainability and
responsibility requirements, can already be found in current guidelines.
However, the transfer and application of these guidelines to the specific
context of AI systems, as well as their comprehensiveness, require
further efforts, particularly in terms of societal demands. Therefore,
further multidisciplinary investigations are required to strengthen the
development of holistic and applicable accountability frameworks.

Application areas of artificial intelligence (AI) have grown rapidly in recent years. Patent registrations
and AI-enabled inventions are increasing and are being adopted by industry as AI applications
promise higher performance (Zhang, 2021). Small use cases are growing into larger and more
complex systems that directly impact people's lives. Unlike previous technological advances, far-
reaching decisions can be made without direct human intervention. This brings to light a new
concern regarding how such systems can be made accountable, a major feature of which is
improving the general understandability, or explainability, of such technologies. Explainability of
these systems can help define and delineate accountability, and with it responsibility, more clearly.
Accountability is defined as “the fact of being responsible for what you do and able to give a satisfactory
reason for it” (Cambridge Dictionary, 2022). Consequently, accountability consists of two components: (1)
responsibility, defined as “something that it is your job or duty to deal with”, and (2) explanation, i.e., “the
details or reasons that someone gives to make something clear or easy to understand” (Cambridge
Dictionary, 2022). In particular for AI systems, realizing those two concepts is a difficult task. As AI systems,
such as neural networks, are often black box systems, i.e., the connection between input and output
parameters is opaque, an explanation of how the system derived a certain prediction or conclusion is not
always obvious. This problem may even intensify in the future, as explicability of algorithms decreases with
their increasing complexity. Additionally, missing explanation exacerbates the issue of responsibility, making
it hard to determine duties if the decision process or source of failure is not entirely clear.
In this Brief, we will investigate current challenges for accountability of AI systems from two perspectives: legal and ethical. The challenges for accountability from a practical perspective will be introduced using the
example of autonomous vehicles. We will further examine how and which obligations to explainability arise
from currently existing legal frameworks, how product liability standards can be translated into the specific
use case of AI systems and the challenges that may arise. As an outlook, we will discuss the need for a
broader accountability approach, the requirements for a multi-level explainability, and the bigger picture of
social responsibility.

Autonomous Driving as a Use Case

Autonomous driving is arguably one of the most elaborate existing research areas. Many scientific disciplines and technologies are needed to realize the dream of a fully self-driving car. The interaction of the vehicle with the real world is a non-deterministic process, so the vehicle's action sequences cannot be unambiguously determined. The high complexity of the driving task requires a high level of experience so that even difficult driving situations can be handled. AI and Machine Learning (ML) promise to solve these types of problems by using huge amounts of data. These large amounts of recorded data can be considered the experiences of the machine, which correspond to the past experiences of a brain. As with humans, decision-making based on historical experiences is not transparent or obvious to external individuals. In the technological context, opacity increases with system complexity. However, these challenges must be overcome for the technology to be widely adopted by customers and society (Amodei, 2016). In any case, to ensure acceptable products, each subsystem of the vehicle must be safeguarded in the process, to avoid errors that can occur during the development phase and during operation.

This comes into conflict with the finding that users want techniques that are directly interpretable, comprehensible, and trustworthy (Zhu, 2018). The demand for ethical AI is increasing and has become thematically more important in society (Goodman, 2017). Therefore, trade-offs must be assessed between the performance of a model and its transparency, or explainability, in order to meet societal, and not just technological, needs (Došilović, 2018). One way to develop transparent but powerful algorithms is to use Explainable AI (XAI) methods. "XAI will create a suite of machine learning techniques that enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners" (Gunning, 2017).

*This research is derived from the IEAI project "Towards an Accountability Framework for AI Systems: The Autonomous Vehicle Use Case", which is generously supported by Fujitsu.


Figure 1: SAE Levels of Autonomous Driving (SAE International, 2018)

The essential goals of XAI are therefore understanding and trust. There are two technical ways to achieve this: either the development of transparent models from scratch, or the post-hoc explainability of ML models (Arrieta, 2020). In transparent models, attention is paid to the requirements of the model already in the design process. Examples of techniques in this area include linear/logistic regression, decision trees or rule-based learning. Models from this domain are easy to understand, but limited in performance. Post-hoc explainability techniques, on the other hand, examine AI models where explainability was not considered during the design process. Some of these algorithms analyze the black-box environment of the ML model to obtain information about the relationship between input and output through perturbations. With the help of these methods, the transparency of decisions can also be increased in the field of autonomous driving.
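To make the perturbation idea concrete, the sketch below estimates which inputs a black-box model relies on by nudging each input slightly and measuring how much the output changes. It is a minimal illustration only: the toy "keep driving" score, the feature names and all numbers are assumptions made for this example, not part of the brief or of any specific XAI library.

```python
import numpy as np

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def perturbation_importance(predict, x, n_samples=200, noise=0.1, seed=0):
    """Estimate how sensitive a black-box prediction is to each input feature.

    Each feature is perturbed with small random noise while the others are held
    fixed, and the average absolute change in the model output is recorded.
    Features whose perturbation changes the output most are treated as most
    relevant to the decision -- the basic idea behind many post-hoc XAI methods.
    """
    rng = np.random.default_rng(seed)
    baseline = predict(x)
    importance = np.zeros(x.shape[0])
    for i in range(x.shape[0]):
        shifts = rng.normal(0.0, noise, size=n_samples)
        outputs = np.array([predict(x + s * one_hot(i, x.shape[0])) for s in shifts])
        importance[i] = np.mean(np.abs(outputs - baseline))
    return importance / importance.sum()

# Toy stand-in for an opaque driving model: a "keep driving" score computed
# from three hand-picked scene features (purely illustrative).
def keep_driving_score(features):
    weights = np.array([0.7, 0.1, 0.5])
    return float(1.0 / (1.0 + np.exp(-weights @ features)))

feature_names = ["distance_to_obstacle", "obstacle_speed", "lane_clearance"]
scene = np.array([2.0, 0.3, 1.5])
scores = perturbation_importance(keep_driving_score, scene)
for name, score in zip(feature_names, scores):
    print(f"{name}: {score:.2f}")
print("Most influential input:", feature_names[int(np.argmax(scores))])
```

Importance scores of this kind can then be visualized or verbalized for the user, in the spirit of the attention maps and linguistic explanations discussed next.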

Figure 2 shows an attention map created in the temporal context of autonomous driving using camera recognition data (Kim, 2018). Important areas of the camera image that are essential for decision-making can be visualized. To make the decision process understandable for the user, other methods can be used, such as a linguistic explanation (Arrieta, 2020). The explanation of the vehicle's behavior in Figure 2 could then be as follows: "The car heads down the street, because the street is clear" (Zablocki, 2021).

XAI can contribute to minimizing and reporting negative impacts in the field of AI from a technical point of view. XAI methods are intended to provide, in the long term, the same level of transparency as other technical systems that operate without AI. In addition to decision-making, XAI can also help detect irregularities in the data. Unfair practices, such as discrimination against ethnically marginalized groups, can be reduced. With these technical capabilities, responsible AI should thus be developed.

Nevertheless, further discussions are needed to actually achieve implementable accountability. Legal and ethical challenges, for instance, must be overcome to develop responsible AI for the future.

Figure 2: Attention Map Generation (Zablocki, 2021; Kim, 2018)

Legal Obligations to Accountability

The definition introduced above outlines that accountability consists of two components: being responsible for a decision or action, as well as giving an appropriate explanation for it. As both of these concepts have already been brought up in other, digital or non-digital, contexts, obligations for AI systems can be examined with regard to existing legal guidelines. However, there are major difficulties in translating common general approaches towards regulating explainability and responsibility to the specific context of AI, some of which we will investigate below.

Autonomous vehicle systems are an ideal example of complex applications that could revolutionize their industry in the coming decades. However, there are already examples in which a system's opacity has led to long legal disputes due to the lack of clarity in regard to liability issues (Griggs & Wakabayashi, 2018). In 2018, for example, an Uber test vehicle hit a pedestrian even though a safety driver was on board the vehicle. In the end, the test driver was held liable. However, this was not clear at the beginning.

Obligations to Explainability

Indicators for a right to explanation can be found in many legal frameworks, such as contract and tort law or consumer protection law (Sovrano et al., 2021). In particular regarding the use of personal data, the General Data Protection Regulation (Regulation 2016/679; GDPR) can be used as a first legal reference that regulates transparency obligations in the European Union (EU). Specifically, as AI systems are highly dependent on the use of data and, in some cases, the processing of personal data cannot be entirely avoided (e.g. recording high-resolution maps for autonomous driving, including images of pedestrians in the streets), the GDPR has a strong impact on data governance in AI systems.
If the GDPR is applicable, the data processor is obliged to provide the data subject with certain information on, for example, the collection and use of personal data. Aiming now at deriving a general right to explanation for AI processes, essentially two main grounds can serve as a starting point in the GDPR (Ebers, 2020; Hacker & Passoth, 2021). First, Article 22 regulates "automated individual decision-making, including profiling" and, in certain cases, obliges the data controller to "implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision" (Art. 22(3), GDPR). A second anchor can be found in Article 15 regulating the "right of access by the data subject", which grants the data subject a right to "meaningful information about the logic involved" (Art. 15(1)(h), GDPR).

However, major debates have emerged among scholars concerning whether those articles translate into a general right to explanation for AI processes (Bibal et al., 2021; Ebers, 2020; Felzmann et al., 2019). First, Article 22 applies only to decisions "based solely on automated processing" (Art. 22(1), GDPR). Many AI-enabled systems still allow for human intervention and are therefore not covered by this obligation (Ebers, 2020).

Further, it is controversial among legal scholars whether Article 22 actually grants a right to explanations of individual decisions (Bibal et al., 2021; Ebers, 2020). Recital 71, providing additional interpretive guidance on Article 22, mentions such a right. However, it has not been explicitly incorporated into the binding section of the GDPR (Ebers, 2020). Article 15 seems more promising, as it directly refers to "meaningful information". But it does not elaborate on the appropriate level of detail of the information to be provided. It is debated whether all the information needed to explain individual decisions must be disclosed, or only the overall structure and functionalities of an AI-based system (Ebers, 2020; Hacker & Passoth, 2021).

To obtain more clarity on the specific transparency requirements for AI systems, a first outlook can be found in the proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (Regulation 2021/0106), also referred to as the AI Act. The European Commission proposes to take a risk-based approach and categorize AI applications into three levels: (1) prohibited AI practices, (2) high-risk AI systems and (3) AI systems of minimal risk. While prohibited AI practices bear an unacceptable risk and will therefore be banned from "placing on the market, putting into service or use" (Art. 5(1)(a), AI Act) in the EU, certain transparency requirements are imposed on high-risk AI systems or recommended for minimal-risk AI systems. For high-risk AI systems, the AI Act in its current form mainly demands transparency on data and data governance to prevent bias (Art. 10, AI Act), technical documentation of the general algorithmic logic to demonstrate compliance with the AI Act requirements (Art. 11 & Annex VI, AI Act), record keeping to allow for monitoring and increase traceability (Art. 12, AI Act), as well as further transparency obligations to allow users to interpret and appropriately use the system's outputs (Art. 13, AI Act). While the current proposal of the AI Act is already significantly more concrete and more tightly adapted to the specific circumstances of AI systems than more generic legal frameworks, there is still some criticism about its practical applicability. For instance, although the AI Act concretely elaborates on the transparency measures to be put in place, the degree of required transparency is left vague: it refers to an appropriate level, still allowing much room for variation and interpretation.

We see that legislation recognizes the issue of transparency and its intensified necessity in the context of AI systems. Current legislation can be adapted to the use of AI, and new directives have been put in place. However, the concept of explainability is targeted through transparency, which refers to transparent algorithmic processes rather than clearly traceable decision-making. Explainability issues have not yet been clarified in their entirety, as there are still unresolved questions, in particular on the level of interpretability required in concrete application contexts.

Furthermore, a prevailing problem is the question of which rights are granted to whom and against whom, i.e., who can demand which explanation from whom. An example of this unresolved tension is the GDPR granting explanation rights to the data subject, who is, however, not necessarily the system's user. The AI Act is a similar example, as it does not grant the user any claim for reparation and only penalizes noncompliant behavior. Therefore, although more concrete guidelines regulating the pressing matter of explanation and transparency are already in place or initiated, the issue has not yet been fully resolved, and the main task now is to reconcile the theoretical conceptions with practice.

Implications of Liability

The second component of accountability from a legal perspective is understood as liability, i.e., "the state of being legally responsible for something" (Cambridge Dictionary, 2022). Liabilities for businesses are, of course, manifold and can result from many different obligations and regulations, such as the data protection duties under the GDPR described above, or the transparency duties arising from the newly proposed AI Act. An interesting research field for AI is liability for system failure according to obligations derived from product liability directives.


On the one hand, this is the major legal starting point when studying responsibility distribution for a system error. On the other hand, it still needs to be investigated how AI systems fit into the directive's prerequisites. Liability here refers to the "non-contractual, strict liability of the producer for damage caused by a defective product" (Borges, 2021, p. 33). It is therefore not a matter of fault; instead, it depends on certain preconditions, such as which interests are affected or how the damage is caused. The approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (Council Directive 85/374/EEC, the Product Liability Directive, PLD), which came into force in the EU in 1985, gives guidance on these prerequisites and sets the standards for strict liability for a product's defects. Essentially, the producer of a product and, respectively, the manufacturer of the defective component are liable for damage to one of the rights protected under the directive. A complete and exhaustive list of protected rights is provided, mainly including death, personal injury and damage to or destruction of property. Further, the damage must be caused by the product, in particular by the defect of the product.

While this derivation sounds reasonable for most tangible products, questions arise if the directive is to be applied to AI systems. A first major precondition is that the damage is caused by a product, defined as "all movables […], even though incorporated into another movable or into an immovable" (Art. 1, PLD). In theory, it is therefore arguable whether AI falls under the PLD, as it is not in line with this definition of movable objects and is usually provided as a service. In practice, however, software and, hence, AI created by software shall be treated like a product and be subject to the same liability standards (Cabral, 2020). Worth mentioning is also the scope that the PLD sets, limiting liabilities to damage to the health or property of private users. It excludes claims of commercial users, as well as mere financial loss due to the product failure (Borges, 2021). Furthermore, other rights, such as personality rights, are currently not covered by product liability (Borges, 2021). This is particularly relevant for AI systems, as damage caused by an incorrect assessment due to, for example, bias is not covered by product liability.

A second source of difficulties in applying the PLD to AI systems is the question of who accounts for which component. This is less obvious for AI than for physical products. According to the directive, the system manufacturer is responsible for the entire product and is jointly liable with the supplier of the defective component. In the case of software in general and AI in particular, however, it is more difficult to distinguish the individual components and therefore to derive concrete liabilities. An example is the distinction between the algorithmic conception of an AI and its trained implementation. Proposed interpretations suggest that an untrained AI model is seen as basic material, while the network fed with data is considered the equivalent of a component (Borges, 2021). This highlights that a translation of current product liability standards to AI technologies is possible; however, more clarity on the concrete adaptation in this context is needed.

What this overview shows is that there already exists a backbone for both the responsibility and the explainability obligations of accountability in current legal frameworks. However, the transfer to the special use case of AI, for instance by drawing connections between standard products and AI systems, is still ongoing. The explainability component of accountability is particularly relevant, as transparency (a necessary component of explainability) and responsibility are highly intertwined. The explanation of a decision can show reasoning and, hence, shed light on the accountability of the engaged actors. Therefore, explainability becomes even more pertinent in legal considerations, as it can prove or disprove the liability of the parties involved.

Ethical Challenges for Accountability


While the law may lay down good first steps for an accountability framework, an ethical framework goes beyond this and calls for more. Indeed, accountability, when considered the sum of responsibility and explainability, does not concern only the technical aspects of a tool, but also its global impact on society.

Accountability, on a higher level, can be defined as the relationship between an actor and the group (e.g. society) to which the actor holds an obligation to justify their conduct (Bovens, 2007). It is what allows critique and praise regarding the performance of a stakeholder, and it relates to their active choice to give information regarding their behavior (Bovens et al., 2014). Using this approach, the need for explainability of the AI-powered tool implemented, and for discussion relating to its use and impact, is quite clear. Additionally, a judgment entailing formal or informal positive or negative consequences can be passed on the actor's choices, and thus on the product proposed by said actor (Ackerman, 2005; Bovens, 2007; Olson, 2018).

For the autonomous driving example, explainability is an inevitable first step towards a holistic accountability framework. Installing a data recorder or 'black box' similar to the ones used in planes is often mentioned as a means to document the inherent processes that led to a system failure. However, to record parameters more complex than a car's mere speed or geographic location, ones that can give further direction on responsibilities, knowledge about the currently hidden details of opaque techniques is required. This opaqueness, and the lack of strong experience in unifying such techniques with recent legal regulations, highlights the need for further investigations in this research field.
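Purely as an illustration of what such a richer data recorder could capture beyond speed and position, the hypothetical sketch below logs one decision record per system action: the model version, a digest of the sensor input, the selected action, the model's confidence and a summary of feature attributions. All field names and values are assumptions made for this example, not a proposed standard.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One entry of a hypothetical AI 'flight recorder' for an autonomous vehicle."""
    timestamp: float     # when the decision was taken
    model_version: str   # which trained model produced it
    input_digest: str    # hash of the raw sensor frame, to keep the log small
    decision: str        # the action the system selected
    confidence: float    # the model's own confidence in that action
    attributions: dict   # e.g. per-feature importance from an XAI method

def log_decision(log_file, model_version, sensor_frame: bytes, decision, confidence, attributions):
    record = DecisionRecord(
        timestamp=time.time(),
        model_version=model_version,
        input_digest=hashlib.sha256(sensor_frame).hexdigest(),
        decision=decision,
        confidence=confidence,
        attributions=attributions,
    )
    # Append as one JSON line so the log can be replayed after an incident.
    log_file.write(json.dumps(asdict(record)) + "\n")

with open("decision_log.jsonl", "a") as log:
    log_decision(
        log,
        model_version="planner-v1.3",
        sensor_frame=b"<raw camera frame bytes>",
        decision="continue_straight",
        confidence=0.94,
        attributions={"lane_clearance": 0.6, "distance_to_obstacle": 0.3, "obstacle_speed": 0.1},
    )
```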
However, some main questions remain for a proper application of the accountability concept in the context of AI – the identification of its different stakeholders and the way responsibility needs to be shared between them (Gevaert et al., 2021).

The Need for Explainability

As stated in the previous definitions, explainability in regard to a tool's quality and use is needed to build a good accountability evaluation. Even if a comprehensive accountability evaluation is broadly researched on the technical side of an AI application, to shed light on the inner workings of different types of algorithms and the data used to train them, more considerations need to be made to reach an acceptable global level of explainability (Gevaert et al., 2021).
The opacity of AI systems creates an imbalance in society for the most vulnerable groups, in particular due to the presence of bias towards said communities in the datasets used to train the tools. The lack of transparency surrounding the dataset, and its possible bias, and the inner workings of the decision-making process of the systems building off of those data can result in the exclusion of certain populations from the AI decision-making process (Barocas & Selbst, 2016). The identification of such bias prior to implementation, and the resolution of the issues through a methodical approach, would reduce this type of inequality. Thus, the use and evaluation of data, and the correction of possible bias by AI system producers, need to be able to be evaluated, and punished or resolved in the court of public opinion.
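One hedged example of what a methodical, pre-deployment bias check could look like is sketched below: it compares the rate of favourable decisions across groups in a small decision log and flags large gaps. The group labels, the toy data and the 0.8 screening threshold (a common "four-fifths"-style heuristic) are assumptions for illustration only.

```python
from collections import defaultdict

def selection_rates(records, group_key="group", outcome_key="favourable"):
    """Compute the share of favourable decisions per group in a decision log."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        favourable[record[group_key]] += int(record[outcome_key])
    return {group: favourable[group] / totals[group] for group in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the best group's rate."""
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

decisions = [
    {"group": "A", "favourable": True}, {"group": "A", "favourable": True},
    {"group": "A", "favourable": False}, {"group": "B", "favourable": True},
    {"group": "B", "favourable": False}, {"group": "B", "favourable": False},
]
rates = selection_rates(decisions)
print(rates)                          # {'A': 0.67, 'B': 0.33} (rounded)
print(disparate_impact_flags(rates))  # {'A': False, 'B': True} -> group B is flagged
```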
A major issue faced in reaching a proper acknowledgment and evaluation of such situations is that the explanations given by machines are dissonant with the way humans typically construct explanations, which impedes a good understanding of the problem (Miller, 2019).


Indeed, understandability differs from one individual to another depending on their personal context and the aim of the explanation. This influences how appropriate and useful a given "why" or "how" explanation is (Ribera & Lapedriza, 2019; Miller, 2019; Hoffman et al., 2018; The Alan Turing Institute, 2019).

In the case of autonomous driving, the recognition of traffic signs can be taken as an example. The passenger must be given the opportunity to react to uncertainty if it occurs. If a traffic sign is intentionally manipulated, explainability from the accountability perspective, the developer perspective, and the user perspective helps to resolve misunderstandings (Eykholt et al., 2018).
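A minimal sketch of how such uncertainty might be surfaced to the passenger is given below, assuming a sign classifier that returns raw per-class scores; the class names, scores and the entropy threshold are illustrative assumptions rather than values from any real system.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def needs_human_attention(class_scores, entropy_threshold=0.8):
    """Return True when the sign classifier is too uncertain to act on silently.

    High entropy over the class probabilities (e.g. after a manipulated or
    occluded sign) is treated as a signal to alert the passenger instead of
    acting autonomously.
    """
    probs = softmax(class_scores)
    entropy = -sum(p * math.log(p + 1e-12) for p in probs)
    return entropy > entropy_threshold

# Hypothetical raw scores for the classes ["stop", "speed_limit_50", "yield"]:
clear_sign = [9.0, 1.2, 0.4]      # confident prediction -> no alert needed
tampered_sign = [2.1, 1.9, 2.0]   # ambiguous prediction -> alert the passenger
print(needs_human_attention(clear_sign))     # False
print(needs_human_attention(tampered_sign))  # True
```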
Gunning and Aha (2019) sum up three major needs for good explainable AI: (1) to produce more explainable models, which would be necessary for a technical and social assessment of an AI product; (2) to design better explanation interfaces to facilitate interaction with the required knowledge; and (3) to understand the psychological requirements for delivering effective explanations to humans, which will affect the opportunity for individuals of a given society, technically literate or not, to participate in the evaluation of a tool. This last point is of main interest with regard to our accountability approach, namely the importance of involving all actors of society in the discussion and definition of responsibility and consequences. In other words, this final point is of paramount importance for including the communities served by the AI-producing stakeholders in the accountability distribution (van den Homberg et al., 2020).

More than Accountability, a Social Responsibility

More than its explainability requirement, accountability calls for an "ethical sense of who is responsible for the way AI works" (Floridi & Cowls, 2019, p. 8). AI is not only a technical problem, but a social one, due to its possible impact on society. Thus, the identification of responsible and accountable actors needs to be thoroughly approached to consider the global frame of social responsibility towards the groups impacted by the AI tool. Indeed, such technologies can impact communities' life, safety and well-being (Saveliev & Zhurenkov, 2020). Social responsibility can be understood as the consequential need for a stakeholder's actions to benefit society: a balance must be struck between economic and technological growth and the well-being of the group. At this point, what is meant by "well-being of the group" differs by culture and subculture.

In the case of autonomous vehicles, if they became the primary means of transport, social responsibility would translate into the reduction of 90% of all vehicle crashes, saving lives and billions of dollars for societies (Bertoncello & Wee, 2015).

Building off 47 ethical principles for socially beneficial AI, Floridi and Cowls (2019) proposed a five-principle approach to AI ethical evaluation. Starting from four core principles borrowed from bioethics – beneficence, non-maleficence, autonomy, and justice – the authors argued for the need for an additional one to support understandability and accountability: the principle of explainability (Saveliev & Zhurenkov, 2020). This last pillar enables the other principles to be evaluated, allowing for a social impact, and thus a social responsibility consideration, for each AI tool proposed.

Final Thoughts

In this research brief, we highlighted the most pressing questions regarding the accountability of AI products. Today's frameworks and regulations do not provide clear answers to the question of how we should deal with accountability problems. Even if new legislation, such as the AI Act, is introduced, the specific application to business processes and technologies is not yet clear.

The increasing use of artificial intelligence in products also creates further accountability dependencies. The more complex the systems become, the greater the impact on people and society.

Moreover, providing a clearer path to understanding AI systems and the stakeholders involved in their creation, implementation, and use is of paramount importance for the consideration of social responsibility and accountability. A lot more needs to be done with regard to the definition of accountability, whether technically or socially.

The introduction of autonomous driving level 3 (SAE International, 2018) in Germany can be cited as an example of such increased dependencies. In this case, the driving task is completely left to the system for the first time. A transition period is defined, which must be granted to the driver to take back control. Handing power over to the machine and limiting human oversight and control creates new questions of how to deal with malfunctions and system errors. Therefore, a clear definition and distribution of responsibilities through frameworks and regulation to mitigate potential risks is indispensable.


References

Ackerman, J. M. (2005). Human rights and social accountability. Participation and Civic Engagement, Social Development Department, Environmentally and Socially Sustainable Development Network, World Bank.

Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.

Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.

Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104, 671.

Bertoncello, M., & Wee, D. (2015, June). Ten ways autonomous driving could redefine the automotive world. McKinsey & Company.

Bibal, A., Lognoul, M., De Streel, A., & Frénay, B. (2021). Legal requirements on explainability in machine learning. Artificial Intelligence and Law, 29(2), 149-169.

Borges, G. (2021, June). AI systems and product liability. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law (pp. 32-39).

Bovens, M. (2007). Analysing and assessing accountability: A conceptual framework. European Law Journal, 13(4), 447-468.

Bovens, M., Goodin, R. E., & Schillemans, T. (Eds.). (2014). The Oxford handbook of public accountability. Oxford University Press.

Cabral, T. S. (2020). Liability and artificial intelligence in the EU: Assessing the adequacy of the current Product Liability Directive. Maastricht Journal of European and Comparative Law, 27(5), 615-635.

Cambridge Dictionary. (2022, February 23). Accountability. https://dictionary.cambridge.org/dictionary/english/accountability

Cambridge Dictionary. (2022, February 23). Explanation. https://dictionary.cambridge.org/dictionary/english/explanation

Cambridge Dictionary. (2022, February 23). Liability. https://dictionary.cambridge.org/dictionary/english/liability

Cambridge Dictionary. (2022, February 23). Responsibility. https://dictionary.cambridge.org/dictionary/english/responsibility

Cognizant, & Piroumian, V. (2014, April). Why Risk Matters: Deriving Profit by Knowing the Unknown.

Council Directive 85/374/EEC. The approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products. European Parliament, Council of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:31985L0374&from=EN

Došilović, F. K., Brčić, M., & Hlupić, N. (2018, May). Explainable artificial intelligence: A survey. In 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO) (pp. 0210-0215). IEEE.

Ebers, M. (2020). Regulating explainable AI in the European Union: An overview of the current legal framework(s). In L. Colonna & S. Greenstein (Eds.), Nordic Yearbook of Law and Informatics.

Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., ... & Song, D. (2018). Robust physical-world attacks on deep learning visual classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1625-1634).


Felzmann, H., Villaronga, E. F., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), 2053951719860542.

Floridi, L., & Cowls, J. (2021). A unified framework of five principles for AI in society. In Ethics, Governance, and Policies in Artificial Intelligence (pp. 5-17). Springer, Cham.

Gevaert, C. M., Carman, M., Rosman, B., Georgiadou, Y., & Soden, R. (2021). Fairness and accountability of AI in disaster risk management: Opportunities and challenges. Patterns, 2(11), 100363.

Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine, 38(3), 50-57.

Griggs, T., & Wakabayashi, D. (2018, March 20). How a Self-Driving Uber Killed a Pedestrian in Arizona. The New York Times. https://www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-pedestrian-killed.html

Gunning, D. (2017). Explainable artificial intelligence (XAI). Technical report, Defense Advanced Research Projects Agency (DARPA).

Gunning, D., & Aha, D. (2019). DARPA's explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44-58.

Hacker, P., & Passoth, J. H. (2021). Varieties of AI explanations under the law: From the GDPR to the AIA, and beyond.

Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608.

Kim, J., Rohrbach, A., Darrell, T., Canny, J. F., & Akata, Z. (2018). Textual explanations for self-driving vehicles. In Proceedings of the European Conference on Computer Vision (ECCV).

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.

Olson, R. S. (2018). Establishing public accountability, speaking truth to power and inducing political will for disaster risk reduction: 'Ocho Rios + 25'. In Environmental Hazards (pp. 59-68). Routledge.

Regulation 2016/679. General Data Protection Regulation. European Parliament, Council of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679

Regulation 2021/0106. Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence. European Parliament, Council of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144).

Ribera, M., & Lapedriza, A. (2019, March). Can we do better explanations? A proposal of user-centered explainable AI. In IUI Workshops (Vol. 2327, p. 38).

Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach. Pearson Education Limited.

SAE International (2018). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles (J3016). https://saemobilus.sae.org/content/J3016_201806


Saveliev, A., & Zhurenkov, D. (2020). Artificial intelligence and social responsibility: The case of the artificial intelligence strategies in the United States, Russia, and China. Kybernetes.

Sovrano, F., Sapienza, S., Palmirani, M., & Vitali, F. (2021). A survey on methods and metrics for the assessment of explainability under the proposed AI Act. arXiv preprint arXiv:2110.11168.

The Alan Turing Institute. (2019). Explaining decisions made with AI: Part 1: The basics of explaining AI. Information Commissioner's Office (ICO). https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/part-1-the-basics-of-explaining-ai/

van den Homberg, M. J., Gevaert, C. M., & Georgiadou, Y. (2020). The changing face of accountability in humanitarianism: Using artificial intelligence for anticipatory action. Politics and Governance, 8(4), 456-467.

West, D. M. (2018). The future of work: Robots, AI, and automation. Brookings Institution Press.

Zablocki, É., Ben-Younes, H., Pérez, P., & Cord, M. (2021). Explainability of vision-based autonomous driving systems: Review and challenges. arXiv preprint arXiv:2101.05307.

Zhang, D., Mishra, S., Brynjolfsson, E., Etchemendy, J., Ganguli, D., Grosz, B., ... & Perrault, R. (2021). The AI Index 2021 annual report. arXiv preprint arXiv:2103.06312.

Zhu, J., Liapis, A., Risi, S., Bidarra, R., & Youngblood, G. M. (2018, August). Explainable AI for designers: A human-centered perspective on mixed-initiative co-creation. In 2018 IEEE Conference on Computational Intelligence and Games (CIG) (pp. 1-8). IEEE.
