The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration
To cite this article: Bernd W. Wirtz, Jan C. Weyerer & Benjamin J. Sturm (2020) The
Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public
Administration, International Journal of Public Administration, 43:9, 818-829, DOI:
10.1080/01900692.2020.1749851
ABSTRACT

As government and public administration lag behind the rapid development of AI in their efforts to provide adequate governance, they need respective concepts to keep pace with this dynamic progress. The literature provides few answers to the question of how government and public administration should respond to the great challenges associated with AI and use regulation to prevent harm. This study analyzes AI challenges and former AI regulation approaches. Based on this analysis and regulation theory, an integrated AI governance framework is developed that compiles key aspects of AI governance and provides a guide for the regulatory process of AI and its application. The article concludes with theoretical implications and recommendations for public officers.

KEYWORDS
Artificial intelligence; governance; regulation; framework; public administration; regulation theory; AI challenges
Introduction

"A robot may not injure a human being or, through inaction, allow a human being to come to harm." (Asimov, 1950, p. 26). With this famous quote, Isaac Asimov was among the first to present a set of rules for living in a society with intelligent machines. Today, this fiction from the past becomes more and more reality, as scientists and engineers are working hard to create artificial intelligence (AI), which is "the capability of a computer system to show human-like intelligent behavior characterized by certain core competencies, including perception, understanding, action, and learning" (Wirtz et al., 2019, p. 599).

AI provides great opportunities for public administration, including the automation of workflow processes, faster information processing, improved service quality, and increased working efficiency (Thierer et al., 2017; Zheng et al., 2018). Due to these potential benefits, government and public administration increasingly acknowledge the significance of AI for economic and social advancement by applying AI to their administration and public infrastructures, as well as by supporting AI research.

Despite these great opportunities, many challenges and risks are still associated with implementing AI in public administration, constituting a darker side of AI. These challenges range from issues of data privacy and security to workforce replacement and ethical problems such as the agency and fairness of AI (Boyd & Wilson, 2017; Wang & Siau, 2018). Prominent opinion leaders have recently intensified their efforts to raise awareness of these challenges and threats. Experts like Elon Musk and Stephen Hawking have raised the question of whether AI will always be beneficial or rather become a threat to humanity (Cuthbertson, 2018). While these questions have also recently become subject to AI research, only a few studies have elaborated on them. Accordingly, there is little knowledge about the challenges of AI associated with the public sector and no consensus about how to deal with them in the future (Veale et al., 2018; Wang & Siau, 2018). However, Scherer (2016) highlights the need for a legal system assessing benefits and risks to find a way to regulate AI and respective research without interfering with its advancement. Boyd and Wilson (2017) emphasize the need for local and international policies to reduce social and personal risks caused by AI. Thus, there are efforts to find global solutions to these challenges, but many governments and researchers struggle to formulate a long-term perspective on how to regulate and interact with the AI market in both the private and the public sector (Cath et al., 2017; Scherer, 2016).

Against this background, this article first outlines the current state of AI governance before giving an overview of AI challenges and risks for public administration as well as previous AI governance or regulation frameworks. Based on this analysis and regulation theory, an integrated AI governance framework is developed that organizes the key aspects of AI governance and regulation, showing the complex interactions of AI challenges and their regulation in the context of public administration. The last section discusses theoretical and practical implications of the findings, revealing opportunities for future research and providing recommendations for public officers.
Current state of AI governance

As the number of AI applications is growing and the technology increasingly permeates everyday life, the question arises of how government and public administration should deal with the potential risks and challenges involved, which is currently heavily discussed in the media and the scientific literature (Boyd & Wilson, 2017; Smith, 2018). As governance greatly affects the AI industry, the development of AI and its impact on society, Thierer et al. (2017) propose two different approaches for governing AI. The first method uses restrictions, bans and prohibitions to limit research on AI as well as its application in any public or private environment to prevent potential harm from autonomous machines or AI algorithms. While this precautionary principle could severely impede technological progress and deprive society of possible benefits, governance could, on the other side, take a reactive approach of unrestricted innovation, preventing and regulating risks only when they occur in reality.

Some governments are already taking action, providing financial support to fund new innovation and technology and planning strategies on how to interact with, regulate and govern AI in the future (Ansip, 2017). Likewise, private organizations such as the Institute of Electrical and Electronics Engineers (IEEE, 2017), the Allianz Group (AGCS, 2018) or Microsoft (Smith, 2018) discuss the impact of AI on society and possible ways of governance.

In the literature, the topic of AI governance and regulation is largely unexplored. Scherer (2016) gives a detailed analysis of AI challenges and the theoretical role of the government in legal questions, proposing a legal system that involves the invention of an AI development act consisting of a predefined legal rule system and a newly created agency to control other organizations and enforce these rules. While there are many frameworks in the literature about the governance of IT or organizations in general, only few focus on AI. Most of these models concern either technical or structural aspects of AI (Sirosh, 2017), focus on organizational implementation (Bataller & Harris, 2016) or address the process of implementing AI into a public organization (Zheng et al., 2018). Only two frameworks address the governance or regulation of AI risks and challenges (Gasser & Almeida, 2017; Rahwan, 2018).

The model proposed by Gasser and Almeida (2017) consists of three hierarchical layers. The first layer covers technical aspects of AI technology, algorithms and data structures. This is the core of their model, as AI systems are based on algorithms and process data to make decisions or take actions. The authors further propose principles of responsibility and explainability to secure fairness and non-discriminatory actions at this early stage of information processing. The second layer addresses ethical criteria and principles to be considered and used to design specific ethics for the use of AI. The third layer covers social and legal issues and demands a regulatory framework, defining challenges and responsibilities for AI to generate norms and appropriate regulation and legislation in the long term. Although the model gives a decent overview of the different domains that are in need of regulation, it fails to provide details on how the different layers interact, how to put the regulatory process into practice and who should be responsible for the proposed regulation or governance. The model also relies solely on the challenging aspects of AI without a concrete theoretical foundation.

Another approach by Rahwan (2018) proposes a societal contract to evaluate the behavior of AI technology via a 'society-in-the-loop', an extension of the human-in-the-loop approach. Human-in-the-loop describes an approach to AI technology where a human operator is always supervising and managing the outputs and actions of the AI system. The purpose of the AI system is to make recommendations, but the final decision on how to act is always based on information that comes from the human operator, who thus controls the actions of AI to achieve the common goals of the stakeholders involved.
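To make the distinction concrete, the following minimal Python sketch illustrates the human-in-the-loop pattern described above: the AI component only recommends, and a human operator makes the final, binding decision. The function names, the risk score and the console-based approval step are illustrative assumptions, not part of Rahwan's (2018) model.

```python
# Minimal, illustrative human-in-the-loop sketch (assumed names, hypothetical data):
# the AI system recommends, a human operator decides.

def ai_recommend(case: dict) -> str:
    """Hypothetical AI component: returns a recommendation, never a final decision."""
    return "approve" if case.get("risk_score", 1.0) < 0.3 else "review"

def human_decide(case: dict, recommendation: str) -> str:
    """Human operator reviews the recommendation and makes the binding decision."""
    print(f"Case {case['id']}: AI recommends '{recommendation}'")
    decision = input("Operator decision (approve/reject): ").strip().lower()
    return decision or "reject"  # default to the cautious option

if __name__ == "__main__":
    case = {"id": 42, "risk_score": 0.2}
    final = human_decide(case, ai_recommend(case))
    print(f"Final decision recorded by human operator: {final}")
```

In a society-in-the-loop setting, as discussed next, the single operator's approval step would in effect be replaced by societally negotiated rules and tradeoffs rather than one individual's judgment.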
According to Rahwan (2018), the human-in-the-loop approach is not sufficient for regulating the future use of AI, as AI will be applied to broader areas with implications for the whole of society and requires a regulation approach that represents all stakeholders within society, even though their interests might interfere with each other. In his society-in-the-loop model, society as a whole first has to resolve certain tradeoffs between different human values like privacy and safety and has to ensure that the benefits and costs of AI technology are reasonably distributed among the different stakeholders. Government and industry are meant to work together to provide regulations and standards and represent the goals and expectations of society based on human values, ethics and social norms. The results are then implemented into an AI algorithm and evaluated against the defined goals. In this case, all parts of society collaborate to govern or regulate the goals and behaviors of AI technology. As can be seen, the model of Rahwan (2018) takes a broader view of AI in society. It accounts for different stakeholders and conflicting interests that are to be resolved within a collaborative effort to increase benefits to society. Although some limitations, like the ability to quantify social values, are discussed, it lacks details on how to design and implement AI regulation. While the model also provides some information about the actors in the societal loop, it neglects the responsibilities of government.
Both models discuss the reasons for regulation as well as the challenges and risks of AI only superficially and provide little information and theoretical reasoning for the actual necessity of regulation. However, developing a thorough, integrated AI governance framework requires an understanding and examination of these risks and challenges on a deeper level to be able to target specific aspects without interfering with or slowing down technical innovation and progress through incomplete or overly general policies (Thierer et al., 2017).

Overview of AI challenges in the literature

To present a systematic overview of the current state of the literature on AI challenges and risks as well as to define areas in need of regulation, three main areas of public AI challenges are emphasized based on the recent AI challenges approach of Wirtz et al. (2019) (see Figure 1).

AI law and regulation

This area strongly focuses on the control of AI by means of mechanisms like laws, standards or norms that are already established for different technological applications. Here, there are some challenges specific to AI that need to be addressed in the near future, including the governance of autonomous intelligence systems, responsibility and accountability for algorithms, as well as privacy and data security.

Governance of autonomous intelligence systems addresses the question of how to control autonomous systems in general. Since it is nowadays very difficult to comprehend automated decisions based on AI, the latter is often referred to as a 'black box' (Bleicher, 2017). This black box may take unforeseeable actions and cause harm to humanity. For instance, if an autonomous AI weapon system learned that it is necessary to prevent all threats to obtain security, it might also attack civilians or even children classified as armed by the opaque algorithm (Heyns, 2014). Situations can get even worse when the AI becomes autonomous enough to pursue its own goals, even if this means harm to individuals or humanity (Lin et al., 2008). Examples like this give rise to the questions of transparency and accountability for AI systems.
The challenge of responsibility and accountability is an important concept for the process of governance and regulation. It addresses the question of who is to be held legally responsible for the actions and decisions of AI algorithms. Although humans operate AI systems, questions of legal responsibility and liability arise. Due to the self-learning ability of AI algorithms, the operators or developers cannot predict all actions and results. Therefore, a careful assessment of the actors and a regulation for transparent and explainable AI systems is necessary (Helbing et al., 2017; Wachter et al., 2017).

Privacy and safety deals with the challenge of protecting the human right to privacy and the necessary steps to secure individual data from unauthorized external access. Many organizations employ AI technology to gather data without any notice or consent from affected citizens (Coles, 2018). For instance, when searching for a fast way to get home from work, a navigation system has to access the current location of the user, or the government uses AI services to monitor public spaces to prevent criminal activities (Power, 2016). Without informed consent from the affected individuals, these AI applications and services endanger their privacy.

AI society

AI already shapes many areas of daily life and thus has a strong impact on society and everyday social life. For instance, transportation, education, public safety and surveillance are areas where citizens encounter AI technology (Stone et al., 2016; Thierer et al., 2017). Many are concerned with the subliminal automation of more and more jobs, and some people even fear complete dependence on AI or perceive it as an existential threat to humanity (McGinnis, 2010; Scherer, 2016).

Workforce transformation and substitution is an important topic for government, industry and society as a whole (Wirtz et al., 2019). AI can reduce tedious and repetitive work and is able to save time for the user for more creative or difficult-to-automate tasks. Furthermore, the accuracy of data analysis has improved to a point where it exceeds human ability (Esteva et al., 2017). Frey and Osborne (2017) analyzed over 700 different jobs regarding their potential for replacement and automation, finding that 47 percent of the analyzed jobs are at risk of being completely substituted by robots or algorithms. This substitution of the workforce can have grave impacts on unemployment and the social status of members of society (Stone et al., 2016).

Social acceptance and trust in AI is highly interconnected with the other challenges mentioned. Acceptance and trust result from the extent to which an individual's subjective expectation corresponds to the real effect of AI on the individual's life. In the case of transparent and explainable AI, acceptance may be high, but if an individual encounters harmful AI behavior like discrimination, acceptance of AI will eventually decline (COMEST, 2017). To reduce negative attitudes towards AI, government and industry can influence social acceptance with good governance and standards to enforce its beneficial use (Scherer, 2016).

Human interaction with machines is a big challenge to society because it is already changing human behavior. Meanwhile, it has become normal to use AI on an everyday basis, for example, googling for information, using navigation systems and buying goods by speaking to an AI assistant like Alexa or Siri (Mills, 2018; Thierer et al., 2017). While these changes greatly contribute to the acceptance of AI systems, this development leads to a problem of blurred borders between humans and machines, where it may become impossible to distinguish between them. Advances like Google Duplex were highly criticized for being too realistic and human-like without disclosing their identity as AI systems (Bergen, 2018).

AI ethics

Ethical challenges are widely discussed in the literature and are at the heart of the debate on how to govern and regulate AI technology in the future (Bostrom & Yudkowsky, 2014; IEEE, 2017; Wirtz et al., 2019). Lin et al. (2008, p. 25) formulate the problem as follows: "there is no clear task specification for general moral behavior, nor is there a single answer to the question of whose morality or what morality should be implemented in AI". Ethical behavior mostly depends on an underlying value system. When AI systems interact in a public environment and influence citizens, they are expected to respect ethical and social norms and to take responsibility for their actions (IEEE, 2017; Lin et al., 2008).

AI rulemaking for humans can be the result of the decision process of an AI system when the information computed is used to restrict or direct human behavior. The decision process of AI is rational and depends on the baseline programming. Without access to emotions or a consciousness, decisions of an AI algorithm might be good for reaching a certain specified goal, but might have unintended consequences for the humans involved (Banerjee et al., 2017).

AI discrimination is a challenge raised by many researchers and governments and refers to the prevention of bias and injustice caused by the actions of AI systems (Bostrom & Yudkowsky, 2014; Weyerer & Langer, 2019). If the dataset used to train an algorithm does not reflect the real world accurately, the AI could learn false associations or prejudices and will carry those into its future data processing. If an AI algorithm is used to compute information relevant to human decisions, such as hiring or applying for a loan or mortgage, biased data can lead to discrimination against parts of society (Weyerer & Langer, 2019).
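As a purely illustrative check for such bias, the following Python sketch computes group-specific selection rates and their ratio, a simple disparate-impact style measure, over hypothetical hiring decisions produced by an algorithm. The data, the 0.8 rule-of-thumb threshold and the variable names are assumptions for illustration, not taken from the article or from Weyerer and Langer (2019).

```python
# Illustrative sketch: measuring unequal selection rates in algorithmic decisions.
# All data and the 0.8 threshold are hypothetical.
from collections import defaultdict

decisions = [  # (group, selected_by_algorithm)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for group, selected in decisions:
    counts[group]["total"] += 1
    counts[group]["selected"] += int(selected)

rates = {g: c["selected"] / c["total"] for g, c in counts.items()}
ratio = min(rates.values()) / max(rates.values())

print("selection rates:", rates)            # e.g., {'group_a': 0.75, 'group_b': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold, used here only as an example
    print("warning: decisions may disadvantage one group; audit the training data")
```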
Moral dilemmas can occur in situations where an AI system has to choose between two possible actions that both conflict with moral or ethical values. Rule systems can be implemented into the AI program, but it cannot be ensured that these rules are not altered by the learning processes, unless AI systems are programmed with a "slave morality" (Lin et al., 2008, p. 32), obeying rules at all cost, which in turn may also have negative effects and hinder the autonomy of the AI system.

Compatibility of machine and human value judgment refers to the challenge of whether human values can be globally implemented into learning AI systems without the risk of their developing an own or even divergent value system to govern their behavior and possibly becoming harmful to humans. To prevent this potential threat, unchangeable rules would have to be implemented, leading to less autonomy of the AI and the above-mentioned problem of slave morality (Lin et al., 2008). The deliberations and findings from previous literature show that implementing and applying AI involves great challenges and risks for the public, calling for reasonable and beneficial governance or regulation concepts.

Regulation theory as a basis for an AI governance framework

The main purpose of governance is to enable institutions, society and other stakeholders to work together and fulfill policy goals in a dynamic and changing environment without grave interruptions or damage to society (Asaduzzaman & Virtanen, 2016). From a political-economic point of view, the described negative interruptions and adverse effects of AI may be viewed as a market failure (Rahwan, 2018). Market failure is a core concept of regulation theory and refers to a state in a free market where resources are not efficiently allocated and not all costs and opportunities are considered for all stakeholders in the market (De Geest, 2017). Market failure often results in harmful situations and instability for society, leading civil stakeholders to ask for regulation (Stemler, 2016). One important factor contributing to market failures is externalities, or external effects. Externalities are benefits or costs that occur outside of the initial market without any form of compensation or recognition (Delucci, 2000). AI applications, technologies and services can cause such negative external effects and thus contribute to market failures (Rahwan, 2018). A common example in the AI context refers to technological unemployment through AI-based industry robots. Therefore, the AI and its corresponding applications and services can be regarded as the object of regulation.
According to traditional regulation theory, the government is responsible for regulation to avoid market failures and prevent harm to members of society. Since the infancy of public AI makes it nearly impossible to explain and enact regulation based on past occurrences or historical judgments, many governments seek to base regulation on inherently normative elements such as human values and ethics to guide regulation attempts (Cath et al., 2017). Normative regulation theory, which focuses on an ideal form of regulation and how to influence the market to become most efficient (Den Hertog, 2012), therefore appears to be particularly suited to approach AI regulation and explain its basic modus operandi.

From an interest-based regulation-theoretical perspective, governmental regulation is justified by the protection of the public interest against other forms of interest. Regulation is used to enable optimal allocation of resources and a stable market for all participants or stakeholders involved (Baldwin et al., 2012), and thus to prevent the occurrence of market failures. The deciding regulator has to manage the different stakeholder expectations to maximize the overall benefit. In other words, "regulators attempt to strike a balance that society is comfortable with through a constant learning process" (Rahwan, 2018, p. 10). In this way, an equilibrium between public and private interest is realized in the regulatory process (Baldwin et al., 2012), which thus represents an essential component of regulation theory.

To enhance the regulatory process, Boyd and Wilson (2017) argue that the actors in this process should collaborate to represent the knowledge and expertise of all interest groups. Accordingly, the different actors in this policy-making process and their collaboration may play a significant role in the context of regulation. As regulation can have both positive and negative effects, the actors need to carefully consider the different methods of regulation and implementation. The regulatory process therefore has to account not only for the challenges posed by AI, but also for possible increases in efficiency, reductions in workload and other beneficial effects (Stone et al., 2016). Thierer et al. (2017, p. 54) argue that "[t]he benefits of AI technologies are simply too great for us to allow them to be extinguished by poorly considered policy". This stresses the importance of a regulatory process able to assess benefits, risks and challenges to generate a beneficial AI policy as an outcome.
Based on the above-mentioned regulation-theoretical deliberations, five core elements may be deduced for the development of an AI governance framework. These elements include the reason for regulation, the object of regulation, the regulatory process itself, the actors involved in the process and the outcome of the regulatory process.

An integrated AI governance framework

The conceptual framework combines insights from the AI challenges and governance literature with the implications of regulation theory. The regulation-theoretical deliberations and core elements identified determine the basic structure of the framework, which consists of five layers. (1) As AI technology, services and applications are able to cause market failures, they represent the object of regulation (AI technology, services and applications layer). (2) Market failure manifests itself through an external effect of the AI technology and the associated challenges posed to society (AI challenges layer). (3) To counter possible negative effects, a regulatory process is needed to assess costs and benefits as well as to evaluate the outcomes with and without regulation (AI regulation process layer). (4) At the end of the process, policies, laws and other means of regulation are implemented to prevent or adjust the aspects leading to market failure (AI policy layer). (5) Given the great impact of regulation on society and its potentially negative effects, the affected stakeholders and representatives of public and private interest groups should support the entire regulatory process (Collaborative AI governance layer). Figure 2 depicts the integrated AI governance framework with its individual layers.
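For readers who prefer a compact view, the following illustrative Python snippet restates the correspondence between the five layers and the regulation-theoretical core elements exactly as described above; the structure is only a summary device, not part of the framework itself.

```python
# Summary of the framework described in the text: each layer operationalizes
# one regulation-theoretical core element.
framework_layers = {
    "AI technology, services and applications layer": "object of regulation",
    "AI challenges layer":                             "reason for regulation",
    "AI regulation process layer":                     "regulatory process itself",
    "AI policy layer":                                 "outcome of the regulatory process",
    "Collaborative AI governance layer":               "actors involved in the process",
}

for layer, element in framework_layers.items():
    print(f"{layer:<48} -> {element}")
```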
In the second step of understanding, the AI system has to structure new information and integrate it with all information gained before to build up a representation of virtual knowledge. This knowledge is analyzed for possible patterns afterwards to derive conclusions about the content of new and old pieces of information. In the last step of acting, the gained information from the second step is used to perform a certain action. Depending on the purpose of the AI system in question, this action could refer to the adjustment of its own learning algorithm, to the report of insights drawn from the data (e.g., a financial report) or the decision to act in a certain way, like steering an automated car to the left to evade a cyclist on the street. Since it is possible to encounter challenges of AI at each of these steps, it is important to distinguish between these different stages of processing to implement the right regulation method.
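The staged view sketched above can be expressed as a minimal pipeline; the sketch below is an assumption-laden illustration (the sensor input, the rule-of-thumb pattern step and the print-based action are all invented) of why regulation can attach to perceiving, understanding or acting separately.

```python
# Illustrative perceive -> understand -> act pipeline; every detail here is hypothetical.

def perceive(raw_reading: float) -> dict:
    """Stage 1: turn raw sensor input into structured data (e.g., validate, anonymize)."""
    return {"speed_kmh": max(0.0, raw_reading)}

def understand(observation: dict, knowledge: list) -> dict:
    """Stage 2: integrate the new observation with prior knowledge and derive a conclusion."""
    knowledge.append(observation)
    avg = sum(o["speed_kmh"] for o in knowledge) / len(knowledge)
    return {"speeding": observation["speed_kmh"] > 50, "average_speed": avg}

def act(conclusion: dict) -> str:
    """Stage 3: perform an action based on the conclusion (here: only a report)."""
    return "issue warning" if conclusion["speeding"] else "no action"

knowledge_base: list = []
for reading in (42.0, 57.5, 49.0):
    decision = act(understand(perceive(reading), knowledge_base))
    print(f"reading={reading} km/h -> {decision}")
# A regulator could, for example, require consent at the perceive stage,
# explainability at the understand stage, and human oversight at the act stage.
```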
AI challenges layer

The reason for regulation is the challenges elicited by the AI technology and services described above. The AI challenges layer is divided into three parts: AI society, AI ethics, and AI law and regulation. Even though these areas are interconnected, the framework treats them as distinct parts, as each of them requires a different analysis of the underlying problems and different forms of regulation. For example, laws can be implemented to secure a certain degree of data privacy, but it is very difficult to use a law to counteract all possible ethical or moral dilemmas that can occur. Ethical questions are mostly answered by acknowledging certain principles or norms that can be transformed or translated into enforceable laws in the aftermath (IEEE, 2017).
as a whole. To evaluate the success of regulation, it is also necessary to define indicators for measuring the risks and benefits of the planned regulation as well as its effects on different stakeholders. Following this, further actions are required to plan and document the risk assessment as well as the evaluation and risk management to guide future decisions. At the end of this stage, the resources available are distributed and allocated among the stakeholders involved to enable further actions.
The next step in the regulation process is the assessment of risks, benefits and costs itself. Expert groups of the stakeholders collect the data necessary for risk assessment. To provide a full view, a clear definition of the risks and benefits has to be elaborated in order to identify and measure them in a realistic environment. Details are of great importance, as future regulation has impacts on a social, economic and ethical level. Experimental work or field research are appropriate tools to gather the necessary data and to estimate the possible social and economic impact in order to obtain a comprehensive and thorough picture of the risks and benefits of AI.

After the assessment is complete, the gathered data are evaluated. In this evaluation, the risks and benefits are compared with each other to see who will be affected in which way. In addition, the costs of both potential courses of action, i.e., regulation or no regulation, are considered for the stakeholders. Thereafter, the government needs to decide what risks and costs are acceptable and what risks and harmful effects are severe enough to justify interventions via regulation.
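As a toy illustration of this evaluation step, the sketch below compares the two courses of action, regulation versus no regulation, by summing hypothetical stakeholder-weighted benefits and costs; all numbers, stakeholder names and weights are invented and serve only to show the kind of comparison the text describes.

```python
# Purely illustrative comparison of 'regulate' vs. 'do not regulate'.
# Stakeholders, weights and values are hypothetical.
scenarios = {
    "no_regulation": {
        "citizens":   {"benefit": 30, "cost": 60},  # e.g., privacy harms dominate
        "industry":   {"benefit": 90, "cost": 20},
        "government": {"benefit": 10, "cost": 30},
    },
    "regulation": {
        "citizens":   {"benefit": 55, "cost": 15},
        "industry":   {"benefit": 70, "cost": 35},  # compliance costs
        "government": {"benefit": 25, "cost": 25},  # enforcement costs
    },
}
weights = {"citizens": 1.0, "industry": 0.8, "government": 0.6}  # relative importance

def net_value(stakeholders: dict) -> float:
    return sum(weights[s] * (v["benefit"] - v["cost"]) for s, v in stakeholders.items())

for name, stakeholders in scenarios.items():
    print(f"{name}: weighted net value = {net_value(stakeholders):+.1f}")
# The course of action with the higher weighted net value would be the candidate
# for implementation; in practice, the indicators defined in the framing stage
# would replace these invented numbers.
```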
The last stage of the process is the risk management or, in the case of AI, the regulatory action. This involves a critical review of the outcome of the evaluation and the decision for a course of action. This decision is then implemented, and the best regulation is enacted and enforced to counter the risks and increase the benefits. The different forms of regulation are discussed in the next layer of the framework. The regulatory process is not finished at this point but rather ends in an evaluation of the success of the implementation and of potential unintended side effects that could give rise to other challenges. This evaluation is performed by means of indicators developed in the framing stage to evaluate and monitor the results of the regulation. In addition, the regulatory process itself is analyzed to improve future regulation attempts.

AI policy layer

This layer represents the concrete outcome of the regulation process. There are different kinds of regulation that may serve as a countermeasure, depending on the area in which the challenges occur. Because of the high uncertainty and the great number of stakeholders involved in the regulatory process, regulation of AI might take a longer time (Scherer, 2016).

According to the governance model of Gasser and Almeida (2017), the instruments to apply regulation can be differentiated based on the time it takes to implement them. In the near term, technical challenges can be addressed by introducing or improving industry standards or guidelines to promote the disclosure of AI and data use, as well as explainability and responsibility to prevent discrimination. Similar standards have already been in use for personal care robots since 2014 (International Organization for Standardization, 2014).

In the medium term, more complex regulatory issues that are ethical in nature can be resolved. Creating an environment of fairness and trust for AI technology and services requires moral principles and ethical criteria, such as the Asilomar AI Principles (Future of Life Institute, 2017) or the general principles of AI (IEEE, 2017). Such ethical landmarks can guide and evaluate the outcome of the regulatory process and the generation of new and improved AI ethics, providing a mindset for the fair and beneficial development of AI. In the long term, governments can implement new norms and laws to solve social and legal challenges of AI. Especially challenges concerning workforce changes might require a longer investigation to clearly picture the effects of robots and automation on jobs and employment rates, as well as the effect of possible changes in the legislation.

Collaborative AI governance layer

This layer represents the actors in the regulatory process and strongly relates to the regulatory process and the resulting policies in the policy layer. Many researchers acknowledge that this process not only has to include the government that enforces regulation, but also representatives or experts from private organizations, NGOs and agencies (Cath et al., 2017; Scherer, 2016). To balance common and conflicting interests, it is necessary that different interest groups have a shared motivation or shared values of generating beneficial effects with AI technology. Belief, trust and commitment in this idea behind AI are needed to maximize positive effects and the acceptance of AI in society (Smith, 2018).

Such collaboration could take many forms, like committees, foundations or agencies. For instance, a new agency consisting of representatives of government and experts from organizations could take over responsibility for the regulatory process as a whole, providing insights into current and future AI issues and making policy proposals to government. This agency could also govern the communication between private and public organizations and propose standards and laws developed with the best knowledge of both the legislative realm and developers or organizations in the field of AI applications (Scherer, 2016; Thierer et al., 2017).
Discussion and conclusion

The recent re-emergence of AI technology with its possibilities and risks has attracted increasing attention, raising the need for governance and regulation. Public administration can hardly keep up with the rapid development of AI, which is reflected in the lack of concrete AI governance and legislation programs. While the challenges of AI and its potential adverse effects on society have recently begun to come to the attention of researchers, the issue of AI governance and regulation has been widely neglected so far, and public administration research has failed to address this matter comprehensively. The analysis of current attempts at governance and regulation of AI in the literature demonstrates that respective strategies of governments and regulatory ideas from private organizations fail to provide a conclusive and concrete way of making and implementing new policies. Only few regulatory models of AI exist that support the strategic attempts by governments to find a balance between the support of unhindered progression and regulatory control. However, these models lack a theoretical foundation and neither consider the challenges and risks of AI as causes of AI governance nor explicitly relate to the context of public administration.

In response to these shortcomings and the need for profound AI governance and regulation concepts, this study proposes an integrated AI governance framework based on regulation theory that depicts the key elements of AI governance and regulation in the context of public administration. The study contributes to public administration research by providing guidance for implementing a comprehensive regulatory process and a frame of reference for future research. The framework helps to structure the heterogeneous and interdisciplinary field of AI and to build the groundwork for promoting its practical use in public administration, as well as its acceptance and beneficial application in society.

Against this background, the study carries several implications for research and practice. The starting point of the framework refers to the challenges and adverse effects associated with AI. While some challenges can be viewed separately, many of them are highly interconnected, imposing social, ethical and legal issues at the same time (Thierer et al., 2017). For example, the use of facial recognition and video surveillance to prevent criminal behavior can reduce crime rates, but also interferes with citizens' privacy and perceived freedom in public spaces (Power, 2016). Careful consideration of such interventions is needed to apply regulatory methods that protect society from respective violations.

Similarly, the main concepts are strongly linked to each other. The framework combines the formerly separately addressed issues of regulation (Beales et al., 2017) and AI challenges (Power, 2016) to provide a holistic view of these strong interconnections. For example, the AI challenges layer is strongly connected to the AI regulation process layer, as the challenges are the very reason regulation is needed. The regulation process itself is linked to the challenges, as each challenge needs a specific approach of regulation to reach the best possible policies and to reap the benefits of AI without allowing it to harm society. Such interdependencies support an integrative approach to AI governance and regulation, as otherwise they may remain unseen, leading to missed benefits or unintended side effects. In contrast to the few other AI governance models (Gasser & Almeida, 2017; Rahwan, 2018), this study also provides a detailed explanation of a regulatory process in which measures of regulation are developed, evaluated and enacted. This process may serve as a guide and considerate procedure for government and public administration to enact policies in response to the rapid diffusion of AI and its consequences (Stone et al., 2016). While the regulatory process as such strongly focuses on the role of government, the framework also emphasizes the importance of collaborative aspects of different stakeholders that need to work together to deliver the most beneficial outcome to society. Therefore, the actors need a shared motivation or common values to improve the technology and the corresponding rule system. The involvement of different actors secures a balance between public and private interests within this process.

Another important reason for collaborations refers to the imbalanced distribution of knowledge with regard to AI. As AI is a very profitable industry, private organizations have a high personnel requirement and increasingly recruit respective experts to develop new applications and services. The low number of AI experts available leads to high competition for talent among private and public organizations. Due to higher salaries and better job conditions, many AI experts join private organizations and unintentionally produce a knowledge deficit in the public sector, slowing down and aggravating the process of regulation at the same time (Sample, 2017). Therefore, governmental institutions should form collaborations with private organizations to benefit from their advanced knowledge.
Such collaborations already exist, and private organizations are supporting regulatory processes. For example, the Information Technology Industry Council (ITI) acts as a representative for many private organizations in the technological field, supports the promotion of responsible, safe and liable AI technology, and explicitly invites governments to form public-private partnerships to improve knowledge transfer and to adapt to and prepare for societal changes (ITI, 2017). With the increasing diffusion of AI technology, collaborations might need to overcome national boundaries for regulatory efforts to succeed.

To enact beneficial AI policies, public officers addressing governance and regulation need to be aware of all relevant aspects and must see the "big picture" of AI governance. The framework provides a systematic and comprehensive conceptual guide for public officers dealing with governance-related and regulatory issues of AI. For example, policy makers need to be aware of major technological developments in AI to be able to respond quickly with appropriate policies. Detailed knowledge of every aspect is essential, as new inventions can elicit a great variety of different AI challenges that affect society and public officers themselves (Wirtz et al., 2019). One opportunity to acquire this form of knowledge is exploratory expert interviews with public officials. Furthermore, employing a systematic regulation process can increase the success of the policy outcome, as this approach ensures a comprehensive, thorough and reliable assessment of all different perspectives on the issue in order to determine the best possible regulatory action for public administration. This systematic approach also accelerates the regulatory process by providing a structured plan of which steps are necessary to reach the intended policy.

Forming collaborations is a vital aspect of successfully implementing new policies. To form collaborations within the complex field of AI, it appears reasonable to adapt already established collaborative formats. The field of open innovation, for instance, provides a variety of models and ideas for collaborations, such as ideation platforms (Kaplan & Haenlein, 2010) to gather insights or lead-user approaches (Von Hippel, 2005) to identify the needs of citizens, which can be adapted by public officers to serve the regulatory process of AI.

Despite the above-mentioned contributions, this study is also subject to some limitations, which may represent promising starting points for future research endeavors. While the framework focuses on the challenges and risks of AI as the cause for regulation, it does not explicitly address the potential benefits of AI in the public sector. The assessment of benefits, however, has to focus on the AI, its algorithms, its area of application and the affected stakeholders. In the framework, this kind of analysis is part of the evaluation in the regulatory process itself. However, as suggested in the evaluation step of the regulatory process, not only a close inspection of the benefits is essential to conclude on the usefulness of and the improvements attainable with AI, but also an assessment of the costs and potential tradeoffs associated with regulation (Thierer et al., 2017).

Furthermore, as the integrated conceptual AI governance framework is overarching in nature, focusing on the key elements of AI governance and regulation, it is not able to provide concrete guidelines and regulatory measures that can be used to address specific risks or challenges. This matter requires more detailed research within the regulatory process, which could take more time, as some consequences of regulation may be difficult to measure. The further implementation of rules or laws faces similar difficulties, as the effects of regulation may be hard to estimate in advance. For example, the increasing replacement of jobs by AI requires the development of a system that supports the now unemployed workers. While the idea of a robot tax for industry or a guaranteed basic income for the unemployed might be appropriate (Stone et al., 2016), it will certainly take some time to find regulatory solutions that are fair to all stakeholders.

Another limitation refers to practical problems with the concept of regulation itself. Governmental regulation has always faced the problem of reactivity, providing policies for problems that have already occurred in reality. In addition, the process of regulation can take a very long time due to unclear problems, imprecise definitions and great bureaucratic efforts (Boesl & Bode, 2016). Although the framework provides short-, mid- and long-term policy suggestions, it is impossible to define an exact time frame without knowledge of the specific problem, its effects and all stakeholders involved. Future research could address the question of time by interviewing experts such as lawyers or judges to improve the future planning and organization of the regulatory process. Overall, further research is essential to conceptually refine and empirically test the framework developed in order to improve the inchoate understanding of AI governance and regulation in the public realm.

References

AGCS. (2018). The rise of artificial intelligence: Future outlook and emerging risks. Allianz Group. https://www.agcs.allianz.com/content/dam/onemarketing/agcs/agcs/reports/AGCS-Artificial-Intelligence-Outlook-and-Risks.pdf
Ansip, A. (2017). Making the most of robotics and artificial intelligence in Europe. European Commission. https://ec.europa.eu/commission/commissioners/2014-2019/ansip/blog/making-most-robotics-and-artificial-intelligence-europe_en
Asaduzzaman, M., & Virtanen, P. (2016). Governance theories and models. In A. Farazmand (Ed.), Global encyclopedia of public administration, public policy, and governance (Vol. 65, pp. 1–13). Springer International Publishing. https://doi.org/10.1007/978-3-319-31816-5_2612-1
Asimov, I. (1950). I, robot. Gnome Press. https://www.ttu.ee/public/m/mart-murdvee/Techno-Psy/Isaac_Asimov_-_I_Robot.pdf
Baldwin, R., Cave, M., & Lodge, M. (2012). Understanding regulation: Theory, strategy, and practice (2nd ed.). Oxford University Press.
Banerjee, S., Singh, P. K., & Bajpai, J. (2017). A comparative study on decision-making capability between human and artificial intelligence. In B. K. Panigrahi, M. N. Hoda, V. Sharma, & S. Goel (Eds.), Advances in intelligent systems and computing: Nature inspired computing (Vol. 652, pp. 203–210). Springer Berlin Heidelberg. https://doi.org/10.1007/978-981-10-6747-1_23
Bataller, C., & Harris, J. (2016). Turning artificial intelligence into business value. Today. https://www.accenture.com/t20160814T215045__w__/us-en/_acnmedia/Accenture/Conversion-Assets/DotCom/Documents/Global/PDF/Technology_11/Accenture-Turning-Artificial-Intelligence-into-Business-Value.pdf
Beales, H., Brito, J., Davis, J. K., DeMuth, C., Devine, D., Dudley, S., Mannix, B., & McGinnis, J. O. (2017). Government regulation: The good, the bad, & the ugly, released by the regulatory transparency project of the federalist society. The Federalist Society. https://regproject.org/wp-content/uploads/RTP-Regulatory-Process-Working-GroupPaper.pdf
Bergen, M. (2018). Google grapples with 'horrifying' reaction to uncanny AI tech. Bloomberg. https://www.bloomberg.com/news/articles/2018-05-10/google-grapples-with-horrifying-reaction-to-uncanny-ai-tech
Bleicher, A. (2017). Demystifying the black box that is AI: Humans are increasingly entrusting our security, health and safety to "black box" intelligent machines. Scientific American. https://www.scientificamerican.com/article/demystifying-the-black-box-that-is-ai/
Boesl, D. B. O., & Bode, M. (2016). Technology governance. In 2016 IEEE international conference on emerging technologies and innovative business practices for the transformation of societies (EmergiTech). IEEE.
Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316–334). Cambridge University Press.
Boyd, M., & Wilson, N. (2017). Rapid developments in artificial intelligence: How might the New Zealand government respond? Policy Quarterly, 13(4), 36–44. https://doi.org/10.26686/pq.v13i4.4619
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial intelligence and the 'good society': The US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505–528. https://doi.org/10.1007/s11948-017-9901-7
Coles, T. (2018). How GDPR requirements affect AI and data collection. ITPro Today. http://www.itprotoday.com/risk-and-compliance/how-gdpr-requirements-affect-ai-and-data-collection
COMEST. (2017). Report of COMEST on robotics ethics. UNESCO. http://unesdoc.unesco.org/images/0025/002539/253952E.pdf
Cuthbertson, A. (2018). Elon Musk and Stephen Hawking warn of artificial intelligence arms race. Newsweek. https://www.newsweek.com/ai-asilomar-principles-artificial-intelligence-elon-musk-550525
De Geest, G. (Ed.). (2017). Encyclopedia of law and economics (2nd ed.). Elgar. https://doi.org/10.4337/9781782547457
Delucci, M. A. (2000). Environmental externalities of motor-vehicle use in the US. Journal of Transport Economics and Policy, 34(2), 135–168. http://www.jstor.org/stable/20053837
Den Hertog, J. A. (2012). Economic theories of regulation. In R. van den Bergh & A. M. Pacces (Eds.), Encyclopedia of law and economics: Vol. 9. Regulation and economics (2nd ed., pp. 25–95). Elgar.
Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019
Future of Life Institute. (2017). Asilomar AI principles. Future of Life Institute. https://futureoflife.org/ai-principles/
Gasser, U., & Almeida, V. (2017). A layered model for AI governance. IEEE Internet Computing, 21(6), 58–62. https://doi.org/10.1109/MIC.2017.4180835
Helbing, D., Frey, B. S., Gigerenzer, G., Hafen, E., Hagner, M., Hofstetter, Y., van den Hoeven, J., Zicari, R., & Zwitter, A. (2017). Will democracy survive big data and artificial intelligence? Scientific American. https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/
Heyns, C. (2014). Report of the special rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns. Human Rights Council of the United Nations General Assembly. https://digitallibrary.un.org/record/771922/files/A_HRC_26_36-EN.pdf
IEEE. (2017). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems, version 2. IEEE. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf
International Organization for Standardization. (2014). ISO 13482:2014: Robots and robotic devices – safety requirements for personal care robots. International Organization for Standardization. https://www.iso.org/standard/53820.html
ITI. (2017). Artificial intelligence policy principles. ITI. https://www.itic.org/public-policy/ITIAIPolicyPrinciplesFINAL.pdf
Kaplan, A. M., & Haenlein, M. (2010). Users of the world, unite! The challenges and opportunities of social media. Business Horizons, 53(1), 59–68. https://doi.org/10.1016/j.bushor.2009.09.003
König, A., Kuiper, H. A., Marvin, H. J. P., Boon, P. E., Busk, L., Cnudde, F., Cope, S., Davies, H. V., Dreyer, M., Frewer, L. J., Kaiser, M., Kleter, G. A., Knudsen, I., Pascal, G., Prandini, A., Renn, O., Smith, M. R., Traill, B. W., Voet, H. V. D., Vos, E., & Wentholt, M. T. A. (2010). The SAFE FOODS framework for improved risk analysis of foods. Food Control, 21(12), 1566–1587. https://doi.org/10.1016/j.foodcont.2010.02.012
Lin, P., Bekey, G., & Abney, K. (2008). Autonomous military robotics: Risk, ethics, and design. San Luis Obispo, CA: California Polytechnic State University.
McGinnis, J. O. (2010). Accelerating AI. Northwestern University Law Review, 104(3), 1253–1269. https://doi.org/10.2139/ssrn.1593851
Mills, T. (2018). The impact of artificial intelligence in the everyday lives of consumers. Forbes. https://www.forbes.com/sites/forbestechcouncil/2018/03/07/the-impact-of-artificial-intelligence-in-the-everyday-lives-of-consumers/
Power, D. J. (2016). "Big brother" can watch us. Journal of Decision Systems, 25(sup1), 578–588. https://doi.org/10.1080/12460125.2016.1187420
Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14. https://doi.org/10.1007/s10676-017-9430-8
Sample, I. (2017). Big tech firms' AI hiring frenzy leads to brain drain at UK universities. The Guardian. https://www.theguardian.com/science/2017/nov/02/big-tech-firms-google-ai-hiring-frenzy-brain-drain-uk-universities
Scherer, M. U. (2016). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29(2), 354–400. https://dx.doi.org/10.2139/ssrn.2609777
Sirosh, J. (2017). Delivering AI with data: The next generation of the Microsoft data platform. Microsoft. https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/19/delivering-ai-with-data-the-next-generation-of-microsofts-data-platform/
Smith, B. (2018). Facial recognition technology: The need for public regulation and corporate responsibility. Microsoft. https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility/
Stemler, A. (2016). Regulation 2.0: The marriage of new governance and lex informatica. Vanderbilt Journal of Entertainment & Technology Law, 19(1), 87–132. https://dx.doi.org/10.2139/ssrn.2746229
Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., & Teller, A. (2016). Artificial intelligence and life in 2030. One hundred year study on artificial intelligence: Report of the 2015-2016 study panel. Stanford, CA: Stanford University. https://ai100.stanford.edu/2016-report
Thierer, A., O'Sullivan, A., & Russell, R. (2017). Artificial intelligence and public policy. Arlington, VA: Mercatus Center, George Mason University. https://www.mercatus.org/system/files/thierer-artificial-intelligence-policy-mr-mercatus-v1.pdf
Veale, M., van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1–14). ACM. https://doi.org/10.1145/3173574.3174014
Von Hippel, E. (2005). Democratizing innovation. MIT Press.
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Transparent, explainable, and accountable AI for robotics. Science Robotics, 2(6), eaan6080. https://doi.org/10.1126/scirobotics.aan6080
Wang, W., & Siau, K. (2018). Artificial intelligence: A study on governance, policies, and regulations. In MWAIS 2018 proceedings. Association for Information Systems. http://aisel.aisnet.org/mwais2018/40
Weyerer, J. C., & Langer, P. F. (2019). Garbage in, garbage out: The vicious cycle of AI-based discrimination in the public sector. In Y.-C. Chen, F. Salem, & A. Zuiderwijk (Eds.), 20th annual international conference on digital government research – dg.o 2019 (pp. 509–511). ACM Press. https://doi.org/10.1145/3325112.3328220
Wirtz, B., Weyerer, J., & Geyer, C. (2019). Artificial intelligence and the public sector – Applications and challenges. International Journal of Public Administration, 42(7), 596–615. https://doi.org/10.1080/01900692.2018.1498103
Zheng, Y., Yu, H., Cui, L., Miao, C., Leung, C., & Yang, Q. (2018). SmartHS: An AI platform for improving government service provision. In 32nd AAAI conference on artificial intelligence. AAAI. https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16041/16369