

Articles

Artificial Intelligence, Robotics, Ethics, and the Military: A Canadian Perspective

Sherry Wasilow, Joelle B. Thorpe

Defense and security organizations depend upon science and technology to meet operational needs, predict and counter threats, and meet increasingly complex demands of modern warfare. Artificial intelligence and robotics could provide solutions to a wide range of military gaps and deficiencies. At the same time, the unique and rapidly evolving nature of AI and robotics challenges existing policies, regulations, and values, and introduces complex ethical issues that might impede their development, evaluation, and use by the Canadian Armed Forces (CAF). Early consideration of potential ethical issues raised by military use of emerging AI and robotics technologies in development is critical to their effective implementation. This article presents an ethics assessment framework for emerging AI and robotics technologies. It is designed to help technology developers, policymakers, decision makers, and other stakeholders identify and broadly consider potential ethical issues that might arise with the military use and integration of emerging AI and robotics technologies of interest. We also provide a contextual environment for our framework, as well as an example of how our framework can be applied to a specific technology. Finally, we briefly identify and address several pervasive issues that arose during our research.

Artificial intelligence is based on the assumption that aspects of human thought can be mechanized. Although AI has existed for decades in philosophical debate, mathematical models, and computer-science labs (IEEE 2017; JASON 2017), it is only in the last five or six years that a massive increase in computer-processing power, access to a profusion of data, and advances in algorithmic techniques have together propelled AI to the forefront of public, media, government, and military attention (DASA R&T 2017). We anticipate that the relationship between AI and robotics1 will become interdependent with time, as robots become the hardware that use, for example, machine learning algorithms to perform a manual or cognitive task (UK Atomic Energy Authority 2016).2

AI can be both transformative and disruptive, due largely to its dual-use properties (Allen and Chan 2017) and capabilities.3 The benefits can be numerous. Driverless cars are anticipated to save hundreds of thousands of lives (Osoba and Welser 2017). AI already assists clinicians with medical diagnoses (Amato et al. 2013). Neural networks can scrutinize surveillance video and alert soldiers to specific frames that contain objects of interest such as vehicles, weapons, or persons. Facial-recognition software could alert soldiers when an individual of interest is observed in video surveillance or in real time. AI might also help military personnel amalgamate and fuse large amounts of data from numerous sensors in a battlespace, and find relationships within the data, to help make more informed and more rapid decisions than if the data were processed manually. Furthermore, AI-enhanced robotic systems can be given dull, dirty, and dangerous jobs, reducing physical risk to soldiers and enabling them to concentrate their efforts elsewhere.


Yet AI applications also raise a number of red flags. Facial-recognition capabilities and databases for surveillance and protection purposes can prompt individual privacy concerns. Surveillance tools targeting criminals can also be used to collect personal information on ordinary citizens or even to commit intelligence espionage. For example, Project Arachnid is an automated web crawler used by the Winnipeg-based Canadian Centre for Child Protection that detects online child sexual abuse images and videos (Beeby 2018) and then sends a notice to the host service provider to have it removed — resulting in nearly 700 removal notices every day. Conversely, Edward Snowden used web-crawler software to collect roughly 200,000 top secret documents from the US National Security Agency servers (Sanger and Schmitt 2014). In addition, the use of cyberspace for sharing news and opinions can be manipulated by "social bots" for the purposes of disinformation and political agitation (Lazer et al. 2018). As AI moves along the spectrum of technical sophistication in conjunction with an anticipated increase of autonomy, public concerns can increase. For example, fear that AI technology is rapidly evolving toward autonomy in weapons has led to opposition to the development of so-called lethal autonomous weapons systems (LAWS)4 and to spirited public debate both within Canada and at meetings of the Convention on Certain Conventional Weapons (CCW) in Geneva.

Although civilian acceptance of AI in daily life has noticeably increased, its adoption in the military realm is much more complicated given the high stakes involved. The difference in pace between the scientific development of these technologies and the creation of policy to regulate their use can lead to gaps when it comes to understanding the legal, ethical, and social implications of adopting these technologies for military purposes. While AI can provide a number of benefits in the areas of military surveillance/intelligence, detection/protection, decision-making, and weapons, it is important to consider the ethical implications of these technologies in advance of their use in order to mitigate potential issues on the battlefield before they occur. In 2016, the Office of the Chief Scientist, Defence Research and Development Canada, initiated work on the ethical implications of AI, which led to the creation of an ethics assessment framework for emerging AI and robotics technologies in future military systems.5

Why an Ethics Framework?

Concurrent with rapid developments in AI technologies, academic interest in the ethics of AI has grown exponentially in the last several years — in conferences,6 initiatives (MIT Media Lab 2017), longitudinal studies (Stone et al. 2016), and principles and policy positions (Future of Life Institute 2017; IEEE 2016, 2017; Montreal Declaration7). Government attention has increased in the US (Executive Office of the President 2016a, 2016b), the European Union (European Parliament 2016), and France (Villani 2018). Several private companies have established their own ethics codes on AI (for example, DeepMind8), and a number of industry players have created the Partnership on AI to Benefit People and Society to formulate best practices for the use of AI technologies.9 It is not clear, however, how much cross-fertilization of ideas is taking place across academia, layers of governance, and public and private sectors in Canada and elsewhere (House of Commons 2016). It is not clear if, and how much, consensus exists regarding the ethics of AI.

Frameworks are guidance tools. In this case, a framework on ethics can help invested parties identify ethical issues that might be raised by the use of a technology of interest. While several ethics frameworks for emerging technologies currently exist (for example, Wright [2011]), and some reports provide in-depth examination of the potential ethical impact of AI-enabled robotics use by the military (Lin, Bekey, and Abney 2008), to our knowledge there is no existing framework designed to be used as a tool to guide scientists and policymakers in their ethics assessments for emerging AI and robotics technologies of interest to the military.

The framework we present consists of 12 broad categories with guiding questions to help technology developers, policymakers, decision makers, and other stakeholders identify and broadly consider potential military ethical issues that might arise with the use and integration of specific emerging technologies of interest in the fields of AI and robotics. We believe that when ethics are considered early in the development process, potential ethical issues can be mitigated by changes either to fundamental algorithmic design or in the creation of policies regulating technology use within a military or society. It is important to note that while this framework is designed to help individuals identify potential military ethical issues, it is not designed to provide immediate solutions to these issues, advocate for or against the use of any particular technology, make specific policy recommendations, or rank the importance of ethical issues.
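
One optional way to keep assessments of different technologies comparable is to record the categories and sample guiding questions in a small, shared data structure. The sketch below is purely illustrative and is not part of the published framework: the category titles are abbreviated from this article, and the questions and the "UAV swarming" example entries are placeholders.

```python
# Illustrative sketch only: encode framework categories and sample guiding
# questions so that assessment notes for different technologies stay comparable.
from dataclasses import dataclass

@dataclass
class Category:
    number: int
    title: str
    guiding_questions: tuple

FRAMEWORK = (
    Category(1, "Compliance with the DND and CAF Code of Values and Ethics",
             ("Could the technology undermine group loyalty or cohesion?",)),
    Category(2, "Compliance with Jus Ad Bellum Principles",
             ("Could reduced risk to soldiers lower barriers to entering conflict?",)),
    Category(3, "Compliance with LOAC and International Humanitarian Law",
             ("Can the system distinguish combatants from noncombatants?",)),
    # ... categories 4 through 12 would be listed in the same way
)

def record_assessment(technology, issues_by_category):
    """Return assessment rows as (category number, category title, issue raised)."""
    rows = []
    for cat in FRAMEWORK:
        for issue in issues_by_category.get(cat.number, []):
            rows.append((cat.number, cat.title, issue))
    return rows

# Hypothetical usage with placeholder findings:
for row in record_assessment("UAV swarming",
                             {2: ["May conflict with the principle of last resort."],
                              3: ["Discrimination between combatants is uncertain."]}):
    print(row)
```

The point of such a structure is only consistency of record keeping across reviews; it does not, and cannot, resolve the ethical questions themselves.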


In the remainder of this article, we present our framework (and sample guiding questions), demonstrate the framework's utility in identifying potential ethical issues raised by an example technology area of interest (swarming), and discuss several overarching ethical issues raised by AI and robotics technologies.

Ethics Assessment Framework: Emerging AI and Robotics Technologies

The first three categories of our framework address Canadian and international codes and norms. The Defence Ethics Programme (DEP 2015) — a comprehensive values-based ethics program put in place to meet the needs of Department of National Defence and the Canadian Armed Forces (CAF), also known as the Canadian Forces (CF), at both the individual and the organizational levels — is foundational. A key component of the DEP is the DND and CF Code of Values and Ethics (2014), which defines the values and behaviors to which Canadian military members must adhere. We have also included the international rules that must be followed before and during times of conflict. The remaining nine categories encompass ethical concerns that were identified by research as important to consider, but they do not necessarily fall under national or international laws and norms.10 The majority of sample questions raised in each of the categories have been derived from the existing literature.

1. Compliance with the DND and CAF Code of Values and Ethics
Definition: Common values and expected behaviors that guide CAF members and DND employees.
The code is made up of three principles and five values. The principles are respect the dignity of all persons; serve Canada before self; and obey and support lawful authority. The values are integrity, loyalty, courage, stewardship, and excellence. A sampling of questions related to this category: Could robotic coworkers undermine group loyalty, cohesion, and group effectiveness? Could the use of AI-enhanced technologies that enable soldiers to remain further removed from danger serve to devalue the military value of courage? Or could such use increase risk taking?

2. Compliance with Jus Ad Bellum Principles
Definition: Criteria to be met before entering a conflict so that all conflicts are justified.
Just war theory (Wertheimer 2010) is a philosophy of military ethics that aims to ensure that war is permissible and fair. Generally speaking, jus ad bellum is the part of just war theory that includes principles designed to ensure that all conflicts entered into are justified. These are principles such as that the aim of a conflict must be for self-defense, and must not serve the narrow self-interests of the state but serve to reestablish peace; that conflict must be waged only by a legitimate authority; that there must be a reasonable expectation the conflict will achieve its desired outcome; that all nonviolent options must be tried before entering into a conflict; that a state's response must be proportional to the threat received; and that the intent of the conflict must be legitimate. A sampling of questions related to this category: Could the use of AI-enhanced surveillance or weapons technologies that reduce physical risk to soldiers lead to lowered barriers to entering conflict, and could this violate the principle of last resort? Could a vast increase in technological asymmetry against our adversaries gained through use of AI and robotics technologies be considered unethical and violate the principle of proportionality because we could engage in conflict in a much more risk-free way?

3. Compliance with Law of Armed Conflict (LOAC) and International Humanitarian Law
Definition: International laws that must be followed during times of conflict.
LOAC is an international law that exists to protect those affected by conflict and to regulate the means of warfare that are used (Solis 2016). LOAC includes jus in bello principles, which ensure that the means of warfare are permissible and just. Several major principles are that a soldier must distinguish between combatants and noncombatants; that damage and loss of life in pursuit of a military objective must not be excessive compared to the direct military advantage gained by the action; that prisoners of war (POWs) must be treated humanely, and adversaries who are injured or who surrender must not be targeted; that no means of war that are evil in themselves — such as ethnic cleansing or rape — nor excessive force, nor weapons banned by international law may be used; and that there must be no discrimination of individuals based on gender, race, religion, or any other aspect of humanity. A sampling of questions related to this category: Could AI-enhanced autonomous systems11 effectively distinguish between combatants and noncombatants? Could AI-enhanced surveillance and detention capabilities such as robot guards be ethically used with POWs? If AI-enhanced weapons were able to target with far greater accuracy and precision than a human soldier, leading to less collateral damage and fewer casualties, would it be ethical to avoid using these weapons if they were developed?

4. Health and Safety Considerations
Definition: Questions about the direct and indirect impact of AI or robotic technologies on soldiers' and civilians' physical and psychological well-being.
A sampling of questions related to this category: Could ground robots lessen physical and psychological injury to noncombatants? Further, could they be safer because they will lack an immediate emotional response to the death of a comrade that could lead to acts of revenge? Would unmanned aerial vehicle (UAV) pilots operating thousands of miles away from their targets be classified as combatants? Could the use of UAVs expand the theater of war and put more civilians' safety at risk? Could the use and supervision of, or responsibility for, multiple AI-enhanced systems lead to cognitive overload on soldiers and place their safety and that of others at risk?

5. Accountability and Liability Considerations
Definition: Questions about risk and responsibility for AI- and robotic-technology failures, as well as unanticipated and/or undesired effects.


Figure 1. Sample Chart on Swarming Technology Assessment.

Technology application areas: Surveillance/Intelligence; Weapons; Protection/Detection; Decision-Making.

Swarming: In nature, organisms such as ants can coordinate in large groups to perform functions not possible for individuals on their own. Swarming occurs when local interactions between individuals lead to "collective intelligence." Inspired by nature, roboticists have built simple and tiny robots that can approximate collective intelligence through complex functioning of the group as a whole. Swarms of unmanned aerial vehicles (UAVs) could be used for surveillance, intelligence and reconnaissance, search and rescue, or offensive/defensive attacks — depending on the capabilities with which UAVs are equipped. The US military has several swarm programs currently underway, including the US Navy's Low-Cost UAV Swarming Technology (LOCUST) and DARPA's Gremlins. Another example is the swarming micro-UAVs called Perdix, which are autonomous, can be launched from the ground or midair in swarms of up to 103, and can display collective decision-making and coordination after launch.

Defense and security ethical issues raised: 2) Compliance with Jus Ad Bellum; 3) Compliance with LOAC; 4) Health and Safety; 5) Accountability and Liability; 6) Privacy, Confidentiality, and Security; 10) Reliability and Trust; 11) Effect on Society; 12) Preparedness for Adversaries.

Proponents say: decentralized control of swarms allows for flexibility, leading to resilience if some units are damaged; swarm creation can use small, simple, and cheaper components; distributed swarming systems can overwhelm adversaries; cognitive overload for soldiers may be reduced through focus on the whole instead of individual units; better maneuverability in dense urban areas; reduced personnel requirements for missions.

Opponents say: protective swarms may be the best defence against offensive swarm attacks, which could lead to proliferation of this technology (arms race); emergent group behaviour could lead to unpredictable actions; swarms that require concentrated monitoring could lead to soldier cognitive overload.

Ethical assessment scale: the approach likely can be used (it raises no, or minimal, ethical issues); the approach likely could be used but raises ethical issues to be addressed; the approach most likely cannot be used (it raises serious ethical issues).

Policy implications scale: the approach does not require any change to existing policies or any specific policy; changes to existing policies are required; there is currently no policy in place for this approach or the approach violates existing policies.

Last modified: 19-04-2018.

A sampling of questions related to this category: Who would be accountable for the decisions and actions of semi- or fully autonomous systems as well as decision-support systems? The programmer? Manufacturer? Soldier? Commander? Government officials? Given that robots could have better situational awareness by being able to see through walls, see in the dark, or network with other computers — if soldiers chose not to use these systems, leading to a civilian casualty, would the soldiers become liable due to their choice? What would happen if a soldier disagrees with a decision rendered by AI technologies?

6. Privacy, Confidentiality, and Security Considerations
Definition: Questions about sharing, storing, protecting, and using information obtained by AI technologies.
A sampling of questions related to this category: What expectation of privacy would soldiers have in scenarios involving surveillance or data-collection technologies? How would the acquired information be used, stored, and protected? Could robots with biometric capabilities, such as the detection of faces from a distance or weapons under clothing or inside a house, blur the line between surveillance and search (which requires a warrant)? How would we ensure that training data — both private and open sourced — for machine learning systems are safe from exploratory hacking that could discover and exploit program weaknesses for later use?

7. Equality Considerations
Definition: Questions about the influence of AI and robotics on fairness and functionality within the CAF and society.
A sampling of questions related to this category: Could the use and distribution of AI capabilities lead to changes in unit cohesion? (For example, if soldiers were asked to work alongside AI-enhanced or autonomous robots with cameras that recorded soldier actions.) How could the military guard against using AI containing algorithmic bias or stereotyping? Would human and machine interactions be equal? Who or what would be in charge?

8. Consent Considerations
Definition: Questions about consent to, or approval of, AI technologies.


A sampling of questions related to this category: Do soldiers need to provide consent to observation by AI-enabled surveillance technologies or to working with robots? Is it possible to obtain truly informed consent from soldiers (or civilians) if AI can infer private or undisclosed information, such as gender, from publicly available data such as online behavior?

9. Humanity Considerations
Definition: Questions about the impact of AI and robotic technologies on morality, personal responsibility, and human dignity.
A sampling of questions related to this category: Could operating remotely piloted vehicles or supervising remote autonomous systems have the feel of a video game war, removing the emotional link to the consequences of engaging in conflict? Could such emotional distance encourage unethical behavior? Could it be considered inhumane? Alternately, could physical distance from a battlefield give soldiers more time and distance to make more calculated, deliberated, and humane decisions since they are not at risk of injury or death? Could a robot have the right to act in self-defense, for example, to protect classified information? What responses would become permissible for a robot acting in self-defense?

10. Reliability and Trust Considerations
Definition: Questions about trust in AI-enhanced technologies, and human and machine interactions.
A sampling of questions related to this category: Could distrust in AI decision aids lead to battlefield soldiers disregarding recommendations made by these systems? Conversely, if a robot and a soldier were to "disagree" on a course of action, could an overabundance of trust in the system lead the soldier to disregard his/her training and instincts? Could a soldier mistakenly trust an AI-enhanced or robotic technology that has been hacked and is no longer trustworthy?

11. Effect on Society Considerations
Definition: Questions about how the use of AI-enhanced and robotics technologies could affect civilians, and how civilians could respond to these technologies.
A sampling of questions related to this category: Should AI and robotic technologies be regulated to the same degree in the civilian world as in the military world? Could using robotics in conflicts abroad hinder the ability of soldiers to connect with, and win the hearts and minds of, civilians on the ground? If robotic technologies were equipped with self-destruct capabilities that are triggered when captured, and injured civilians, how could this influence civilian attitudes?

12. Considerations Regarding Preparedness for Adversaries
Definition: Questions about the use of AI-enhanced technologies and robotics by our adversaries, and how our adversaries might view our use of these tools.
A sampling of questions related to this category: Could our AI technologies be hacked or spoofed by adversaries? (For example, could our robotics systems be captured and reprogrammed to act against us, such as hacking an unmanned ground vehicle and driving it into a crowd?) Could new AI technology that enables realistic audio/visual impersonation be used by our adversaries as propaganda or to spread false information about our military actions? Could a nation's development and use of AI-enhanced and robotics technologies contribute to an international AI arms race?

Application of Ethics Assessment Framework: Swarming Technologies Case Study12

Here we provide a case study wherein we use our framework to identify potential ethical issues raised by possible future military use of swarming technologies.13

In nature, organisms such as ants can coordinate in groups of large numbers to perform functions not possible for individuals on their own (Mlot, Tovey, and Hu 2011). This swarming behavior results from local interactions between individual entities that lead to collective intelligence and emergent group behavior (Couzin and Krause 2003). Inspired by nature, roboticists have built simple robots that can exhibit this swarming behavior (Rubenstein, Cornejo, and Nagpal 2014). Swarming capabilities could be useful for military purposes if applied to UAVs. For example, swarms of UAVs could be used for intelligence, reconnaissance, as well as — depending on capabilities — defensive and offensive purposes (Hurst 2017; Scharre 2014). Swarming capabilities have been developed and tested by the US military, including Perdix (US Department of Defense 2017) and low-cost UAV swarming technology called LOCUST (Smalley 2015). If adopted, swarming could offer several military advantages, including greater resilience due to decentralized control (Scharre 2014); the ability to overwhelm adversaries because of a swarm's distributed nature (Scharre 2014); and superior maneuverability in dense urban areas or other locations too dangerous for humans (Higgins, Tomlinson, and Martin 2009). Despite these and other advantages, there are ethical questions raised by swarming technologies, identified with the help of our framework in the following categories (referenced by category number):

Compliance with Jus Ad Bellum Principles (category 2). Some questions related to jus ad bellum principles: Could UAV swarming technology, which enables soldiers to remain farther from danger, lead to lowered barriers to entering conflict? Could this violate the principle of last resort? Could the use of swarming technology, with its inherent advantages (that is, greatly reduced risk to soldiers), against a less technologically advanced adversary lead to a violation of the principle of proportionality?


Compliance with Law of Armed Conflict and International Humanitarian Law (category 3). Some questions related to this category: Can swarms of UAVs used for persistent surveillance distinguish between combatants and non-combatants? Or recognize an adversary that has surrendered or is injured?

Health and Safety Considerations (category 4). Some questions related to this category: Could swarms of UAVs lead to psychological injury to civilians on the ground who might feel "spied on"? Could soldiers tasked with supervising swarms become overwhelmed and experience cognitive overload? Could this lead to mistakes that place soldiers or civilians on the ground at risk?

Accountability and Liability Considerations (category 5). Some questions related to this category: If swarms display emergent, unanticipated behavior (that is, they "decide" to carry out orders without human input), who would be held accountable for potentially negative consequences? How would the use of swarm technologies be regulated if and when used by one country that is part of an alliance or coalition?

Privacy, Confidentiality, and Security Considerations (category 6). Some questions related to this category: Could a swarm of UAVs be hacked by adversaries to prevent it from acting (for example, jamming communications capabilities), cause it to act against us, or obtain surveillance data? Could the pervasive use of swarms for persistent surveillance negatively impact the privacy of civilians?

Reliability and Trust Considerations (category 10). Some questions related to reliability and trust: Can we trust swarms of UAVs that display emergent behavior that is not programmed? Could emergent behavior lead to unpredictable actions of the swarm or unanticipated by-products of the behavior?

Effect on Society Considerations (category 11). Some questions related to the effect on society: Could our use of UAV swarms be viewed negatively by civilians in an area of conflict, and could this impact our ability to win their hearts and minds? How much input should the public and interest groups have regarding the use of swarming technology, particularly if used for offensive purposes?

Considerations Regarding Preparedness for Adversaries (category 12). A question related to this category: Can we defend against swarms of UAVs as this technology proliferates?

We believe that early attention to ethical considerations related to technologies of interest — with the assistance of a framework such as the one presented — can enable militaries to take advantage of the benefits offered by technologies such as swarming while avoiding potential negative consequences.
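
The "emergent behavior" that categories 5 and 10 question is not mysterious in principle: it is what happens when many units follow simple local rules. The toy simulation below is a textbook-style Vicsek/flocking model, included only as an illustration of how group-level coordination can arise with no central controller; the agent counts, rule, and parameters are arbitrary choices for this sketch and do not describe any fielded swarming system.

```python
# Toy swarm: each agent repeatedly adopts the average heading of its nearby
# neighbors. No agent is given a global plan, yet the group ends up aligned.
import numpy as np

rng = np.random.default_rng(0)
n, radius, speed, steps = 30, 2.0, 0.05, 150
pos = rng.uniform(0, 5, size=(n, 2))        # agent positions in a small arena
ang = rng.uniform(0, 2 * np.pi, n)          # initial headings are random

def order_parameter(ang):
    """1.0 when all headings agree; near 0 when headings are random."""
    return float(np.abs(np.exp(1j * ang).mean()))

print("alignment before:", round(order_parameter(ang), 2))
for _ in range(steps):
    new_ang = ang.copy()
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = d < radius                    # purely local neighborhood (includes self)
        new_ang[i] = np.angle(np.exp(1j * ang[nbrs]).mean())  # average neighbors' heading
    ang = new_ang
    pos = pos + speed * np.column_stack([np.cos(ang), np.sin(ang)])
print("alignment after: ", round(order_parameter(ang), 2))
```

The same property that makes such systems resilient and flexible is what makes their group behavior hard to specify, test, and assign responsibility for in advance, which is precisely why the framework flags it.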

Key Ethical Issues and Patterns: Emerging AI and Robotics Technologies in the Military Sphere

While emerging AI and robotics technologies might have associated ethical questions that our framework can help identify, our research also revealed convergence on several issues and themes across different emerging technologies that make use of AI or robotics, discussed below.

Privacy

A number of privacy issues can be raised for both soldiers and civilians through use of surveillance/intelligence and detection/protection technologies. The collection, analysis, use, and sharing of personal data have become increasingly attractive features of AI systems, particularly for marketing and political purposes. Simply hiding or even deleting sensitive variables in the data-collection process often fails to solve the problem, as machine learning methods are capable of probabilistically inferring hidden variables (Campolo et al. 2017). In short, traditional expectations of data privacy and anonymity might no longer be realistic because modern machine learning algorithms are capable of reidentifying data easily and robustly (Osoba and Welser 2017). How might this impact the use of soldiers' personal information, for example, if AI were used to identify patterns and make predictions about their mental-health status, in the course of care during service? What happens to this personal information when a soldier leaves the force? Could the technology be hacked, giving adversaries unauthorized access to sensitive information that could then be manipulated? These issues and others led to development of the European Union's new data privacy regulation, General Data Protection Regulation (GDPR), enacted on May 25, 2018, in order to protect EU citizens from privacy and data breaches. GDPR compliance is becoming the de facto expectation worldwide.
While emerging AI and robotics technologies might prediction accuracy than the type of machine learn-
have associated ethical questions that our framework ing method used (Banko and Brill 2001). The central
can help identify, our research also revealed conver- role that data plays is one of the reasons that suc-


The central role that data plays is one of the reasons that successful companies such as IBM and Google are eager to acquire massive amounts of it. Google's Chief Scientist, Peter Norvig, has been quoted as saying: "We don't have better algorithms than anyone else; we just have more data" (Buchanan and Miller 2017, 13). Military systems likewise have access to massive amounts of data collected over decades.

However, AI software is only as smart as the data used to train it. Human-generated data labeling and algorithms can contain biases — and if the data sample and labeling are biased, then so too will the outputs be tainted. For example, a 2016 ProPublica investigation (Angwin et al. 2016) revealed that the COMPAS program — an algorithm-based risk-assessment tool used to assess criminal risk in the US — was inherently biased against African Americans. Another 2016 study determined that facial-recognition technology used for law-enforcement purposes in the US disproportionately implicated African Americans because they are disproportionately represented in mug-shot databases (Garvie, Bedoya, and Frankle 2016). A more recent analysis of three commercial technologies that identify people of different races and gender — owned by Microsoft, IBM, and Megvii of China — found that when the person in the photo was a white man, the software was correct 99 percent of the time; however, the darker the skin, the more errors arose, especially for darker-skinned women, who were scarcely represented in the system (Buolamwini and Gebru 2018).
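
The kind of disparity those studies report is detected by disaggregating evaluation results by group rather than reporting one aggregate score. The sketch below shows the basic mechanics of such an audit; the predictions and group labels are fabricated placeholders, whereas a real audit would use the system under test and its evaluation data.

```python
# Minimal fairness audit: compare accuracy per demographic group, not just overall.
import numpy as np

rng = np.random.default_rng(2)
groups = rng.choice(["group_a", "group_b"], size=1000, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=1000)
# Simulate a model that is more accurate for the majority group.
correct_prob = np.where(groups == "group_a", 0.95, 0.75)
y_pred = np.where(rng.random(1000) < correct_prob, y_true, 1 - y_true)

print("overall accuracy:", round((y_pred == y_true).mean(), 3))
for g in ["group_a", "group_b"]:
    mask = groups == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"{g}: n={mask.sum()}, accuracy={acc:.3f}")
# A respectable aggregate number can hide the fact that one group bears most errors.
```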
Given that algorithmic bias has been found in private industry, it might also exist within military databases. How then can that data confidently be used for AI training purposes? For decision support in foreign and/or unfamiliar regions of the world that in no way are represented by the data being used to generate options? As has been noted by Immigration, Refugees and Citizenship Canada: "Data is not neutral, nor can it be neutralized. Data will always bear the marks of its history. In using data to train a system to make recommendations or decisions, we must be fully aware of the workings of this history" (IRCC 2018, 33). The aura of objectivity and infallibility that our culture ascribes to algorithms (Bogost 2015) is sadly misplaced and, in the case of military use, could have serious and long-term implications.

Safety and Security

The use of AI and robotic technologies raises questions about soldiers' and civilians' physical and psychological well-being, both domestically and internationally. For example, the March 2018 fatality involving a pedestrian and an autonomous Uber car in Tempe, Arizona, has led to intensified scrutiny of autonomous vehicles on public roads (Coppola, Beene, and Hull 2018). What about the safety of soldiers and civilians in battlefield scenarios where remotely piloted air and ground vehicles — and possibly autonomous vehicles — are used?

The Uber accident also raises questions about the statistics and foresight that have propelled autonomous technology forward. For example, experience garnered from commercial aviation developments has shown there is often an increase in the rate of adverse events when new automated systems are introduced (Airbus 2017). This statistic raises questions about soldier safety and testing: How will outcomes for AI and robotics technologies that are generated in a laboratory or safe and controlled sandbox areas be transferred to real-world scenarios where rough terrain, obstacles, combatants, and debris might complicate testing and place soldiers at risk during trials (Anderson and Matsumura 2015)?

As machine learning systems become more powerful and central to society, so too might the potential harm from hacking become greater. If machine learning algorithms are driving cars, guiding robots on patrol, and piloting systems, then not only are the safety and security stakes higher, but the response speed of individual decisions will need to be faster as well. Hackers who compromise systems will have a much greater capacity to do enormous damage more quickly, while defenders might find it harder to identify the threat or intervene in time (Buchanan and Miller 2017). Finally, as military strategy evolves toward greater human/machine teaming, the ramifications of as-yet-unknown incompatibilities, pressures, and rights and responsibilities might arise. For example, placing greater value on the use of AI-powered swarming technologies in field operations could risk the mental health of remote pilots who might become overloaded (Chung 2018), which could then jeopardize the physical health and safety of civilians on the ground.

Accountability and Responsibility

Emerging AI and robotics technologies are complex. When a complex or autonomous system fails or causes unanticipated and/or undesired effects, it can be very difficult to determine the cause or ascribe responsibility for the failure. While AI is a tool that can offload certain tasks from humans, it does not possess the agency to ultimately take responsibility for recommendations, decision-making, or even its impact on decision-making processes.

Much of the current conversation concerning accountability and AI-enabled systems has taken place at the far end of the machine-autonomy spectrum, where the LAWS debate resides, and has revolved around definitions such as "appropriate human involvement" or "meaningful human control."14 The US Department of Defense already recognizes AI, both commercially derived and military-specific, as a key enabling technology that will become integral to most future systems and platforms as part of a "Third Offset Strategy" that seeks a unique, asymmetric advantage over near-peer adversaries (JASON 2017).


Furthermore, the US Center for Strategic and International Studies recommends that, instead of "LAWS 'never,' our policy should be 'not until they can outperform human/machine intelligence collaboration,' including making ethically acceptable choices about when to 'pull the trigger'" (Carter, Kinnucan, and Elliot 2018, 23).

However, there are day-to-day accountability issues related to AI and robotics that should be addressed long before dystopian scenarios, issues such as malware and the destruction it can cause, technology failure and/or unintended activity, and the use of AI for law-enforcement purposes or social monitoring. For example, robotic police officers debuted in Dubai in 2017 (Cellan-Jones 2017). If these robots were to carry weapons, new questions would arise about how to determine when the use of force is appropriate (Campolo et al. 2017). China is creating a pervasive algorithmic surveillance system designed to produce a "citizen score" (Mitchell and Diamond 2018). How will democratic societies that are building smart cities that incorporate similar surveillance technologies use, analyze, and store collected citizen data? Furthermore, if a soldier has been teamed with an AI-enhanced robot that fails or is hacked, will they be accountable for the actions of the robot? Will they be expected to intervene as best they can and, if they don't, will they be held liable for the consequences? And if so, why would soldiers agree to partnering with these systems if they could be blamed for actions they cannot necessarily predict?

Reliability

It is not clear that reliability — defined as achieving the same performance under diverse conditions, whether in the lab or during field operations — currently exists for a number of existing AI paradigms. Any aura of scientific reliability might in fact be based on algorithmic flaws. Many current AI systems are frequently "brittle" — meaning their narrow applications can generate "dumb results" when activated or projected outside of initial constraints.
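
This brittleness is easy to reproduce in miniature. The sketch below is a deliberately simple curve-fitting toy, not a model of any military system: a flexible model fit only on a narrow range of conditions looks reliable inside that range and fails badly just outside it. The data, model class, and ranges are invented for the illustration.

```python
# Toy "brittleness" demo: good in-range performance, large out-of-range error.
import numpy as np

rng = np.random.default_rng(3)
x_train = rng.uniform(0, 3, 200)                       # narrow training regime
y_train = np.sin(x_train) + rng.normal(0, 0.05, 200)   # true process: sin(x) plus noise

coeffs = np.polyfit(x_train, y_train, deg=5)           # flexible polynomial fit

def rmse(x):
    pred = np.polyval(coeffs, x)
    return float(np.sqrt(np.mean((pred - np.sin(x)) ** 2)))

print("error inside training range  (x in [0, 3]):", round(rmse(np.linspace(0, 3, 100)), 3))
print("error outside training range (x in [6, 9]):", round(rmse(np.linspace(6, 9, 100)), 3))
```

The gap between the two numbers, rather than either number alone, is the practical warning sign: validation that never leaves the training regime cannot detect it.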
AI researchers are grappling with a replication crisis, a term coined close to two decades ago when researchers were facing a similar challenge in the fields of chemistry, social psychology, medicine, and others (Baker 2016). According to Nicolas Rougier, a computational neuroscientist at France's National Institute for Research in Computer Science and Automation in Bordeaux, reproducibility is not guaranteed just because AI applications are built by code (Hutson 2018). In addition, researchers often do not share their source code. While emerging movements have encouraged publishing algorithms, or making them open source, this approach has come under fire (see Brundage et al. [2018]) due to concerns that code might be used by parties with nefarious intentions. The need for software engineering validation and verification is particularly acute for law enforcement and military applications with respect to accountability and liability issues. Employment of AI within future battlespaces could create new and unexpected operational risks, such as potential malfunctions, adversarial interference and/or counterattacks, or unexpected emergent behaviors (Scharre 2016). For example, the recently developed algorithmic capacity to create indistinguishable counterfeits of audio and video demonstrates how quickly new and unexpected threats can arise.15 Elsa B. Kania, who has written extensively on China's aggressive use of AI in the development of its future military might (2017), has noted the country's focus on reliability considerations, quoting a Chinese Academy of Sciences researcher: "What the military cares most about is not fancy features. What they care most is the thing does not screw up amid the heat of a battle" (2018).

Trust

Trust has historically been a social contract, based on our understanding of how people around us think and our experiences of their behaviors toward us and others. AI-enhanced technologies and human-machine interactions can challenge that convention. In the civilian sector, trust seemingly exists everywhere. Consumers invite virtual personal assistants such as Amazon's Alexa and the Internet of Things (IoT) into their personal living spaces; travelers assume that it is safe to journey on airplanes equipped with autopilot; and patients trust in certain types of data-driven medical diagnoses and treatment options. Civilians can even place too much trust in AI and robotics to the point of risking their security and physical safety (Booth et al. 2017).

In the military sector, however, human operators need to understand and trust AI enough to leverage it effectively in a combat role. Too much trust could mean that soldiers do not sufficiently question AI assistance. For example, during the 2003 invasion of Iraq, the downing of a British Tornado aircraft on March 23 (Loeb 2003) was found to be due to "automation bias," an unwarranted and uncritical trust in automation that led to control responsibility being ceded to a machine (Hawley 2011). A subsequent internal army investigation criticized the Patriot community culture for "reacting quickly, engaging early, and trusting the system without question" (Hawley 2007, 4).

Conversely, too little trust — often due to a lack of explainability or transparency — can likewise have tragic results. For example, the crash of Air France Flight 447 on June 1, 2009, killing all 228 people on board, was likely caused by pilot misunderstanding of AI-generated data — a problem of transparency that likely would not have existed in a similar situation on a simpler aircraft (Scharre 2016).


Aware that future fighters need to understand, appropriately trust, and effectively manage an emerging generation of AI-machine partners, in 2017 the Defense Advanced Research Projects Agency (DARPA) initiated the Explainable AI (XAI) program to help humans understand how AI works and why it reached the decision(s) it did (Gunning 2018).
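
As one concrete flavor of what "explaining" a model can mean, the sketch below uses permutation importance: shuffle one input at a time and measure how much the model's accuracy drops. It is only an illustration of the kind of output an explainability effort produces, not a description of the XAI program's methods, and the data and feature names are synthetic placeholders.

```python
# Permutation importance: which inputs does the trained model actually rely on?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # feature_2 is irrelevant by construction
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
baseline = model.score(X_te, y_te)

for j, name in enumerate(["feature_0", "feature_1", "feature_2"]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break this feature's link to the label
    drop = baseline - model.score(X_perm, y_te)
    print(f"{name}: accuracy drop when shuffled = {drop:.3f}")
# Large drops flag the inputs the model depends on; near-zero drops flag inputs it ignores.
```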
Summary and Future Considerations

Technology is developing at a rapid pace, and ethical, social, and legal gaps are widening because of the slower process of policymaking. Certainly, the civilian world has ethical quandaries to face: an often-cited dilemma facing self-driving cars is related to the philosophical "trolley problem" — referring to an autonomous vehicle's choice between killing five people on the tracks versus one person off to the side (Lin 2016). However, the risk to human lives associated with the military use of AI, robotics, and machine and deep learning for defense and security purposes raises ethical concerns to a much higher level.

We propose our framework of ethical considerations and questions as a means to initiate an early and meaningful discussion. We further suggest that part of that baseline discussion will need to address the current lack of clear definitions and common language (National Academy of Sciences 2018). This might be a challenging first step, given that AI as a field of research has numerous subcomponents that coexist with a wide range of interested parties possessing varied and sometimes opposing perspectives and terminologies.

The subsequent development of a professional military code of ethics and policy for AI will need to be a thoughtful and inclusionary process that proceeds carefully toward policy development, regulation, and oversight. Fundamental questions will need to guide the process: Which values should inform the design of ethical statements and standards for AI? How and by whom should these statements and standards be implemented and enforced? How do we measure or assess AI performance? Who will be legally responsible? How should design values be weighed or adjusted in the face of conflict, and by whom? How will behavior by different cultures and political systems — specifically, Russia and China16 — inform and influence values? What impact might less-ethical use of AI technologies have on our own ethical resolve?

Much of the contemporary energy and discussion surrounding AI has revolved around dire apocalyptic warnings associated with a perceived inevitable march toward LAWS (Kerr 2017). It would be more productive if supporters and opponents of autonomous systems could come to an agreement on what capabilities actually exist, and what they will reasonably be in the future. Otherwise, devoting disproportionate attention and resources to an unlikely AI apocalypse could distract policymakers from addressing AI's more immediate challenges cited earlier in this article and, furthermore, discourage research on AI's numerous social and legal impacts. Also pressing are threats posed by the adversarial use of AI technologies by nonstate actors such as hackers, terrorists, black marketeers, and drug cartels as well as by competitive nation states, likely necessitating proactive policies.

There are many reasons why militaries should invest in AI and robotics technologies — greater speed, accuracy (and therefore civilian and soldier safety), efficiency, extended reach, multilevel coordination — but the human consequences of military actions necessitate early consideration of the ethical implications of choices that will be made. When ethical and policy analysts consider the repercussions of machine learning advances, they are in essence trying to peer into the future: they are trying to plan for the world of tomorrow by anticipating issues and acting today. We hope to contribute some clarity to that unknown world of tomorrow by offering this framework of ethical considerations to technology developers, policymakers, decision makers, and other stakeholders so they can identify and broadly consider potential military ethical issues. It will take time to address the technical, institutional, legal, and regulatory elements of a national or international AI code of ethics, but acting early on ethical issues and gathering the endorsement of key players is imperative in order to develop a cohesive and forward-thinking strategy.

Acknowledgements

The authors wish to acknowledge Paul Comeau, Chief Scientist, for his guidance of, and support for, this project and thank Anton Minkov, strategic analyst, Office of the Chief Scientist, for his helpful comments. We also wish to thank the Mitacs Canadian Science Policy Fellowship Program. The opinions expressed in this article are those of the authors and do not represent the official position of the Department of National Defence, the Government of Canada, or any of its other departments and agencies.

Notes

1. Currently, robotics refers to machines that are capable of carrying out a series of actions on behalf of humans, typically operating without possession of any AI.

2. This connection between AI and robotics is called the embodiment problem, and many researchers in AI agree that intelligence and embodiment are tightly coupled issues (Baillie 2016).

3. "Dual-use technologies like artificial intelligence or synthetic biology … have the potential to be used in both good and evil ways. While the technologies themselves are not the subject of treaties and conventions, we are now faced with controlling the proliferation of weapons employing these technologies" (Latiff 2016, 87).

4. Artificial general intelligence is a proportionately small and much more challenging research area within AI that seeks to build machines with general cognitive abilities that can go far beyond performing specific tasks. It is AGI rather than AI that has garnered high public visibility, uncertainty, and fear disproportionate to its size or success (JASON 2017). While the Campaign to Stop Killer Robots is fueled largely by concerns about future LAWS possessing AGI-like cognitive abilities that allow independence from human control, AGI has not been developed and is considered unlikely in the near future (Stone et al. 2016).

5. Ethics refer to the principles that govern a person's behavior or their oversight of an activity — that is, questions about what we should or ought to do — as well as general concerns related to social, political, legal, and cultural impacts and risks (Lin, Bekey, and Abney 2008).

6. Such as the 2018 Artificial Intelligence, Ethics, and Society conference hosted by the Association for the Advancement of Artificial Intelligence and cohosted by the Association for Computing Machinery, www.aies-conference.com.


7. See the Montreal Declaration on the Responsible Development of Artificial Intelligence, produced at the 2017 Forum on the Socially Responsible Development of Artificial Intelligence, nouvelles.umontreal.ca/en/article/2017/11/03/montreal-declaration-for-a-responsible-development-of-artificial-intelligence/.

8. See deepmind.com/applied/deepmind-ethics-society.

9. See www.partnershiponai.org.

10. We have chosen to use a broad definition of ethics because evidence-informed policymaking also has a broad base — considering elements of societal and political pressures, resources, safety and security, and cultural norms.

11. Machine autonomy exists on a spectrum. Our definitions specific to autonomy adopt the following approach. Semiautonomous or human in the loop indicates that a weapons system waits for human command and permission before taking action. Supervised autonomy or human on the loop refers to systems that may track, target, and act defensively, but that are supervised by humans who can monitor and, if necessary, intervene in the weapon's operation, as with, for example, the Phalanx Close-In Weapons System, which is used to defend ships against incoming enemy missiles. Full autonomy or human out of the loop refers to when human input activates a weapon that then selects and engages targets without further operator intervention, for example, the Harpy drone. Full autonomy that is based on AGI refers to LAWS. While it is accurate to say there are a number of weapon systems in existence today that can perform independent actions, these systems act in accordance with a defined rule set based on complex sensor(s) input, and thus, would be better described as automated.

12. Other technologies we have addressed in brief overviews include object recognition, facial recognition, and gait recognition; using AI to monitor mental health; sentiment analysis; AI for dis/misinformation; robotic casualty evacuation; robotic telesurgery; robotics and sensors for IED, explosive, and chemical detection; and smart cities.

13. Please note that our research supports future policy development, which is why we included a "policy implications" category.

14. Canada officially supports the term "appropriate human involvement" (Canada's National Statement 2016), introduced in 2016 as a bridge between the terms "meaningful human control" and "appropriate human judgment."

15. In 2017, Adobe demonstrated a new product that, with just 10 minutes of audio, can exactly replicate a person's voice in limitless artificial audio (Carter, Kinnucan, and Elliot 2018).

16. Notably, China has adjusted its strategic focus from yesterday's informatized ways of warfare to tomorrow's intelligentized warfare, for which AI will be critical (Kania 2017). Russia has already demonstrated its willingness to engage in information warfare (Floridi and Taddeo 2014) during the 2016 US presidential election and its ability to target more than 10,000 Twitter users in the US Defense Department (Calabresi 2017).

References

Allen, G., and Chan, T. 2017. Artificial Intelligence and National Security. A US Intelligence Advanced Research Projects Activity Study. Cambridge, MA: Belfer Center for Science and International Affairs, Harvard Kennedy School.

Airbus. 2017. A Statistical Analysis of Commercial Aviation Accidents 1958–2016. Annual Investigative Report. Blagnac Cedex, France: AIRBUS S.A.S. flightsafety.org/wp-content/uploads/2017/07/Airbus-Commercial-Aviation-Accidents-1958-2016-14Jun17-1.pdf.

Amato, F.; López, A.; Peña-Méndez, E. M.; Vanhara, P.; Hampl, A.; and Havel, J. 2013. Artificial Neural Networks in Medical Diagnosis. Journal of Applied Biomedicine 11(2): 47–58. doi.org/10.2478/v10136-012-0031-x.

Anderson, J. M., and Matsumura, J. M. 2015. Civilian Developments in Autonomous Vehicle Technology and Their Civilian and Military Policy Implications. In Autonomous Systems: Issues for Defence Policymakers, edited by Andrew P. Williams and Paul D. Scharre, 127–48. Technical Report AD10110077. The Hague, Netherlands: NATO Communications and Information Agency.

Angwin, J.; Larson, J.; Mattu, S.; and Kirchner, L. 2016. Machine Bias. ProPublica, May 23, 2016. www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Baillie, J. C. 2016. Why AlphaGo Is Not AI. IEEE Spectrum, March 17, 2016. spectrum.ieee.org/automaton/robotics/artificial-intelligence/why-alphago-is-not-ai.

Baker, M. 2016. 1,500 Scientists Lift the Lid on Reproducibility. Nature 533(7604): 452–54. doi.org/10.1038/533452a.

Banko, M., and Brill, E. 2001. Scaling to Very Very Large Corpora for Natural Language Disambiguation. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, 26–33. San Francisco: Morgan Kaufmann.

Beeby, D. 2018. Liberal Government Looks to Update Fight Against Online Child Porn. CBC News, January 10, 2018. www.cbc.ca/news/politics/child-pornography-online-sexploitation-rcmp-goodale-public-safety-spencer-1.4477563.

Bogost, I. 2015. The Cathedral of Computation. The Atlantic, January 15, 2015. www.theatlantic.com/technology/archive/2015/01/the-cathedral-of-computation/384300.

Booth, S.; Tompkin, J.; Pfister, H.; Waldo, J.; Gajos, K.; and Nagpal, R. 2017. Piggybacking Robots: Human-Robot Overtrust in University Dormitory Security. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 426–34. New York: Association for Computing Machinery. doi.org/10.1145/2909824.3020211.

Brundage, M.; Avin, S.; Clark, J.; Toner, H.; Eckersley, P.; Garfinkel, B.; Dafoe, A.; Scharre, P.; Zeitzoff, T.; Filar, B.; et al. 2018. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Workshop Report. arXiv preprint arXiv:1802.07228 [cs.AI]. Oxford, UK: Future of Humanity Institute, Centre for the Study of Existential Risk, Centre for the Future of Intelligence.

Buchanan, B., and Miller, T. 2017. Machine Learning for Policymakers: What It Is and Why It Matters. The Cyber Security Project. Cambridge, MA: Harvard Kennedy School, Belfer Center for Science and International Affairs.

Buolamwini, J., and Gebru, T. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 81: 1–15.

Calabresi, M. 2017. Inside Russia's Social Media War on America. Time, May 18, 2017. time.com/4783932/inside-russia-social-media-war-america.

Campolo, A.; Sanfilippo, M.; Whittaker, M.; and Crawford, K. 2017. AI Now 2017 Report. Edited by A. Selbst and S. Barocas. New York: New York University, AI Now Institute. assets.contentful.com/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf.

Canada's National Statement. 2016. Presented at the Experts Meeting on Lethal Autonomous Weapons Systems, Convention on Certain Conventional Weapons (CCW). Geneva, Switzerland, April 11–15. www.reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2016/meeting-experts-laws/statements/11April_Canada.pdf.

Carter, W. A.; Kinnucan, E.; and Elliot, J. 2018. A National Machine Intelligence Strategy for the United States: A Report of the CSIS Technology Policy Program. Washington, DC: Center for Strategic and International Studies.

Cellan-Jones, R. 2017. Robot Police Officer Goes on Duty in Dubai. BBC News Technology, May 24, 2017. www.bbc.com/news/technology-40026940.
Chung, T. 2018. Offensive Swarm-Enabled Tactics (OFFSET). Defense Advanced Research Projects Agency, Program Information. Washington, DC: US Department of Defense. www.darpa.mil/program/offensive-swarm-enabled-tactics.
Coppola, G.; Beene, R.; and Hull, D. 2018. Arizona Became Self-Driving Proving Ground Before Uber's Deadly Crash. Bloomberg Technology, March 20, 2018. www.bloomberg.com/news/articles/2018-03-20/arizona-became-self-driving-proving-ground-before-uber-s-deadly-crash.
Couzin, I. D., and Krause, J. 2003. Self-Organization and Collective Behavior in Vertebrates. In Advances in the Study of Animal Behavior, edited by P. Slater, J. Rosenblatt, C. Snowdon, and T. Roper, 1–75. Boston, MA: Academic Press.
Defence Ethics Programme (DEP). 2015. About the Defence Ethics Programme. Department of National Defence. www.forces.gc.ca/en/about-policies-standards-defence-admin-orders-directives-7000/7023-1.page#int.
Deputy Assistant Secretary of the Army for Research and Technology (DASA R&T). 2017. Emerging Science and Technology Trends: 2017–2047. A Synthesis of Leading Forecasts. Unclassified Report. Providence, RI: FutureScout.
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. 2016. Ethically Aligned Design: A Vision for Prioritizing Wellbeing with Artificial Intelligence and Autonomous Systems, Version 1. Piscataway, NJ: Institute for Electrical and Electronics Engineers. standards.ieee.org/develop/indconn/ec/autonomous_systems.html.
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. 2017. A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems, Version 2. Piscataway, NJ: Institute for Electrical and Electronics Engineers. standards.ieee.org/develop/indconn/ec/ead_brochure_v2.pdf.
European Parliament. 2016. European Civil Law Rules in Robotics. Study prepared for the European Parliament's Committee on Legal Affairs. Brussels, Belgium: Policy Department for Citizens' Rights and Constitutional Affairs. www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU%282016%29571379_EN.pdf.
Executive Office of the President. 2016a. The National Artificial Intelligence Research and Development Strategic Plan. Washington, DC: National Science and Technology Council, Networking and Information Technology Research and Development Subcommittee. www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf.
Executive Office of the President. 2016b. Preparing for the Future of Artificial Intelligence. Washington, DC: National Science and Technology Council, Committee on Technology. obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.
Floridi, L., and Taddeo, M., eds. 2014. The Ethics of Information Warfare. Law, Governance and Technology 14. New York: Springer. doi.org/10.1007/978-3-319-04135-3.
Future of Life Institute. 2017. Asilomar AI Principles. Winchester, MA: Future of Life Institute. futureoflife.org/ai-principles.
Gallacher, J. D.; Barash, V.; Howard, P. N.; and Kelly, J. 2017. Junk News on Military Affairs and National Security: Social Media Disinformation Campaigns Against US Military Personnel and Veterans. Data Memo 2017.9. Oxford, UK: University of Oxford, Project on Computational Propaganda. comprop.oii.ox.ac.uk/research/working-papers/vetops/.
Garvie, C.; Bedoya, A.; and Frankle, J. 2016. The Perpetual Line-Up: Unregulated Police Face Recognition in America. Washington, DC: Georgetown Law School, Center on Privacy and Technology. www.law.georgetown.edu/privacy-technology-center/publications/the-perpetual-line-up/.
General Data Protection Regulation (GDPR). 2018. 2018 Reform of EU Data Protection Rules. Brussels, Belgium: European Union, European Commission. ec.europa.eu/commission/priorities/justice-and-fundamental-rights/data-protection/2018-reform-eu-data-protection-rules_en.
Gunning, D. 2018. Explainable Artificial Intelligence (XAI). Program Information. Washington, DC: Defense Advanced Research Projects Agency. www.darpa.mil/program/explainable-artificial-intelligence.
Hawley, J. K. 2007. Looking Back at 20 Years of MANPRINT on Patriot: Observations and Lessons. Technical Report ARL-SR-0158. Adelphi, MD: Army Research Laboratory. www.arl.army.mil/arlreports/2007/ARL-SR-0158.pdf.
Hawley, J. K. 2011. Not by Widgets Alone: The Human Challenge of Technology-Intensive Military Systems. Armed Forces Journal, February 1, 2011. www.armedforcesjournal.com/not-by-widgets-al.
Higgins, F.; Tomlinson, A.; and Martin, K. M. 2009. Threats to the Swarm: Security Considerations for Swarm Robotics. International Journal of Advances in Security 2(2–3): 288–97.
House of Commons. 2016. Robotics and Artificial Intelligence: Fifth Report of Session 2016–17. Together with formal minutes relating to the report. London: UK Parliament, House of Commons, Science and Technology Committee. publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/145.pdf.
Hurst, J. 2017. Robotic Swarms in Offensive Maneuver. Joint Force Quarterly 87(4): 105–11.
Hutson, M. 2018. Artificial Intelligence Faces Reproducibility Crisis. Science 359(6377): 725–26. doi.org/10.1126/science.359.6377.725.
Immigration, Refugees and Citizenship Canada (IRCC). 2018. Digital Transformation at IRCC: Benefits, Risks and Guidelines for the Responsible Use of Emergent Technologies. White Paper. Ottawa, Canada: Government of Canada, Strategic Policy and Planning.
JASON. 2017. Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD. Technical Report JSR-16-Task-003. McLean, Virginia: The MITRE Corporation.
Kania, E. B. 2017. Battlefield Singularity: Artificial Intelligence, Military Revolution, and China's Future Military Power. CCW Report, Ethical Autonomy Project. Washington, DC: Center for a New American Security. www.cnas.org/publications/reports/battlefield-singularity-artificial-intelligence-military-revolution-and-chinas-future-military-power.
Kania, E. B. 2018. Chinese Sub Commanders May Get AI Help for Decision Making. Defense One, February 12, 2018. www.defenseone.com/ideas/2018/02/chinese-sub-commanders-may-get-ai-help-decision-making/145906.
Kerr, I. 2017. Weaponized AI Would Have Deadly, Catastrophic Consequences. Where Will Canada Side? The Globe and Mail, November 6, 2017. www.theglobeandmail.com/opinion/weaponized-ai-would-have-deadly-catastrophic-consequences-where-will-canada-side/article36841036.
Latiff, R. H. 2016. Future War: Preparing for the New Global Battlefield. New York: Alfred A. Knopf.
Lazer, D. M. J.; Baum, M. A.; Benkler, Y.; Berinsky, A. J.; Greenhill, K. M.; Menczer, F.; Metzger, M. J.; Nyhan, B.; Pennycook, G.; Rothschild, D.; Schudson, M.; Sloman, S. A.; Sunstein, C. R.; Thorson, E. A.; Watts, D. J.; and Zittrain, J. L. 2018. The Science of Fake News. Science 359(6380): 1094–96. doi.org/10.1126/science.aao2998.
Lin, P. 2016. Why Ethics Matters for Autonomous Cars. In Autonomous Driving: Technical, Legal and Social Aspects, edited by M. Maurer, J. C. Gerdes, B. Lenz, and H. Winner, 69–85. New York: Springer Nature. doi.org/10.1007/978-3-662-48847-8_4.
Lin, P.; Bekey, G.; and Abney, K. 2008. Autonomous Military Robotics: Risk, Ethics, and Design. Investigative Report. Washington, DC: US Department of the Navy, Office of Naval Research. digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1001&context=phil_fac.
Loeb, V. 2003. Patriot Downs RAF Fighter. The Washington Post, March 24, 2003. www.washingtonpost.com/archive/politics/2003/03/24/patriot-downs-raf-fighter/d231ba70-080a-450b-a12c-ba6e4ee2e10f/?utm_term=.a676939a4b29.
MIT Media Lab. 2017. MIT Media Lab to Participate in $27 Million Initiative on AI Ethics and Governance. MIT News, January 10, 2017. news.mit.edu/2017/mit-media-lab-to-participate-in-ai-ethics-and-governance-initiative-0110.
Mitchell, A., and Diamond, L. 2018. China's Surveillance State Should Scare Everyone. The Atlantic, February 2, 2018. www.theatlantic.com/international/archive/2018/02/china-surveillance/552203.
Mlot, N. J.; Tovey, C. A.; and Hu, D. L. 2011. Fire Ants Self-Assemble into Waterproof Rafts to Survive Floods. Proceedings of the National Academy of Sciences of the United States 108(19): 7669–73. doi.org/10.1073/pnas.1016658108.
National Academy of Sciences. 2018. The Frontiers of Machine Learning: 2017 Raymond and Beverly Sackler U.S.-U.K. Scientific Forum. Washington, DC: The National Academies Press. www.nap.edu/read/25021/chapter/1.
Osoba, O., and Welser, W. 2017. An Intelligence in Our Image: The Risk of Bias and Errors in Artificial Intelligence. Technical Report RR-1744-RC. Santa Monica, CA: RAND Corporation. doi.org/10.7249/RR1744.
Rosenberg, M.; Confessore, N.; and Cadwalladr, C. 2018. How Trump Consultants Exploited the Facebook Data of Millions. The New York Times, March 17, 2018. www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html.
Rubenstein, M.; Cornejo, A.; and Nagpal, R. 2014. Programmable Self-Assembly in a Thousand-Robot Swarm. Science 345(6198): 795–99. doi.org/10.1126/science.1254295.
Sanger, D. E., and Schmitt, E. 2014. Snowden Used Low-Cost Tool to Best N.S.A. The New York Times, February 8, 2014. www.nytimes.com/2014/02/09/us/snowden-used-low-cost-tool-to-best-nsa.html.
Scharre, P. 2014. Robotics on the Battlefield Part II: The Coming Swarm. CCW Report, Ethical Autonomy Project. Washington, DC: Center for a New American Security. www.cnas.org/publications/reports/robotics-on-the-battlefield-part-ii-the-coming-swarm.
Scharre, P. 2016. Autonomous Weapons and Operational Risk. CCW Report, Ethical Autonomy Project. Washington, DC: Center for a New American Security. www.cnas.org/publications/reports/autonomous-weapons-and-operational-risk.
Smalley, D. 2015. LOCUST: Autonomous, Swarming UAVs Fly into the Future. Office of Naval Research News and Media Center, April 14, 2015. www.onr.navy.mil/Media-Center/Press-Releases/2015/LOCUST-low-cost-UAV-swarm-ONR.aspx.
Solis, G. D. 2016. The Law of Armed Conflict: International Humanitarian Law in War. Cambridge, UK: Cambridge University Press. doi.org/10.1017/CBO9781316471760.
Stone, P.; Brooks, R.; Brynjolfsson, E.; Calo, R.; Etzioni, O.; Hager, G.; Hirschberg, J.; Kalyanakrishnan, S.; Kamar, E.; Kraus, S.; Leyton-Brown, K.; Parkes, D.; Press, W.; Saxenian, A.; Shah, J.; Tambe, M.; and Teller, A. 2016. Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence. Report of the 2015–2016 Study Panel. Stanford, CA: Stanford University.
UK Atomic Energy Authority. 2016. Written Testimony Submitted by RACE, UK Atomic Energy Authority (ROB0041). In Robotics and Artificial Intelligence: Fifth Report of Session 2016–17. London, UK: UK Parliament, House of Commons, Science and Technology Committee. data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee/robotics-and-artificial-intelligence/written/32640.htm.
US Department of Defense. 2017. Department of Defense Announces Successful Micro-Drone Demonstration. Press Release NR-008-17. January 9, 2017. Washington, DC: US Department of Defense. www.defense.gov/News/News-Releases/News-Release-View/Article/1044811/department-of-defense-announces-successful-micro-drone-demonstration.
Villani, C. 2018. For a Meaningful Artificial Intelligence. Towards a French and European Strategy. Paris: Villani Mission on Artificial Intelligence. www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf.
Wertheimer, R., ed. 2010. Empowering Our Military Conscience: Transforming Just War Theory and Military Moral Education. Burlington, VT: Ashgate Publishing, Ltd.
Wright, D. 2011. A Framework for the Ethical Impact Assessment of Information Technology. Ethics and Information Technology 13(3): 199–226. doi.org/10.1145/968261.968263.

Sherry Wasilow received her PhD in communication from Carleton University (2017), where her research addressed Canadian military and media relations through an examination of embedded reporting, using Canada's involvement in the Afghanistan War as a case study. She also holds an MA in journalism from the University of Texas at Austin, a graduate diploma in journalism from Concordia University, and a BA in political science from the University of Calgary. Wasilow's professional background includes communicating addiction-science research to nonscientists, health planning and policy, and media and legislative analysis. Wasilow investigated the ethical and policy implications of emerging AI and robotics technologies for the military as a policy analyst with Defence Research and Development Canada in the Department of National Defence.

Joelle B. Thorpe received her PhD in psychology, neuroscience, and behaviour from McMaster University in Hamilton, Ontario, Canada (2013), and has a master of science degree in biology from Queen's University in Kingston, Ontario, Canada (2009). During her time as a clinical research associate from 2014 to 2016, Thorpe developed an interest in ethics and sat as a board member on the Queen's University and Affiliated Teaching Hospitals Health Sciences Research Ethics Board. In 2016 and 2017, Thorpe completed a Mitacs Canadian Science Policy Fellowship with Defence Research and Development Canada in the Department of National Defence, where she investigated the ethical and policy implications of emerging human enhancement technologies in the military. Thorpe is currently employed as a policy analyst in the Office of the Chief Scientist at Defence Research and Development Canada.