

Artificial Intelligence and Corporate Social Responsibility

Susan von Struensee, JD, MPH


Global Research Initiative

Artificial intelligence (AI), autonomous systems, and robotics are digital technologies that impact us all today,1 and
will have momentous impact on the development of humanity and transformation of our society in the very near
future.2 AI is implicated in the fields of computer science, philosophy, law, economics, health, religion, ethics,3 and
more.
This paper discusses the emerging field of AI ethics, how the tech industry is viewed by some as using AI ethics
as window-dressing, or ethics-washing, and how employees have advanced corporate social responsibility and AI
ethics as a check to big tech, with governments and public opinion often following with actions to develop
responsible AI, such as in the aftermath of employee protests at Google, Amazon, Microsoft, Salesforce, and others.
Concepts of AI
The common denominator in all possible approaches to defining AI is that the expression refers to computerized
systems that simulate and/or enhance the cognitive capabilities of humans without constant and ongoing human
input.4
What differentiates various conceptions of AI is the way in which particular systems are initially set up in order to
go about accomplishing their “cognitive” work.5 In most cases, current AI systems require human input for their
creation. Once operational, however, they become capable of performing calculations and solving complex
problems without the need for ongoing human supervision.
Contemporary scholars have presented several different definitions of AI. AI is not a single technology, but rather
a set of techniques and sub-disciplines ranging from areas such as speech recognition and computer vision to
attention and memory.6 For example, MIT Professor Max Tegmark defines AI as “non-biological intelligence.”7
Google’s Ray Kurzweil has described AI as “the art of creating machines that perform functions that require
intelligence when performed by people.”8

1
Kunz, Martina and Ó hÉigeartaigh, Seán, Artificial Intelligence and Robotization. Robin Geiß and Nils Melzer (eds.),
Oxford Handbook on the International Law of Global Security (Oxford University Press, Forthcoming), Available at SSRN:
https://ssrn.com/abstract=3310421
2
Müller, Vincent C. (forthcoming 2019), ‘Ethics of artificial intelligence and robotics’, in Edward N. Zalta (ed.), Stanford
Encyclopedia of Philosophy (Palo Alto: CSLI, Stanford University). https://core.ac.uk/download/pdf/231877273.pdf
3
See e.g. Nils J Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements (Cambridge: Cambridge
University Press, 2010) at 19; Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects
of Artificial Intelligence, 2nd ed (Massachusetts: A K Peters, Ltd, 2004) at 3-29 and Nick Bostrom & Eliezer Yudkowsky,
“The Ethics of Artificial Intelligence” in William Ramsay & Keith Frankish, eds, The Cambridge Handbook of Artificial
Intelligence (Cambridge: Cambridge University Press, 2014) at 316-334 (addressing the many ethical challenges AI poses to
society); Stuart Russel & Peter Norvig, eds, Artificial Intelligence: A Modern Approach (New Jersey: Pearson Education,
Inc, 2010) at 1034-1040 (for further reading on AI and Ethics).
4
Lauri Donahue, Commentary, A Primer on Using Artificial Intelligence in the Legal Profession, HARV. J. L. &
TECH. DIG., (2018) (“’Artificial Intelligence’ is the term used to describe how computers can perform tasks
normally viewed as requiring human intelligence, such as recognizing speech and objects, making decisions
based on data, and translating languages.”).
5
See generally Richard Bellman, An Introduction to Artificial Intelligence: Can Computers Think (San Francisco: Boyd &
Fraser Publishing Company, 1978); Raymond Kurzweil, The Age of Intelligent Machines (Cambridge: The MIT Press,
1990); Elaine Rich, Kevin Knight & Shivashankar Nair, Artificial Intelligence, 3rd ed (New Delhi: Tata McGraw-Hill
Publishing Company Limited, 2009)
6
Urs Gasser & Virgilio A.F. Almeida, A Layered Model for AI Governance, at 2
https://dash.harvard.edu/bitstream/handle/1/34390353/w6gov-18-LATEX.pdf?sequence=1
7
MAX TEGMARK, LIFE 3.0 BEING HUMAN IN THE AGE OF ARTIFICIAL INTELLIGENCE (2017).
8
RAY KURZWEIL, THE AGE OF INTELLIGENT MACHINES 14 (1992).

Technically, AI is mainly powered by machine learning algorithms, i.e., algorithms that change in response to their
own received inputs and consequently improve with experience. 9
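As a minimal illustration of this learning-from-experience loop (a toy sketch in Python; the linear model, data stream, and learning rate are illustrative assumptions, not drawn from the cited sources), consider an algorithm that adjusts its single parameter each time it receives a new input:

```python
# Toy "improvement with experience": estimate the slope of y = 2x from a
# stream of observations, nudging the parameter after every new input.
weight = 0.0          # initial guess for the model parameter
learning_rate = 0.05  # how strongly each observation changes the model

stream = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)] * 25
for x, y in stream:
    error = weight * x - y
    # The algorithm changes in response to its own received inputs:
    weight -= learning_rate * error * x

print(round(weight, 3))  # approaches 2.0 as experience accumulates
```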
Machine learning must be distinguished from deep learning. Deep learning algorithms consist of several non-
linearly connected layers (so-called neural networks) where each unit in the bottom layer takes in external data,
such as pixels of images for the purpose of face recognition systems, and then distributes that information up to
some or all of the units in the next layer. Each unit in that second layer then integrates its inputs from the first
layer, using a simple mathematical rule, and passes the result further up to the units of the next layer. 10
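The layered flow just described can be made concrete with a short sketch (Python with NumPy; the layer sizes, random weights, and the choice of a sigmoid as each unit's "simple mathematical rule" are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    # The simple mathematical rule each unit applies to its integrated input.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Pass external data (e.g., image pixels) up through the layers."""
    activation = x
    for W, b in layers:
        # Each unit integrates its inputs from the layer below (weighted
        # sum plus bias) and distributes the result to the next layer up.
        activation = sigmoid(W @ activation + b)
    return activation

# Illustrative network: 4 inputs -> 5 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(5, 4)), np.zeros(5)),
          (rng.normal(size=(2, 5)), np.zeros(2))]
print(forward(np.array([0.2, 0.5, 0.1, 0.9]), layers))
```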
The input data accordingly passes through numerous layers of statistical data operations to produce the requested
output data. Based on statistical techniques, such output is—as is the case for all AI generated output—
probabilistic in nature.11 In view of the different layers being non-linearly connected with each other in the form
of neural networks, corresponding deep learning based processes become so complex that their decision-making
processes become entirely opaque, and therefore decisions ultimately taken by such systems cannot be
understood by humans anymore (the so-called black box effect). 12 The multi-layered approach allows
corresponding machines to not only follow pre-programmed decisions but also to respond to changes within their
environment. Examples of this technology include the facial recognition systems referred to above and autonomous
cars, which can make real-time decisions about speed and direction by processing sensor-based data without
input from a human user.13
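The probabilistic nature of such output can be illustrated with the softmax function commonly used in a classifier's final layer (a standard technique shown here as an assumption, with made-up scores and labels): raw scores are converted into a probability distribution over candidate answers, so the system reports likelihoods rather than certainties.

```python
import numpy as np

def softmax(scores):
    # Shift by the maximum for numerical stability, then normalize so the
    # outputs sum to 1 and can be read as probabilities.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# Hypothetical raw scores for three candidate identities produced by a
# face-recognition network's last layer.
labels = ["person_a", "person_b", "person_c"]
for label, p in zip(labels, softmax(np.array([4.1, 1.3, 0.2]))):
    print(f"{label}: {p:.2f}")  # person_a comes out near 0.93 -- a
                                # probability, not a certainty
```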
While AI includes different categories, two types of AI are most important in the context of AI regulation. The first
is narrow AI, also known as weak AI; the second is general AI, often called strong AI.14 Strong AI is associated with the claim that an appropriately programmed
computer could be a mind and could think at least as well as humans do.15 Weak AI is associated with programs
that aid, rather than duplicate, human mental activities.16
AGI is the current goal for many AI researchers. 17 For example, OpenAI, a nonprofit organization funding
pioneering research in the field, states on its website “OpenAI’s mission is to ensure that artificial general
intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically
valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also
consider our mission fulfilled if our work aids others to achieve this outcome.” 18 AGI has not been deployed.19
Some believe AGI is a very remote possibility.20

9
Axel Walz & Kay Firth-Butterfield, IMPLEMENTING ETHICS INTO ARTIFICIAL INTELLIGENCE: A
CONTRIBUTION, FROM A LEGAL PERSPECTIVE, TO THE DEVELOPMENT OF AN AI GOVERNANCE REGIME,
18 Duke Law & Technology Review 183 (2019)
https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1352&context=dltr
10
Id.
11
Id.
12
Id.
13
Id.
14
NILS J. NILSSON, THE QUEST FOR ARTIFICIAL INTELLIGENCE 388-389 (2010).
15
Id.
16
Id.
17
Id.
18
About OpenAI, OPENAI https://openai.com/about/
19
For an overview of ongoing AGI projects, see Seth D. Baum, A survey of artificial general intelligence projects for ethics,
risk, and policy, Global Catastrophic Risk Institute, Working Paper 17-1, November 2017,
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741
20
Eldred, Christopher and Zysman, John and Nitzberg, Mark, AI and Domain Knowledge: Implications of the Limits of
Statistical Inference (November 1, 2019). Available at SSRN: https://ssrn.com/abstract=3479479 (“Today's AI is certainly
powerful. But reliant as most of these systems are on machine learning, they have fundamental limitations. And these
limitations have implications for AI’s development, application, and impact in the coming years. Our second technology
briefing will describe machine learning’s fundamental nature as a form of statistical inference, and explore how this
constricts effective use of today’s AI to certain kinds of problem domains. Through the lens of this limitation, it will then
take a position in three debates over how AI will unfold. First, will AI’s diffusion throughout the economy and society be led
by firms with the most advanced AI technical capabilities? Second, will China’s access to unmatchable volumes of data lead
to a dominant position in global markets for AI tools? And third, is the creation of AI that can match or exceed human
intelligence, or Artificial General Intelligence (AGI), a foreseeable circumstance? Or is it a possibility only in the distant
future, if it is possible at all?”)

AGI is also viewed as a threat to humanity21 and a grave cause for alarm, but is not here yet. 22

However, the implementation of narrow AI is here and is already disrupting modern industries worldwide. Artificial
intelligence is increasingly pervasive and essential to everyday life, enabling everything from apps and various smart
devices to autonomous vehicles and medical devices. Yet, along with the promise of an increasingly interconnected and
responsive Internet of Everything, AI is ushering in a host of legal, social, economic, and cultural challenges. The
variety of stakeholders involved – governments, industries, and users around the world – presents complex
opportunities and governance questions for how best to facilitate the safe and equitable development, deployment,
and use of innovative AI applications. Regulators around the world at the state, national, and international levels
are actively considering next steps in regulating this suite of technologies, grappling with how their efforts can
build on and reinforce one another. This state of affairs points to the need for novel approaches to governance,
particularly among leading AI powers, including the United States, European Union, and China. 23
Beyond the basic benefits provided by improving everyday convenience, AI provides a multitude of additional
benefits to developers, governments, and users. The McKinsey Global Institute, for example, has estimated that
the economic value of “applying AI to marketing, sales and supply chains” could add up to some $2.7 trillion by
the 2030s. 24
To harness this potential, more than thirty nations have developed or are developing national AI strategies—many
of them since 2017. 25 Governments perceive AI as a key component of global influence, and for good reason.

21
Haney, Brian, The Perils & Promises of Artificial General Intelligence (October 5, 2018). Brian S. Haney, The Perils &
Promises of Artificial General Intelligence, 45 J. Legis. 151 (2018). Available at SSRN: https://ssrn.com/abstract=3261254
or http://dx.doi.org/10.2139/ssrn.3261254 (“Artificial General Intelligence (“AGI”) - an Artificial Intelligence ("AI") capable
of achieving any goal - is the greatest existential threat humanity faces. Indeed, the questions surrounding the regulation of
AGI are the most important the millennial generation will answer. The capabilities of current AI systems are evolving at
accelerating rates. Yet, legislators and scholars have yet to address or identify critical issues relating to AI regulation. Instead,
legislators and scholars have focused narrowly on short term AI policy. This paper takes a contrarian approach to analyzing
AI regulation with a specific emphasis on deep reinforcement learning systems, a relatively recent breakthrough in AI
technology. Additionally, this paper identifies three important regulatory issues legislators and scholars need to address in the
context of AI development. AI and legal scholars have made the demanding need for an AI regulatory system clear.
However, those arguments focus on the regulation of current AI systems and generally ignore or dismiss the possibility of
AGI. Further, previous scholarship has yet to grapple specifically with the regulation of deep reinforcement learning
systems, which many AI scholars argue provides a direct path to AGI. Ultimately, legislators must consider and address the
perils and promises of AGI when developing and evolving AI regulatory frameworks.”)
22
Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). Global
Catastrophic Risk Institute Working Paper 17-1 , Available at SSRN
https://ssrn.com/abstract=3070741 or http://dx.doi.org/10.2139/ssrn.3070741 (“Artificial general intelligence (AGI) is AI
that can reason across a wide range of domains. It has long been considered the “grand dream” or “holy grail” of AI. It also
poses major issues of ethics, risk, and policy due to its potential to transform society: if AGI is built, it could either help
solve the world’s problems or cause major catastrophe, possibly even human extinction. This paper presents the first-ever
survey of active AGI R&D projects in terms of ethics, risk, and policy. A thorough search identifies 45 projects of diverse
sizes, nationalities, ethical goals, and other attributes. Most projects are either academic or corporate. The academic projects
tend to express goals of advancing knowledge and are less likely to be active on AGI safety issues. The corporate projects
tend to express goals of benefiting humanity and are more likely to be active on safety. Most projects are based in the US,
and almost all are in either the US or a US ally, including all of the larger projects. This geographic concentration could
simplify policymaking, though most projects publish open-source code, enabling contributions from anywhere in the world.
These and other findings of the survey offer an empirical basis for the study of AGI R&D and a guide for policy and other
action.”)
23
Shackelford, Scott J. and Dockery, Rachel, Governing AI (October 30, 2019). Cornell Journal of Law and Policy, 2020,
Available at SSRN: https://ssrn.com/abstract=3478244 or http://dx.doi.org/10.2139/ssrn.3478244
24
See, e.g., The Workplace of the Future, ECONOMIST (Mar. 28, 2018),
https://www.economist.com/leaders/2018/03/28/the-workplace-of-the-future (“In 2017 companies spent around $22bn on
AI-related mergers and acquisitions, about 26 times more than in 2015.”)

According to a 2017 study by PricewaterhouseCoopers (PwC), AI is expected to contribute $15.7 trillion to the
global economy by 2030, constituting a 14 percent increase in global GDP and more than the 2017 output of China
and India combined. 26
One of AI’s unique attributes, and one of the reasons some believe it is launching society into the Fourth Industrial
Revolution,27 is its widespread application to a variety of sectors. AI technologies offer benefits in sectors ranging

25
The countries with a national AI strategy are geographically, politically, and economically diverse. Some of these
plans are focused only on research and development to realize the potential of AI, while others are more
comprehensive plans of governance. For a few examples of the variety of national AI strategies, see e.g., New
Generation of Artificial Intelligence Development Plan, State Council, Translated by Flora Sapio, Weiming Chen,
and Adrian Lo (July 8, 2017), https://flia.org/wp-content/uploads/2017/07/A-New-Generation-of-ArtificialIntelligence-
Development-Plan-1.pdf (China’s AI Strategy); Towards an AI Strategy in Mexico: Harnessing the AI
Revolution, (June 2018), http://go.wizeline.com/rs/571-SRN-279/images/Towards-an-AI-strategy-in-Mexico.pdf
(Mexico’s AI Strategy); National Strategy for Artificial Intelligence, NITI Aayog (June 2018),
https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf (India’s
AI strategy); The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update, Select
Committee on Artificial Intelligence of the Nat’l Science & Tech. Council (June 2019),
https://www.whitehouse.gov/wp-content/uploads/2019/06/National-AI-Research-and-Development- Strategic-Plan2019-
Update-June-2019.pdf (United States’ AI Strategy); National Artificial Intelligence Strategy For Qatar, (Jan.
30, 2019), https://qcai.qcri.org/wp- content/uploads/2019/02/National_AI_Strategy_for_QatarBlueprint_30Jan2019.pdf
(Qatar’s AI Strategy). A common trend is for countries to develop an AI Task Force or
related entity prior to the development and publication of the official strategy. Estonia, Finland, Tunisia, Kenya, Sri
Lanka, Brazil, and several other nations currently have such task forces working on national AI strategies. Thomas
Campbell, Artificial Intelligence: An Overview of State Initiatives, FUTUREGRASP (July 2019),
http://www.unicri.it/in_focus/files/Report_AI-An_Overview_of_State_Initiatives_FutureGrasp_7-23-19.pdf
(describing the AI initiatives – both formal and informal – of 41 different nations).
26
AI to Drive GDP Gains of $15.7 Trillion With Productivity, Personalisation Improvements, PWC (June 27, 2017),
https://press.pwc.com/News-releases/ai-to-drive-gdp-gains-of--15.7-trillion-with-productivity--
personalisationimprovements/s/3cc702e4-9cac-4a17-85b9-71769fba82a6.
27
AI is the latest technological innovation and has been coined by some as the Fourth Industrial Revolution. As explained by
Klaus Schwab, Founder and Executive Chairman of the World Economic Forum: “There are three reasons why today’s
transformations represent not merely a prolongation of the Third Industrial Revolution but rather the arrival of a Fourth and
distinct one: velocity, scope, and systems impact. The speed of current breakthroughs has no historical precedent. When
compared with previous industrial revolutions, the Fourth is evolving at an exponential rather than a linear pace. Moreover,
it is disrupting almost every industry in every country. And the breadth and depth of these changes herald the transformation
of entire systems of production, management, and governance.” Klaus Schwab, The Fourth Industrial Revolution: What It
Means, How to Respond, WORLD ECON. F. (Jan. 14, 2016), https://www.weforum.org/agenda/2016/01/the-fourth-
industrial-revolution-what-it-means-and-how-torespond/.

from health28 and medicine,29 automotive,30 transportation,31 marketing and retail, employment, financial services,
agriculture, etc.
Naveen Rao, Corporate Vice President and General Manager of the Artificial Intelligence Products Group at Intel
Corporation, cited new examples of AI applications in nine industry verticals:32
Consumer: Smart assistants, chatbots, search personalization, augmented reality, robots
Health: Enhanced diagnostics, drug discovery, patient care, research, sensory aids
Finance: Algorithmic trading, fraud detection, research, personal finance, risk mitigation
Retail: Support, experience, marketing, merchandising, loyalty, supply chain, security

28
AI is also estimated to dramatically reduce costs in the healthcare industry, with some estimates that AI will create up to
$269.4 billion in annual savings. AI and Healthcare: A Giant Opportunity, FORBES INSIGHTS (Feb. 11, 2019),
https://www.forbes.com/sites/insights-intelai/2019/02/11/ai-and-healthcare-a-giant-opportunity/#47c7eefe4c68. From
detecting skin cancer with a smartphone, to apps that can already answer an array of medical questions as well as physicians
eighty percent of the time, AI continues to make important strides in the healthcare context. See, e.g., Amanda Capritto, 4
Ways to Check for Skin Cancer with Your Smartphone, CNET (Sept. 16, 2019), https://www.cnet.com/news/how-to-use-
your-smartphone-to-detect-skin-cancer/. See also Christopher McFadden, These 7 AI-Powered Doctor Phone Apps Could
Be the Future of Healthcare, INTERESTING ENGINEERING (Jan. 24, 2019), https://interestingengineering.com/these-7-ai-
powered-doctor-phoneapps-could-be-the-future-of-healthcare. The improvements in health and medicine will be significant,
and that AI will dramatically change the healthcare landscape is certain.
29
Researchers at PwC have identified eight primary initiatives where AI can improve health and medicine: (1) helping
patients practice wellness on a day-to-day basis, (2) early detection of diseases, (3) faster and more accurate diagnoses, (4)
augmenting decision-making by doctors, (5) helping clinicians provide more comprehensive treatment, (6) improving end of
life care, (7) facilitating research, and (8) aiding in healthcare training. What Doctor? Why AI and Robotics Will Define New
Health, PWC (June 2017), https://www.pwc.com/gx/en/industries/healthcare/publications/ai-robotics-new-health/ai-robotics-
new-health.pdf.
30
Both public and private transportation stand to be revolutionized by AI technology. Modern vehicles commonly contain
several AI-assistive features, including brake assist, park assist, and lane-change assist. https://www.aitrends.com/ai-in-
business/advanced-car-safety-systems-using-ai-delivering-for-motorists-today/ The spectrum of autonomous vehicles, in
fact, extends from no automation through partial and conditional to full automation. See Alex Davies, The Wired Guide to
Self-Driving Cars, WIRED (Dec. 13, 2018), https://www.wired.com/story/guide-self-driving-cars/. Companies such as Ford,
General Motors, Tesla, Uber, and Waymo are investing in AI to develop driverless vehicles, with Waymo announcing over
10 million miles on public roads. Id. Ultimately, the promises of autonomous vehicles are numerous: improving traffic in
cities, lowering commutes, improving the efficiency of public transportation systems, and perhaps most importantly,
improving driver safety by reducing the number of accidents. See, e.g., Davola, Antonio, A Model for Tort Liability in a
World of Driverless Cars: Establishing a Framework for the Upcoming Technology (February 1, 2018). Idaho Law Review,
vol. 54, iss. 1, 2018, Available at SSRN: https://ssrn.com/abstract=3120679
31
Yet regulating such vehicles remains a challenge. Shubbak, Mahmood, Self-Driving Cars: Legal, Social, and Ethical
Aspects (December 18, 2013). Available at SSRN: https://ssrn.com/abstract=2931847 (“Road safety statistics show a large
number of casualties due to traffic accidents all over the world, which challenges the researchers and vehicle manufacturers
to develop new technologies to reduce. In the last few years, vehicles that are capable of navigating autonomously (without a
driver) have been developed and are expected to be released to the market in the near future. The testing of these cars is
legally allowed in three American states, and some tests are also being conducted in some European countries. This new
technology has many potential advantages; however the use of such cars in public roads can raise many questions. In this
paper, the legal, social, and ethical questions of self-driving cars are raised, in order to figure out to what extent the current
applicable laws and regulations can answer them. The legal questions are sorted into six groups; reliability, insurance,
regulations, behaviours, liability, and security questions, while the social and ethical questions are grouped into three
categories; privacy, behaviours, and social impact. Moreover, some legal ideas and recommendations for the future laws are
suggested. To achieve a better understanding of the topic, a brief description of the autonomous cars’ technology and
components is also provided.”)
32
See David Bollier, Artificial Intelligence, The Great Disruptor: Coming to Terms with AI-Driven Markets, Governance, and
Life, ASPEN INST., at 4 (2018), http://csreports.aspeninstitute.org/documents/AI2017.pdf
(identifying the three primary drivers of AI growth in today’s market as “dataset sizes, Moore’s law [computing efficiency],
and demand”)

Government: Defense, data insights, safety & security, resident engagement, smarter cities
Energy: Oil & gas exploration, smart grid, operational improvement, conservation
Transport: Autonomous cars, automated trucking, aerospace, shipping, search & rescue
Industrial: Factory automation, predictive maintenance, precision agriculture, field automation
Other: Advertising, education, gaming, professional & IT services, Telco/media, sports
The legal industry is not exempt from this disruption, which affects not only the substance of law but also how discovery
in litigation is performed, including the review of evidentiary documents and materials.34 Indeed, AI-based technology assisted review (“TAR”) is
revolutionizing the discovery process.35 Litigators are now commonly called on by clients to establish e-discovery
relevancy hypotheses and to implement predictive coding models (a type of TAR) for the discovery of electronic
information. 36 In this process, litigators will first identify keywords to search and identify an initial set of documents
to be reviewed.37 Then, document review attorneys review, code, and score the initial set of documents based
on the occurrence of certain keywords in relation to a document’s relevance. 38
While the review takes place, e-discovery attorneys train and model supervised learning algorithms to classify
documents based upon the document review attorneys’ decisions in classifying documents in the initial set of
documents. The algorithm learns what documents are relevant by analyzing and replicating the decisions of real
attorneys. Additionally, predictive-coding models are capable of classifying millions of discoverable documents
based on relevance.39
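A hedged sketch of what this supervised-learning step might look like in code (Python with scikit-learn; the tiny seed set, TF-IDF features, logistic-regression classifier, and document texts are illustrative assumptions rather than a description of any particular TAR product):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed set: documents already reviewed and coded by the document review
# attorneys (1 = relevant to the e-discovery request, 0 = not relevant).
seed_docs = [
    "email discussing the merger negotiation timeline",
    "lunch menu for the office cafeteria",
    "draft term sheet circulated to outside counsel",
    "holiday party RSVP reminder",
]
attorney_labels = [1, 0, 1, 0]

# The model learns relevance by replicating the attorneys' decisions.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_docs, attorney_labels)

# Score the rest of the corpus; high-probability documents are queued for
# production or further human review.
corpus = ["board minutes referencing the merger", "parking garage notice"]
for doc, p in zip(corpus, model.predict_proba(corpus)[:, 1]):
    print(f"{p:.2f}  {doc}")
```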
A second example of an industry that is rapidly evolving due to AI is healthcare. Medical professionals practicing
in modern hospitals now store patient data in electronic databases known as Electronic Health Records (“EHRs”).40
This allows machine learning algorithms to analyze patient healthcare data and drastically improve patient care.41
These data-driven resources not only allow a doctor to know virtually everything about a patient’s medical history
without ever meeting the patient, but also drastically reduce costs associated with healthcare by assisting in

34
Scott D. Cessar, Christopher R. Opalinski, & Brian E. Calla, Controlling Electronic Discovery Costs: Cutting “Big Data”
Down to Size, ECKERT SEAMANS (Mar. 5, 2013), https://www.eckertseamans.com/publications/controlling-electronic-
discovery-costs-cutting-big-data-down-to-size; see also Nicholas Barry, Man Versus Machine Review: The Showdown
Between Hordes of Discovery Lawyers and a Computer-Utilizing Predictive-Coding Technology, 15 VAND. J. ENT. &
TECH. L. 343, 344 (2013).
35
E.g., Ralph Losey, Analysis of Important New Case on Predictive Coding by a Rising New Judicial Star: “Winfield v. City
of New York”, e-Discovery Team, https://e-discoveryteam.com/2017/12/10/analysis-of-important-new-case-on-predictive-
coding-by-a-rising-new-judicial-star-winfield-v-city-of-new-york/
36
KEVIN D. ASHLEY, ARTIFICIAL INTELLIGENCE AND LEGAL ANALYTICS 240–42 (2017)
https://assets.cambridge.org/97811071/71503/frontmatter/9781107171503_frontmatter.pdf See also Ralph Losey, e-
Discovery Team, TAR Course, https://e-discoveryteam.com/tar-course/
37
Ralph Losey, e-Discovery Team, TAR Course, https://e-discoveryteam.com/tar-course/
38
GORDON V. CORMACK & MAURA R. GROSSMAN, EVALUATION OF MACHINE-L EARNING PROTOCOLS
FOR TECHNOLOGY-ASSISTED REVIEW IN ELECTRONIC DISCOVERY 154 (2014),
http://nysbar.com/blogs/EDiscovery/2014/12/16/Linked%20Here.pdf
39
Ralph Losey, e-Discovery Team, TAR, https://e-discoveryteam.com/doc-review/car/
40
Kate Monica, Apple EHR Patient Data Viewer Now in Use at 39 Health Systems, EHRINTELLIGENCE
(Apr. 2, 2018), https://ehrintelligence.com/news/apple-ehr-patient-data-viewer-now-in-use-at-39-health-systems.
41
See XIAOQIAN JIANG ET AL., A PATIENT-DRIVEN ADAPTIVE PREDICATION TECHNIQUE TO IMPROVE
PERSONALIZED RISK ESTIMATION FOR CLINICAL DECISION SUPPORT 137 (2012).
https://academic.oup.com/jamia/article/19/e1/e137/2909173

medical work. 42 For example, in 2016, researchers at Stanford developed AI that was able to diagnose lung
cancer more accurately than human pathologists. 43
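As a hedged illustration of how a machine learning algorithm might analyze EHR-derived data to support care (Python with scikit-learn; the feature set, synthetic records, and random-forest risk model are illustrative assumptions, not the Stanford system described above):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic EHR-style features per patient: [age, systolic_bp, bmi, smoker].
rng = np.random.default_rng(42)
X = rng.normal(loc=[55, 130, 27, 0.3], scale=[12, 15, 4, 0.46], size=(500, 4))
# Synthetic outcome: risk rises with age, blood pressure, and smoking.
risk = 0.03 * X[:, 0] + 0.02 * X[:, 1] + 1.5 * (X[:, 3] > 0.5)
y = (risk + rng.normal(scale=0.5, size=500) > 4.5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Estimated probability that a new patient record indicates high risk,
# which a clinician could use as decision support (not a diagnosis).
patient = np.array([[68, 150, 31, 1]])
print(model.predict_proba(patient)[0, 1])
```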
AI disruption is also occurring in the defense industry. AI is already an essential tool in cybersecurity.44 On March
2, 2017, a counterintelligence report was issued to the White House stating that Russian programmers launched
an AI cyber-attack on the Twitter accounts of over 10,000 employees at the Department of Defense. 45 Additionally,
AI is used on the battlefield in modern warfare settings. 46 For example, the U.S. Phalanx missile-defense system
for naval ships uses AI to detect, track, and attack threats from enemy missiles and aircraft. Terrorist misuse of
commercial AI systems is a serious concern. 47 Terrorists are already using AI systems in drones to deliver
explosives and cause crashes.48
Legal and Ethical Challenges
One approach to mitigating or managing the concerns mentioned above is through the legal system, yet the
judiciary is managing its own tensions with applications of AI. 49 AI is exacerbating many of the same challenges
that new technologies bring to existing bodies of law, including competition law 50 and information security and
privacy law,51 while also creating new depth in issues such as negligence or products’ liability. 52
Children’s toys are not exempt from raising AI legal and ethical challenges, especially with respect to privacy.
Recently, toys have become more interactive than ever before. The emergence of the Internet of Things (IoT)
makes toys smarter and more communicative: they can now interact with children by "listening" to them and
respond accordingly. While there is little doubt that these toys can be highly entertaining for children and even
possess social and educational benefits, they raise concerns. 53

42
See Alvin Rajkomar et al., Scalable and Accurate Deep Learning with Electronic Health Records,
NATURE PARTNER J. (2018), https://www.nature.com/articles/s41746-018-0029-1.pdf.
43
See Lloyd Minor, Crunching the Image Data Using Artificial Intelligence to Look at Biopsies, STANFORD MED. (2017),
https://stanmed.stanford.edu/2017summer/artificial-intelligence-could-help-diagnose-cancer-predict-survival.html.
44
See MILES BRUNDAGE ET AL., THE MALICIOUS USE OF ARTIFICIAL INTELLIGENCE: FORECASTING,
PREVENTION, AND MITIGATION 12 (2018)
https://www.researchgate.net/publication/323302750_The_Malicious_Use_of_Artificial_Intelligence_Forecasting_Preventio
n_and_Mitigation
45
See Massimo Calbresi, Inside Russia’s Social Media War on America, TIME (May 18, 2017),
http://time.com/4783932/inside-russia-social-media-war-america/
46
See United States Navy Fact File: MK 15 – Phalanx Close-In Weapons System (CIWS), U.S. DEP’T
NAVY (last visited May 13, 2019), http://www.navy.mil/navydata/fact_display.asp?cid=2100&tid=487&ct=2.
47
Brundage M, Avin S, Clark J, Toner H, Eckersley P, Garfinkel B, Dafoe A, Scharre P, Zeitzoff T, Filar B, Anderson H. The
malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228. 2018 Feb
20. https://maliciousaireport.com/
48
Id.
49
Nicole Chauriye, Wearable Devices as Admissible Evidence: Technology is Killing Our Opportunity to Lie,
24 Cath. U. J. L. & Tech (2016).
Available at: https://scholarship.law.edu/jlt/vol24/iss2/9
50
Quaid, Jennifer, AI and Competition Law (November 2, 2020). in Florian Martin-Bariteau & Teresa Scassa, eds., Artificial
Intelligence and the Law in Canada (Toronto: LexisNexis Canada, 2021), Available at SSRN:
https://ssrn.com/abstract=3733948
51
Clifford, Damian and Richardson, Megan and Witzleb, Normann, Artificial Intelligence and Sensitive Inferences: New
Challenges for Data Protection Laws (December 14, 2020). Mark Findlay, Jolyon Ford, Josephine Seoh and Dilan
Thampapillai (eds.), Regulatory Insights on Artificial Intelligence: Research for Policy (Edward Elgar, 2021), ANU College
of Law Research Paper No. 21.1, Available at SSRN: https://ssrn.com/abstract=3754037 or
http://dx.doi.org/10.2139/ssrn.3754037
52
See, e.g., Jeffrey K. Gurney, Sue My Car Not Me: Products Liability and Accidents Involving Autonomous Vehicles, 2013
U. ILL. J.L. TECH. & POL’Y 247; F. Patrick Hubbard, “Sophisticated Robots”: Balancing Liability, Regulation, and
Innovation, 66 FLA. L. REV. 1803, 1803 (2014); Abbott, Ryan Benjamin, The Reasonable Computer: Disrupting the
Paradigm of Tort Liability (November 29, 2016). George Washington Law Review, Vol. 86, No. 1, 2018, Available at SSRN:
https://ssrn.com/abstract=2877380 or http://dx.doi.org/10.2139/ssrn.2877380
53
Haber, Eldar, Toying with Privacy: Regulating the Internet of Toys (December 8, 2018). 80 Ohio State Law Journal 399
(2019), Available at SSRN: https://ssrn.com/abstract=3298054

Beyond the fact that such toys could be hacked or simply misused by unauthorized parties, datafication of children
by toy conglomerates, various interested parties, and perhaps even their parents could be highly troubling. It could
profoundly threaten children’s right to privacy, as it subjects them to, and normalizes, ubiquitous surveillance and
datafication of their personal information, requests, and any other information they divulge.54
On February 17, 2017, the German Federal Network Agency banned a doll named Cayla55 from being sold and
ordered the destruction of all devices which had already been sold.56 The legal basis of this decision was § 148
(1) no. 2, § 90 of the German Telecommunication Act. The rationale was that because of the doll’s connectivity to
its manufacturer (required because the doll was AI-enabled), the doll was effectively a spy on the child, recording
everything the child said to the device, including their most precious secrets.57
Likewise, the agency was concerned that the devices were hackable, exposing children to threats such as
pedophilia or ideological communications. Since then, the regulator has used the law to ban similar devices as
well as smart watches.58 This strict approach, adopted to protect children, one of the most vulnerable demographics,
has a further legal basis in Art. 16 (1) of the Convention on the Rights of the Child. According to this, “no child
shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence.”59
AI-based devices interact autonomously with children and convey their own cultural values; this impacts the
rights and duties of parents to provide, in a manner consistent with the evolving capacities of the child, appropriate
direction and guidance in the child’s freedom of thought, including aspects concerning cultural diversity.60
With the emergence of the Internet of Things, ordinary objects have become connected to the internet, and children
can now be constantly datafied during their daily routines, with or without their knowledge. IoT devices can collect
and retain mass amounts of data and metadata on children and share them with various parties—extracting
data on where children are and what they are doing or saying, and perhaps even capturing imagery and videos of them.
While Congress previously responded to rather similar privacy threats that emerged from the internet with the
enactment of the Children’s Online Privacy Protection Act61 (“COPPA”), this regulatory framework only applies to
a limited set of IoT devices—excluding those which are neither directed towards children nor knowingly collect
personal information from them. COPPA is ill-suited to properly safeguard children from the privacy risks that IoT

54
Id.
55
MY FRIEND CAYLA, https://www.genesis-toys.com/my-friend-cayla
56
Press Release, Bundesnetzagentur Removes Children’s Doll “Cayla” From the Market, Bundesnetzagentur [BNetzA]
[German Federal Network Agency], (Feb. 2, 2017).
57
Kay Firth-Butterfield, Generation AI: What happens when your child's friend is an AI toy that talks back?, WORLD
ECONOMIC FORUM (May 20, 2018) https://www.weforum.org/agenda/2018/05/generation-ai-whathappens-
when-your-childs-invisible-friend-is-an-ai-toy-that-talks-back/.
58
See Dakshayani Shankar, Germany Bans Talking Doll Cayla over Security, Hacking Fears, NBC NEWS (Feb. 18, 2017,
6:43 PM), http://www.nbcnews.com/news/world/germany-bans-talking-doll-cayla-over-security-hacking-fears-n722816;
Jane Wakefield, Germany Bans Children’s Smartwatches, BBC NEWS (Nov. 17, 2017),
http://www.bbc.com/news/technology-42030109.
59
United Nations Convention on the Rights of the Child, art. 16 (1), Nov. 20, 1989
https://www.ohchr.org/en/professionalinterest/pages/crc.aspx
60
See e.g. Norwegian Consumer Council (fn 179) referring to the connected doll Cayla (“Norwegian version of the apps has
banned the Norwegian words for “homosexual”, “bisexual”, “lesbian”, “atheism”, and “LGBT” […]” “Other censored words
include ‘menstruation’, ‘scientology-member’, ‘violence’, ‘abortion’, ‘religion’, and ‘incest’ ”); See Esther Keymolen and
Simone Van der Hof, ‘Can I still trust you, my dear doll? A philosophical and legal exploration of smart toys and trust’
(2019) 4(2) Journal of Cyber Policy 143-159
https://www.tandfonline.com/doi/pdf/10.1080/23738871.2019.1586970?needAccess=true (“Smart toys come in different
forms but they have one thing in common. The development of these toys is not just a feature of ongoing technological
developments; their emergence also reflects an increasing commercialisation of children’s everyday lives”); See Valerie
Steeves, ‘A dialogic analysis of Hello Barbie’s conversations with children’ (2020) 7(1) Big Data & Society,
https://journals.sagepub.com/doi/pdf/10.1177/2053951720919151
61
See Children’s Online Privacy Protection Act (COPPA), Pub. L. No. 106–70, 112 Stat. 2681 (1998) (codified as amended
at 15 U.S.C. §§ 6501–06 (2018))

entails, as it does not govern many of the IoT devices that children are exposed to.62 The “always-on” era, in which many
IoT devices constantly collect data from users, regardless of their age, exposes us all to great privacy risks.
The scene of robotics, however, is changing rapidly. There are numerous instances where a robot has been
deployed to work in environments that are not particularly safe for humans. Examples include locating
shipwrecks, undersea monitoring, bomb disposal, and volcano and space exploration. The existing army of industrial
robots is being supplemented by a variety of (personal) service robots (non-industrial robots). These range from
care robots and companion robots to assist and keep the elderly company, all the way down to simple cleaning
robots such as the Roomba. Due to this extension of functions and the operating environments of service robots,
safety needs have changed and have become more complicated. The environments in which service robots operate
are far less structured than those of industrial robots, and this will have to be reflected in health and safety
standards. Instead of staying away from humans, service robots are meant to be near and even touch humans.
They also need the capability to handle unexpected events and human movement, rather than being able to ignore
humans because humans will (be forced to) stay at a safe distance. Regulation relating to non-industrial robots
lags behind regulation for industrial robots, although various specific areas and types of robots are covered by
regulation at the EU level, such as the General Product Safety Directive 2001/95/EC 63 and the Consumer Protection
Directive 1999/44/EC, 64as well as at Member State level. An ISO standard for non-medical care robots is under
development (ISO 13482).
Care Robots
The use of robots in health care for humans is currently at the level of concept studies in real environments,65 but
it may become a usable technology in a few years, and has raised a number of concerns for a dystopian future of
dehumanized care. Current systems include robots that support human carers/caregivers (e.g., in lifting patients,
or transporting material), robots that enable patients to do certain things by themselves (e.g., eat with a robotic
arm), but also robots that are given to patients as company and comfort (e.g., the “Paro” robot seal). 66

There is growing literature on how humans get attached to inanimate objects, such as cars and stuffed toys. Kate
Darling reports a military exercise where a six-legged robot was being used to spot land mines. Each time the robot
stepped on a mine, one of its legs would get blown off, but the robot would continue with its remaining legs. This so
distressed the commander in charge that he is said to have termed the exercise “inhumane” and canceled
it.67 There are several ethical issues that are being studied or discussed with respect to the use of these social
robots. One is the “real versus fake” problem. Users of these robots could, over time, lose the ability to distinguish
between what is authentic and what is not.

Another issue is the ethicality of replacing humans with machines in areas such as elder care, child care, etc. More
studies are required on the long-term emotional effects of these replacements on humans. Another issue
is manipulation of people through these social robots. If users get attached to their social robots, it becomes
eminently possible for companies and other nefarious organizations to use the robots as a means to control their
human companions (e.g. through advertisements, exploitative maintenance pricing, etc.).

While the development of intelligent assistants may be convenient and help to manage administrative and other
daily tasks, in certain respects the rise of intelligence and autonomy in machines and software tools may decrease
the intelligence and autonomy of the human user. Digital dementia is a phenomenon described by psychologists
as a possible consequence of overusing digital technology, which could result in the deterioration or breakdown of

62
Haber, Eldar, The Internet of Children: Protecting Children’s Privacy in A Hyper-Connected World (November 21, 2020).
2020 U. Ill. L. Rev. 1209 (2020)., Available at SSRN: https://ssrn.com/abstract=3734842
63
https://ec.europa.eu/info/general-product-safety-directive_en
64
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A31999L0044
65
Gary Chan Kok Yew, Trust in and Ethical Design of Carebots: The Case for Ethics of Care, Int J Soc Robot. 2020 May
23 : 1–17. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7245509/
66
Ethics of Artificial Intelligence and Robotics, supra section 2.5.1(a).
67
Darling, Kate, Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent
Behavior Towards Robotic Objects 5 (April 23, 2012). Robot Law, Calo, Froomkin, Kerr eds., Edward Elgar 2016, We
Robot Conference 2012, University of Miami , Available at SSRN: https://ssrn.com/abstract=2044797

cognitive abilities.68 Overuse of digital technology may also have an impact on personal autonomy, depending on
the degree of digital assistance that is increasingly relied upon to complete even trivial tasks, such as watering
indoor plants. 69 As a consequence of the growing reliance on digital assistance, basic human capabilities could
be lost.

Sex Robots

Another important and much discussed emerging issue pertains to social robots being used as objects of sex. There
are obviously numerous implications that arise out of this, including the legalities of social robots as sexual
objects, gender issues, child pornography, and even sexual healing and therapy.70

Some of these issues are discussed in a recent report issued by the Foundation for Responsible Robotics. 71

Technologically, robots have been identified as useful resources in all aspects of human endeavor due to the
numerous advantages associated with such systems. They reduce labor hardship in several manufacturing processes,
help in dirty operations and missions dangerous to human life, ease several domestic activities, and generally improve
efficiency in many engineering processes. However, several ethical and legal issues have been identified in the
literature with respect to applications of such systems in various aspects of human activities. One such area that
has received less attention in the literature is the application of robots in sexual activities. The birth of sex robots has
introduced several new dimensions into the concept of sex, which implicitly has several ethical and legal implications.72

There is a market for sex and companionship with robots. 73 Humans have long had deep emotional attachments
to objects, so perhaps companionship with a predictable android is attractive, especially to people who struggle
with actual humans.

Some74 argue that robots can provide true friendship, and that this is thus a valuable goal. In these discussions there is
an issue of deception, since a robot cannot (at present) mean what it says, or have feelings for a human. It is well

68
A. Walz, Kay Firth-Butterfield, IMPLEMENTING ETHICS INTO ARTIFICIAL INTELLIGENCE: A
CONTRIBUTION, FROM A LEGAL PERSPECTIVE, TO THE DEVELOPMENT OF AN AI GOVERNANCE REGIME,
n 62 citing MANFRED SPITZER, DIGITALE DEMENZ (2012); Larry Dossey, FOMO,Digital Dementia, and Our
Dangerous Experiment, EXPLORE,Mar./Apr. 2014, at 69, 70–71; Markus Appel & Costanze Schreiner, Digitale Demenz?
Mythen und wissenschaftliche Befundlage zur Auswirkung von Internetnutzung, 65 Psychologische Rundschau 1, 8–10
(2014).
69
For another approach to the evaluation of potential impacts of AI on employment and jobs, see IEEE GLOB.
INITIATIVE ON ETHICS OF AUTONOMOUS & INTELLIGENT SYS., ETHICALLY ALIGNED DESIGN – A
VISION FOR PRIORITIZING HUMAN WELL-BEING WITH AUTONOMOUS AND INTELLIGENT SYSTEMS,
136 (2017), http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf
70
Subramanian, Ramesh (2017) "Emergent AI, Social Robots and the Law: Security, Privacy and Policy Issues," Journal of
International Technology and Information Management: Vol. 26 : Iss. 3 , Article 4. Available at:
http://scholarworks.lib.csusb.edu/jitim/vol26/iss3/4
71
Sharkey, N., van Wynsberghe, A., Robbins, S., & Hancock, E. (2017). Our Sexual Future with Robots. The Hague,
Netherlands. Retrieved from https://responsible-roboticsmyxf6pn3xr.netdna-ssl.com/wp-content/uploads/2017/
72
Amuda, Yusuff Jelili and Tijani, Ismaila B., Ethical and Legal Implications of Sex Robot: An Islamic Perspective
(February 19, 2012). OIDA International Journal of Sustainable Development, Vol. 03, No. 06, pp.19-28, 2012,
Available at SSRN: https://ssrn.com/abstract=2008011
73
Matt Dougherty, Controversy surrounds 'robot sex brothel' set to open in Houston, KHOU 11,
https://www.khou.com/article/news/local/controversy-surrounds-robot-sex-brothel-set-to-open-in-houston/285-
596991498
74
John Danaher, Regulating Child Sex Robots: Restriction or Experimentation?, Medical Law Review, Volume 27, Issue
4, Autumn 2019, Pages 553–575, https://doi.org/10.1093/medlaw/fwz002 (“In July 2014, the roboticist Ronald Arkin
suggested that child sex robots could be used to treat those with paedophilic predilections in the same way that
methadone is used to treat heroin addicts. Taking this onboard, it would seem that there is reason to experiment with the

known that humans are prone to attribute feelings and thoughts to entities that behave as if they had sentience,
even to clearly inanimate objects that show no behavior at all.

Finally, there are concerns that have often accompanied matters of sex, namely consent,75 aesthetic concerns, and
the worry that humans may be “corrupted” by certain experiences. Old fashioned though this may seem, human
behavior is influenced by experience, and it is likely that pornography or sex robots support the perception of other
humans as mere objects of desire, or even recipients of abuse, and thus ruin a deeper sexual and erotic experience.
In this vein, the “Campaign Against Sex Robots” (CASR)76 argues that these devices normalize robots as
substitutes for relationships with women.77

Artificial intelligence, autonomous systems, and robotics are digital technologies that impact us all
today,78 and will have momentous impact on the development of humanity and the transformation of our society in
the very near future.79 Despite many benefits, AI presents risks that need to be managed. Minimizing these risks
will require finding ways to protect the ethical values defined by fundamental rights and basic legal principles, thereby
preserving a human-centric society. This working paper explores the many concerns regarding AI and autonomous
systems, including job losses, increasing loss of humanity in social relationships, loss of privacy and personal
autonomy, potential information biases and error proneness, and susceptibility to manipulation.80
AI ethics is a field that has emerged as a response to the growing concern regarding the impact of artificial
intelligence. It can be read as a subset of the wider field of digital ethics, which addresses concerns raised by the
development and deployment of new digital technologies, such as AI, big data analytics and blockchain
technologies.

The increasing use of AI will have increasingly disruptive, even revolutionary, impacts on society. Despite many benefits,
AI involves serious risks that need to be managed. Minimizing these risks requires weighing the respective benefits
while at the same time protecting the ethical values defined by fundamental rights and basic constitutional
principles, thereby preserving a human-centric society.

This straightforward definition of ethics put forth by Walz and Firth-Butterfield is easiest to work with when
discussing ethical applications and design of AI: “Ethics is commonly referred to as the study of morality. Morality...
is a system of rules and values for guiding human conduct, as well as principles for evaluating those rules.
Consequently, ethical behavior does not necessarily mean “good” behavior. Ethical behavior instead indicates

75
L. Frank, Sven Nyholm, Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible,
and desirable? (2017) https://www.semanticscholar.org/paper/Robot-sex-and-consent%3A-Is-consent-to-sex-between-
a-Frank-Nyholm/be0ef535ec3b1a50c502d4d347397342f0eae933
76
https://campaignagainstsexrobots.org/
77
The CASR was founded in 2015 to warn against the dangers of normalising relationships with machines and reinforcing
female dehumanisation. The ideas for the campaign were outlined in the paper ‘The asymmetrical relationship:
parallels between prostitution and the development of sex robots’ published in ACM SIGCAS. Our first campaign
image featured a broken mannequin and represents dissociation from relationship and the instrumentality of women’s
bodies, reduced to parts. The campaign has received international attention and will continue to defend the dignity and
humanity of women and girls. It changed its name in 2021 to the Campaign Against Porn Bots.
78
Kunz, Martina and Ó hÉigeartaigh, Seán, Artificial Intelligence and Robotization. Robin Geiß and Nils Melzer (eds.),
Oxford Handbook on the International Law of Global Security (Oxford University Press, Forthcoming), Available at SSRN:
https://ssrn.com/abstract=3310421
79
Ethics of Artificial Intelligence and Robotics, available at https://plato.stanford.edu/entries/ethics-ai/.
80
Shackelford, Scott J. and Dockery, Rachel, Governing AI (October 30, 2019). Cornell Journal of Law and Policy, 2020,
Available at SSRN: https://ssrn.com/abstract=3478244 or http://dx.doi.org/10.2139/ssrn.3478244

compliance with specific values. Such values can be commonly accepted as being part of human nature (e.g., the
protection of human life, freedom, and human dignity) or as a moral expectation characterizing beliefs and
convictions of specific groups of people (e.g., religious rules). Moral expectations may also be of individual nature
(e.g., an entrepreneur’s expectation that employees accept a company’s specific code of conduct).” This broad
definition is used here because…the benefit of this neutral definition of ethics is that it enables one to address
the issue of ethical diversity from a regulatory and policymaking perspective.81

Walz and Firth-Butterfield also advocate for the need to conduct in-depth risk-benefit-assessments with regard to
the use of AI and autonomous systems. They point out major concerns in relation to AI and autonomous
systems, such as likely job losses, causation of damages, lack of transparency, increasing loss of humanity in social
relationships, loss of privacy and personal autonomy, potential information biases and error proneness, and the
susceptibility of AI and autonomous systems to manipulation. Their critical analysis aims to raise awareness on
the side of policy-makers to sufficiently address these concerns and design an appropriate AI governance regime
with a focus on the preservation of a human-centric society. They emphasize that raising awareness for eventual
risks and concerns should, however, not be misunderstood as an anti-innovative approach. Rather, it is necessary
to consider risks and concerns adequately to ensure that new technologies such as AI and autonomous systems
are constructed and operate in a way which is acceptable for individual users and society as a whole.82

To this end, they developed a graded governance model for the implementation of ethical concerns in AI systems
reflecting the often-misjudged fact that there is a variety of policy-making instruments which policy-makers can
utilize. They point out that ethical concerns do not only need to be addressed by legislation or international
conventions. Depending on the ethical concern at hand, alternative regulatory measures such as technical
standardization or certification may even be preferable. To illustrate the practical impact of this graded governance
model, two concrete global approaches are presented herein, which regulators, governments, and industry could
refer to as a basis for regulating ethical concerns associated with the use of AI and autonomous systems.83


Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of
various ethical issues. Artificial Intelligence adds a new dimension to these issues. Systems that use artificial
intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can
perform, their potential impact on the world and the diminishing ability of humans to understand, predict and
control their functioning. This generates several ethical and legal difficulties for all of us.

Thus the rapid growth of AI systems has implications for a wide variety of fields. It can prove to be a boon to
disparate fields such as healthcare, education, global logistics, and transportation, to name a few. However, these
systems will also bring forth far-reaching changes in employment, the economy, and security. As AI systems gain
acceptance and become more commonplace, certain critical questions arise: What are the legal and
security ramifications of the use of these new technologies? Who can use them, and under what circumstances?
What is the safety of these systems? Should their commercialization be regulated? What are the privacy issues
associated with the use of these technologies? What are the ethical considerations? Who has responsibility for the

81
Axel Walz & Kay Firth-Butterfield, IMPLEMENTING ETHICS INTO ARTIFICIAL INTELLIGENCE: A
CONTRIBUTION, FROM A LEGAL PERSPECTIVE, TO THE DEVELOPMENT OF AN AI GOVERNANCE REGIME,
18 Duke Law & Technology Review 184 (2019)
https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1352&context=dltr
82
Id.
83
Id.

large amounts of data that is collected and manipulated by these systems? Could these systems fail? What is the
recourse if there is a system failure? These questions are but a small subset of possible questions in this key
emerging field.

As the regulation of AI is still in its infancy, guidelines, ethics codes, and actions by and statements from
governments and their agencies address various legal issues, including data protection and privacy, transparency,
human oversight, surveillance, public administration and services, autonomous vehicles and lethal autonomous
weapons systems.85
The notion of “artificial intelligence” is understood broadly as any kind of artificial computational system that
shows intelligent behaviour, i.e., complex behaviour that is conducive to reaching goals. The project of AI is to
create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking,
intelligent beings. The main purposes of an artificially intelligent agent generally involve sensing, modelling,
planning and action, but current AI applications also include perception, text analysis, natural language processing
(NLP), logical reasoning, game-playing, decision support systems, data analytics, predictive analytics, as well as
autonomous vehicles and other forms of robotics. 86 AI may involve any number of computational techniques to
achieve these aims, be that classical symbol-manipulating AI, inspired by natural cognition, or machine learning
via neural networks.
Neural networks are computing systems with interconnected nodes that work much like neurons in the human
brain. Using algorithms, they can recognize hidden patterns and correlations in raw data, cluster and classify it,
and – over time – continuously learn and improve. In mathematics and computer science, an algorithm is a finite
sequence of well-defined, computer-implementable instructions, typically to solve a class of specific problems or
to perform a computation. Algorithms are rules used as specifications for performing calculations, data processing,
automated reasoning, and other tasks.
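
As a minimal illustration of this textbook definition, consider Euclid’s method for the greatest common divisor, sketched below in Python (a language used here purely for readability; nothing in the surrounding discussion prescribes one). It is exactly what the definition describes: a finite sequence of well-defined, computer-implementable instructions that solves a whole class of problems.

    def gcd(a: int, b: int) -> int:
        # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
        # The remainder shrinks on every pass, so the sequence of steps is finite.
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(48, 18))  # prints 6

Each pass of the loop is a well-defined, mechanical step, and the same recipe works for any pair of integers, which is what makes it an algorithm rather than a one-off calculation.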
Neural networks can learn and model the relationships between inputs and outputs that are nonlinear and complex;
make generalizations and inferences; reveal hidden relationships, patterns and predictions; and model highly
volatile data (such as financial time series data) and variances needed to predict rare events (such as fraud
detection). As a result, neural networks can improve decision processes in areas such as:
• Credit card and Medicare fraud detection.
• Optimization of logistics for transportation networks.
• Character and voice recognition, also known as natural language processing.
• Medical and disease diagnosis.
• Targeted marketing.
• Financial predictions for stock prices, currency, options, futures, bankruptcy and bond ratings.
• Robotic control systems.
• Electrical load and energy demand forecasting.
• Process and quality control.
• Chemical compound identification.
• Ecosystem evaluation.

85
Regulation of Artificial Intelligence in Selected Jurisdictions, The Law Library of Congress, Global Legal Research
Directorate, (January 2019). https://www.loc.gov/law/help/artificial-intelligence/regulation-artificial-intelligence.pdf
86
Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram
Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie
Shah, Milind Tambe, and Astro Teller, 2016, “Artificial Intelligence and Life in 2030”, One Hundred Year Study on
Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016.
https://ai100.stanford.edu/sites/g/files/sbiybj9861/f/ai100report10032016fnl_singles.pdf

• Computer vision to interpret raw photos and videos (for example, in medical imaging and robotics and
facial recognition).
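
As a minimal, hedged illustration of how such a network “learns and improves over time,” the following self-contained Python sketch (assuming only the numpy library; the toy task, architecture, and hyperparameters are illustrative and not drawn from any system listed above) trains a tiny network of interconnected nodes to model XOR, a simple nonlinear relationship that no single linear rule can capture:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: XOR, a nonlinear relationship between inputs and outputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Weights act as connection strengths between neuron-like nodes:
    # 2 inputs -> 4 hidden nodes -> 1 output.
    W1 = rng.normal(size=(2, 4))
    W2 = rng.normal(size=(4, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    learning_rate = 1.0
    for step in range(5000):
        # Forward pass: signals flow through the interconnected nodes.
        hidden = sigmoid(X @ W1)
        output = sigmoid(hidden @ W2)

        # Backward pass: nudge the connection weights to reduce the error;
        # this gradual adjustment is the "continuous learning" in the text.
        error = output - y
        grad_output = error * output * (1 - output)
        grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
        W2 -= learning_rate * hidden.T @ grad_output
        W1 -= learning_rate * X.T @ grad_hidden

    print(np.round(output.ravel(), 2))  # approaches [0, 1, 1, 0]

The fraud-detection, forecasting, and computer-vision systems listed above differ from this sketch mainly in scale (far more nodes, layers, and data), not in kind: each adjusts connection weights so that the mapping from inputs to outputs improves with experience.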
A great deal of attention is being paid to strong AI precisely because of advancements in “machine learning.” If
machines can be programmed to learn new information without ongoing human intervention, then, if left
unchecked, they can theoretically evolve beyond their originally intended uses. New applications of artificial
intelligence technology are coming to the public’s attention on a weekly basis. Algorithms that automate decision-
making are being built into a wide variety of machines in almost every sector of commercial interest. 87 Among
these applications are self-driving cars, 88 investment and finance,89 insurance services, 90 medical research and
diagnosis,91 civilian and military drones, 92 tools to conduct complex legal research, 93 home services,94 programs

87
Gaon, Aviv and Stedman, Ian, A Call to Action: Moving Forward with the Governance of Artificial Intelligence
in Canada (February 4, 2019). Alberta Law Review, Vol. 56, No. 4, 2019, Available at SSRN:
https://ssrn.com/abstract=3328588
88
See generally Peter Stone et al, Artificial Intelligence and Life in 2030 - One Hundred Year Study on Artificial
Intelligence: Report of the 2015-2016 Study Panel (Stanford: Stanford University Press, 2016),
http://ai100.stanford.edu/2016-report [100 Year Study]; Noah Zon & Sara Ditta, Robot, take the wheel:
Public policy for automated vehicles (Toronto: The Mowat Centre at the University of Toronto, 2016); U.S.
Department of Transportation and the National Highway Traffic Safety Administration, Automated Driving
Systems 2.0 A Vision for Safety (September 2017),
https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_ tag.pdf.
89
See e.g. Matt Turner “Machine learning is now used in Wall Street dealmaking, and bankers should probably
be worried” Business Insider (4 April 2017), http://www.businessinsider.com/jpmorgan-using-machine-
learning-in-investmentbanking-2017-4; Martin Arnold & Laura Noonan, “Robots enter investment banks’
trading floors” Financial Times (6 July 2017), https://www.ft.com/content/ da7e3ec2-6246-11e7-8814-
0ac7eb84e5f1.
90
See e.g. PricewaterhouseCoopers, AI in Insurance: Hype or reality? (March 2016),
https://www.pwc.com/us/en/insurance/publications/assets/pwc-top-issues-artificialintelligence.pdf According
to PWC’s report, AI is expected to improve efficiencies via the greater automation of existing underwriting
and claims processes).
91
See e.g. Daniel Akst, “Computers Turn Medical Sleuths and Identify Skin Cancer”, The Wall Street Journal (10
February 2017), https://www.wsj.com/articles/computersturn-medical -sleuths-and-identify-skin-cancer-
1486740634; Alice Yan, “Dentists in China Successfully Used a Robot to Perform Implant Surgery Without
Human Intervention” Business Insider (1 September 2017), http://www.businessinsider.com/dentists-inchina-
used-a-robot-to-perform-implant-surgery-2017-9.
92
See e.g. Jacques Bughin et al, McKinsey Global Institute, Artificial Intelligence: The Next Frontier (June
2017), https://www.projectfinance.pl/pluginfile.php/109/mod_forum/attachment/487/MGI-Artificial-
Intelligence-Discussion-paper.pdf (the report considers the impact that civilian drones will have on efficient
product delivery, and also considers the implications of drones taking on time consuming and dangerous
jobs, such as inspecting turbines and aircrafts) at 29, 49, 56; Bonnie Docherty, Human Rights Watch, “Losing
Humanity: The Case against Killer Robots” (19 November 2012),
online:https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots; Mary Ellen
O’Connell, “Banning Autonomous Killing: The Legal and Ethical Requirement That Humans Make Near-Time
Lethal Decisions” in Matthew Evangelista and Henry Shue, eds, The American Way of Bombing: Changing
Ethical and Legal Norms, From Flying Fortresses to Drones (Ithaca, NY: Cornell University Press, 2014) 223 at
224-34.
93
See e.g. Frank Pasquale & Glyn Cashwell, “Four Futures of Legal Automation” (2015) 63 UCLA LR 26;
Benjamin Alarie, Anthony Niblett & Albert Yoon, “How Artificial Intelligence Will Affect the Practice of Law”
(2017) 68:1 UTLJ 106 (the authors argue that AI use in the legal profession will improve access to justice
and improve efficiency and transparency. The find the long-term impacts difficult to predict, however).
94
See 100 Year Study, supra at 24 (which explores the idea that special purpose robots will deliver packages,
clean offices, and enhance home security).

that translate language,95 and programs that assist with making human resource decisions. 96 Interesting uses for
AI are also emerging in the fields of environmental and social justice, including helping to increase food
production 97 and assisting people who are confronted with physical and intellectual barriers. 98 The potential uses
of AI seem endless at this time.
AI is Unstoppable
The commercial advancement of AI technology has not been limited in its geographic reach. While the United
States is home to Silicon Valley and the headquarters for tech giants like Apple, Google, HP, Netflix, and Tesla,
among others, interest in AI has taken hold across the globe. In July 2017, after AlphaGo 99 defeated the world Go
champion, the People’s Republic of China announced that it plans to “upgrade its economy with AI as a main driving

95
See e.g. Michał Ziemski, Marcin Junczys-Dowmunt & Bruno Pouliquen, “The United Nations Parallel Corpus
v1.0” (2016), <https://conferences.unite.un.org/UNCorpus/Content/Doc/un.pdf> (this paper was posted
online alongside the release of the United Nations’ Parallel Corpus. The Parallel Corpus is a collection of the
UN’s official records (in all six official languages of the UN) that cover the 25 years from 1990 to 2014 and
are already in the public domain. The goal of the project described in the paper is to translate the Corpus
into as many different languages as possible. Allowing access to the corpus will allow the development of
multilingual resources, encourage research, and encourage progress in various natural language processing
tasks, such as machine translation).
96
See e.g. Stefan Strohmeier & Franca Piazza, “Artificial Intelligence Techniques in Human Resource
Management – A Conceptual Exploration” in Cengiz Kahraman & Sezi Cevik Onar, eds, Intelligent Techniques
in Engineering Management: Theory and Applications (New York: Springer International Publishing, 2015).
97
See e.g. Matt Aitkenhead et al, “Weed and crop discrimination using image analysis and artificial intelligence
method” (2003) 39:3 Computers and Electronics in Agriculture 157; World Economic Forum, Shaping the
Future of Global Food Systems: A Scenarios Analysis (January 2017),
http://www3.weforum.org/docs/IP/2016/NVA/WEF_FSA_FutureofGlobalFoodSystems.pdf at 22 (the report
argues that technology, including AI, “can create significant new value through innovations for food
systems”); Rob Trice, “Can Artificial Intelligence Help Feed The World?” Forbes (5 September 2017),
online:https://www.forbes.com/sites/themixingbowl/2017/09/05/can-artificial-intelligence-helpfeed-the-
world/#3f99736846db (the author discusses several ways that AI can improve agriculture, such as by
automating harvesting and enabling image recognition in order to better detect pests that could negatively
impact crop yield).
98
See e.g. Simon D'Alfonso et al, “Artificial Intelligence-Assisted Online Social Therapy for Youth Mental Health”
(2017) 8:796 Frontiers in Psychology, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5454064; Tom
Simonite, “Machine Learning Opens Up New Ways to Help People with Disabilities” Technology Review (23
March 2017),https://www.technologyreview.com/s/603899/machine-learning-opens-up-newways-to-help-
disabled-people
99
AlphaGo is an AI program designed by UK company, DeepMind, to play the intuitive board game called Go.
The significant characteristic of the board game is that the ‘number of possible configurations of the board
are more than the number of atoms in the universe.’ AlphaGo implemented a type of AI, reinforcement
learning, to successfully account for this large number of possible configurations and teach itself to play the
game. AlphaGo became so advanced that it beat 18-Time World Champion Go Player, Lee Sedol, four games
to one. Within the reinforcement learning is the use of neural networks. The neural networks ‘mimic the
function of human brains by absorbing and distributing their information processing capacity to groups of
receptors that function like neurons; they find and create connections and similarities within the data they
process.’ The use of neural networks allows AI to learn from the input data and teach itself, rather than being
supervised by parameters from the already labelled input data. Therefore, reinforcement learning goes
beyond machine learning as it can make decisions. Reinforcement learning is an aspect of technology that
will warrant even further advancements in AI, and see AI enter ‘the realm of learning about and executing
actions in the real world.’ White, Courtney and Matulionyte, Rita, Artificial Intelligence Painting The Bigger
Picture For Copyright Ownership (December 5, 2019). Available at SSRN: https://ssrn.com/abstract=3498673
or http://dx.doi.org/10.2139/ssrn.3498673

force.” 100 Russian President Vladimir Putin made his country’s interest in AI clear when he announced that he
believes “[a]rtificial intelligence is the future, not only for Russia, but for all humankind … Whoever becomes the
leader in this sphere will become the ruler of the world.” 101 Evidence suggests that interest in AI is also high in
Japan, 102 the United Kingdom, 103 Singapore, 104 Germany, Israel, and India. 105
The private and public sectors in Europe and the US are actively engaged in taking a lead in this space. 106 Recent
regulatory and policy developments reflect a global tipping point toward serious regulation of artificial intelligence
in the U.S. and European Union (“EU”), which will have far-reaching consequences for technology companies,
government agencies, and the public.107

100
The State Council, The People’s Republic of China, Press Release, “China issues guideline on artificial
intelligence development” (20 July 2017),
http://english.gov.cn/policies/latest_releases/2017/07/20/content_281475742458322.htm.
101
“’Whoever leads in AI will rule the world’: Putin to children on Knowledge Day” Russia Today (1
September 2017), https://on.rt.com/8lz7
102
Japan, Prime Minister’s Office, Meeting of the Headquarters for Japan’s Economic Revitalization, New Robot
Strategy: Vision, Strategy, Action Plan (10 February 2015) (Japan has outlined a new robotic strategy
indicating the need to develop international standards and security measures); See also Japan, Ministry of
Economy, Trade and Industry, Press Release, Robotics Policy Office is to be Established in METI (1 July 2015),
online:http://www.meti.go.jp/english/press/2015/0701_01.html (explaining that a Robotic Policy Office is to
be established under the Ministry of Economy, Trade and Industry).
103
In February 2020, the UK Government’s Committee on Standards in Public Life published a report on
“Artificial Intelligence and Public Standards,” addressing the deployment of AI in the public sector. Although
it also did not favor the creation of a specific AI regulator, it described the new Centre for Data Ethics and
Innovation (“CDEI”) as a “regulatory assurance” body with a cross-cutting role, and went on to identify an
urgent need for guidance and regulation on the issues of transparency and data bias, in particular.
Committee on Standards in Public Life, Artificial Intelligence and Public Standards: report (Feb. 10, 2020),
available at https://www.gov.uk/government/publications/artificial-intelligence-and-public-standards-report.
In June 2020, CDEI published its “AI Barometer,” a risk-based analysis which reviews five key sectors
(criminal justice, health and social care, financial services, energy and utilities and digital and social media)
and identifies opportunities, risks, barriers and potential regulatory gaps. Centre for Data Ethics and
Innovation, AI Barometer Report (June 2020), available at
https://www.gov.uk/government/publications/cdei-ai-barometer/cdei-ai-barometer. The UK also participated
in the drafting of the Council of Europe’s Feasibility Study on Developing a Legal Instrument for Ethical AI. In
December 2020, the House of Lords’ Liaison Committee (“Committee”) published a report “AI in the UK: No
Room for Complacency” (the “2020 Report”), a follow up on the 2018 Report by the House of Lords’ Select
Committee (the “2018 Report”). House of Lords Liaison Committee, AI in the UK: No Room for Complacency,
7th Rep. of Session 2019-21 (Dec. 18, 2020), available at
https://publications.parliament.uk/pa/ld5801/ldselect/ldliaison/196/196.pdf.
104
Singapore, Prime Minister’s Office, National Research Foundation, Press Release (3 May 2017),
https://www.nrf.gov.sg/Data/PressRelease/Files/201705031442082191-
Press%20Release%20(AI.SG)%20(FINAL)%20-web.pdf (the Singapore National Research Foundation will
invest up to 150 million Singaporean dollars over the next five years into a new national program to enhance
AI research in Singapore).
105
Regulation of Artificial Intelligence in Selected Jurisdictions, The Law Library of Congress, Global Legal
Research Directorate, (January 2019). https://www.loc.gov/law/help/artificial-intelligence/regulation-artificial-
intelligence.pdf
106
On April 21, 2021, the European Commission (“EC”) presented its much anticipated comprehensive draft of an AI
Regulation (also referred to as the “Artificial Intelligence Act”). EC, Proposal for a Regulation of the European Parliament
and of the Council laying down Harmonised Rules on Artificial Intelligence and amending certain Union Legislative Acts
(Artificial Intelligence Act), COM(2021) 206 (April 21, 2021).
107
See The Law Library of Congress, Global Legal Research Directorate, Regulation of Artificial Intelligence in
Selected Jurisdictions (report examines the emerging regulatory and policy landscape surrounding artificial
intelligence (AI) in jurisdictions around the world and in the European Union (EU). In addition, a survey of
international organizations describes the approach that United Nations (UN) agencies and regional
organizations have taken towards AI. As the regulation of AI is still in its infancy, guidelines, ethics codes,

Many countries have already begun to assemble advisory groups and committees in order to investigate and report
to government about the potential of artificial intelligence. The European Parliament’s Committee on Legal Affairs
has published a “Report with Recommendations to the Commission on Civil Law Rules on Robotics.” 108 These
studies have all recognized that governments will need to gain expertise in artificial intelligence in order to attend
properly to the needs of their nations’ commercial industries and private citizens.
U.S. federal government activity addressing AI accelerated during the 115th and 116th Congresses. President
Donald Trump issued two executive orders, establishing the American AI Initiative (E.O. 13859) and promoting
the use of trustworthy AI in the federal government (E.O. 13960).
Federal committees, working groups, and other entities have been formed to coordinate agency activities, help
set priorities, and produce national strategic plans and reports, including an updated National AI Research and
Development Strategic Plan and a Plan for Federal Engagement in Developing Technical Standards and Related
Tools in AI.
In Congress, committees held numerous hearings, and Members introduced a wide variety of legislation to address
federal AI investments and their coordination; AI-related issues such as algorithmic bias and workforce impacts;
and AI technologies such as facial recognition and deepfakes.
At least four laws enacted in the 116th Congress focused on AI or included AI-focused provisions:
• The National Defense Authorization Act for FY2021 (P.L. 116-283) included provisions addressing various
defense- and security-related AI activities, as well as the expansive National Artificial Intelligence Initiative
Act of 2020 (Division E).
• The Consolidated Appropriations Act, 2021 (P.L. 116-260) included the AI in Government Act of 2020
(Division U, Title I), which directed the General Services Administration to create an AI Center of
Excellence to facilitate the adoption of AI technologies in the federal government.
• The Identifying Outputs of Generative Adversarial Networks (IOGAN) Act (P.L. 116-258) supported
research on Generative Adversarial Networks (GANs), the primary technology used to create deepfakes.
• P.L. 116-94 established a financial program related to exports in AI among other areas.109
AI Ethics Emerges as Response to Growing Concern Over the Impact of AI
Key AI issues that governments will be prioritizing include the risk posed by weaponized AI, concerns about data
and privacy, responding to the inevitability of job loss, and the need to update intellectual property laws to support
the growth of national AI industries. 110
AI ethics is a field that has emerged as a response to the growing concern regarding the impact of artificial
intelligence. It can be read as a subset of the wider field of digital ethics, which addresses concerns raised by the
development and deployment of new digital technologies, such as AI, big data analytics and blockchain
technologies.

and actions by and statements from governments and their agencies on AI are also addressed. While the
country surveys look at various legal issues, including data protection and privacy, transparency, human
oversight, surveillance, public administration and services, autonomous vehicles, and lethal autonomous
weapons systems, the most advanced regulations were found in the area of autonomous vehicles, in
particular for the testing of such vehicles. Available at https://www.loc.gov/law/help/artificial-
intelligence/regulation-artificial-intelligence.pdf
108
European Parliament, Committee on Legal Affairs, Report with recommendations to the Commissioner on
Civil Law Rules on Robotics (27 January 2017) [European Robotics].
109
Congressional Research Service, Artificial Intelligence: Background, Selected Issues, and Policy Considerations, at
Introduction (May 19, 2021) https://crsreports.congress.gov/product/pdf/R/R46795
110
Gaon, Aviv and Stedman, Ian, A Call to Action: Moving Forward With the Governance of Artificial
Intelligence in Canada (February 4, 2019). Alberta Law Review, Vol. 56, No. 4, 2019, Available at SSRN:
https://ssrn.com/abstract=3328588


There is mounting public concern over the influence that artificial intelligence and high technology companies
have in our society. 111 One concern after another has plagued these companies: from serving as conduits for
election interference to developing weaponized artificial intelligence, high-tech firms are under scrutiny, and there
is a growing need for a check on the largely unregulated artificial intelligence sector.

Employees are exercising greater muscle: when they see their employer taking on a contract they perceive as
involving morally questionable AI work, they are taking action. As companies leverage data and artificial
intelligence to create scalable solutions, they are also scaling their reputational, regulatory, and legal risks.
For instance, Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather
app. 112 Optum is being investigated by regulators for creating an algorithm that allegedly recommended that
doctors and nurses pay more attention to white patients than to sicker black patients. 113 Goldman Sachs is being
investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting larger
credit limits to men than women on their Apple cards. 114 Facebook infamously granted Cambridge Analytica, a
political firm, access to the personal data of more than 50 million users. 115 Arizona Attorney General Mark Brnovich
sued Google alleging the tech giant violated its users’ privacy by collecting information about their whereabouts
even if they had turned off such digital tracking. 116

At Google, employees recently protested their own company’s bid to develop a search engine for China that
censors results based on terms blacklisted by the Chinese government. 117

Due to the highly specialized skills needed to develop the AI sector, employees see that they have power, too. 118
This has serious implications, especially for national security, when private sector employees refuse to work on
military or government contracts with their employer. However, this state of affairs has led to a pronounced private
and public sector commitment to AI ethics.

Google stepped away from bidding for a cloud-computing contract with the Pentagon worth $10 billion after
employees asked for assurances that it wouldn’t develop warfare technology. Existing employees at a number of
companies have recently agitated for their employers to attend more

111
Deeks, Ashley, Facebook Unbound? (February 25, 2019). Virginia Law Review, Vol. 105, 2019, Virginia
Public Law and Legal Theory Research Paper No. 2019-08, Available at SSRN:
https://ssrn.com/abstract=3341590; See also Ann M. Lipton, Not Everything Is About Investors: The Case for
Mandatory Stakeholder Disclosure, 37 Yale J. on Reg. (2020)
https://digitalcommons.law.yale.edu/yjreg/vol37/iss2/3
112
Sam Dean, L.A. is suing IBM for illegally gathering and selling user data through its Weather Channel app, (Jan. 4, 2019)
https://www.latimes.com/business/technology/la-fi-tn-city-attorney-weather-app-20190104-story.html
113
Melanie Evans and Anna Wilde Mathews, New York Regulator Probes UnitedHealth Algorithm for Racial Bias, (Oct. 26,
2019) https://www.wsj.com/articles/new-york-regulator-probes-unitedhealth-algorithm-for-racial-bias-11572087601
114
Taylor Telford, Apple Card algorithm sparks gender bias allegations against Goldman Sachs, (Nov. 11, 2019)
https://www.washingtonpost.com/business/2019/11/11/apple-card-algorithm-sparks-gender-bias-allegations-against-
goldman-sachs/
115
PAOLO ZIALCITA, Facebook Pays $643,000 Fine For Role In Cambridge Analytica Scandal, (Oct. 30, 2019)
https://www.npr.org/2019/10/30/774749376/facebook-pays-643-000-fine-for-role-in-cambridge-analytica-scandal
116
Tony Romm, Arizona sues Google over allegations it illegally tracked Android smartphone users’ locations, (May 27,
2020) https://www.washingtonpost.com/technology/2020/05/27/google-android-privacy-lawsuit/
117
See infra.
118
Can Employees Change the Ethics of Tech Firms, KNOWLEDGE@WHARTON (Nov. 13, 2018),
http://knowledge.wharton.upenn.edu/article/can-tech-employees-change-theethics-of-their-firms/
[https://perma.cc/ZR8N-WN7V]

carefully to the social implications of their work. At Google, Microsoft, and Amazon, employees have revolted
against projects that raise ethical concerns. 119 Demonstrating the power of employee protest to reach beyond
consumer-facing industries, employees forced the consulting firm McKinsey to cancel a contract with the US
Department of Homeland Security’s Immigration and Customs Enforcement (ICE). 120
Wayfair employees staged a walkout over the company’s furniture sales to Texas immigrant detention
centers.121

The phenomenon of employees demanding greater ethical behavior from their employers is not confined to tech
companies, although, as will be argued below, tech employees seem to have more clout due to their valued expertise.
McDonald’s workers held a walkout in September 2018 to raise awareness about sexual harassment claims at
the company. In February 2020, film producer Harvey Weinstein, founder of Miramax and The Weinstein Company,
was sentenced to 23 years in prison for sex crimes. The charges against the Hollywood mogul, which began in
2017, caused a trend among victims of sexual misconduct to tell their stories. Since then, allegations of sexual
misconduct have spread throughout the business and entertainment community. With the rise of the #MeToo
movement, sexual misconduct of senior executives has come to light more often, causing firings, corporate
scandals, and even serious damage to the stock price of the firms that employ them. The McDonald’s corporation
fired its chief executive (CEO), Steve Easterbrook, for failing to disclose a consensual sexual relationship with
a subordinate, which caused a 2.72% drop in the stock price on the date of the announcement. Although he was
not formally accused of sexual harassment, the relationship violated company policy and exposed McDonald’s toxic
culture of systematic sexual harassment of employees worldwide. 122 In September 2018, after 10 employees filed sexual
harassment complaints with the U.S. Equal Employment Opportunity Commission (EEOC), McDonald’s employees
organized the first ever multi-state strike against the company’s existing sexual harassment policies. 123

119
Kate Conger & Cade Metz, Tech Workers Now Want to Know: What Are We Building This For?, N.Y. TIMES
(Oct. 7, 2018), https://www.nytimes.com/2018/10/07/technology/tech-workersask-censorship-
surveillance.html [https://perma.cc/6W2K-UVFL]; Kelsey Gee, The New Labor Movement: Pushing Employers
to Be Socially Active, WALL ST. J. (June 25, 2019), https://www.wsj.com/articles/the-new-labor-movement-
pushing-employers-to-be-socially-active11561476199 [https://perma.cc/B8NR-A6BP]. Employees also
persuaded Microsoft to suspend and review its political donation program. See Billy Nauman et al., BlackRock
Under Fire, (Dual) Class War, PAC Attack, FIN. TIMES (Aug. 7, 2019), https://www.ft.com/content/4ad02fa4-
b895-11e9-8a88-aa6628ac896c [https://perma.cc/47ZR-Q75D].
120
Michael Forsythe & Walt Bogdanich, McKinsey Ends Work With ICE Amid Furor Over Immigration Policy, N.Y.
TIMES (July 9, 2018), https://www.nytimes.com/2018/07/09/business/mckinsey-ends-ice-contract.html
[https://perma.cc/46CC-FQLF]. Deloitte and GitHub employees have also urged that their employers end
relationships with ICE. Gee, supra note 85; Nitasha Tiku, Employees ask GitHub to Cancel ICE Contract: ‘We
Cannot Offset Human Lives with Money,’ WASH. POST (Oct. 9, 2019),
https://www.washingtonpost.com/technology/2019/10/09/employees-ask-github-cancel-ice-contract-
wecannot-offset-human-lives-with-money [https://perma.cc/DD9A-UDCX].
121
Abha Bhattarai, ‘A Cage Is Not a Home’: Hundreds of Wayfair Employees Walk Out to Protest Sales to Migrant
Detention Center, WASH. POST (June 26, 2019),
https://www.washingtonpost.com/business/2019/06/26/cage-is-not-home-hundreds-wayfair-employeeswalk-
out-protest-sales-migrant-detention-center [https://perma.cc/2TNM-Z7AM]. Slack engineers agreed not to
build tools that can be used to target immigrants. Cameron Bird et al., The Tech Revolt, CALIF. SUNDAY MAG.
(Jan. 23, 2019), https://story.californiasunday.com/tech-revolt[https://perma.cc/B94U-7D2Y].
122
Mooibroek, Robert and Verschoor, Willem F. C., Stock Market Response to CEO Sexual Misconduct: Evidence
from the #MeToo Era (January 22, 2021). Available at SSRN: https://ssrn.com/abstract=3771575 or
http://dx.doi.org/10.2139/ssrn.3771575
123
Daniella Silva, McDonald’s Workers Go on Strike over Sexual Harassment, NBC NEWS (Sep. 18, 2018),
https://www.nbcnews.com/news/us-news/mcdonald-s-workers-go-strike-over-sexual-harassment-n910656
[https://perma.cc/9QZ8-KFKW]. See also Rachel Abrams, McDonald’s Workers Across the U.S. Stage #MeToo
Protests, N.Y. TIMES (Sep. 18, 2018), https://www.nytimes.com/2018/09/18/business/mcdonalds-strike-
metoo.html [http s://perma.cc/GK64-N7UM]; Sarah Whitten, McDonald’s Employees Stage First #MeToo
Strike in Chicago, Alleging Sexual Harassment, USA TODAY (Sep. 18, 2018),
https://www.usatoday.com/story/money/food/2018/09/18/mcdonal

The #MeToo movement erupted against this shifting dynamic. Thus, it is not surprising that employees are
leveraging their growing voice to address unethical behaviors by senior executives at iconic companies like Google,
McDonald’s, Uber, Amazon, and Nike. 124

But certain characteristics unique to the tech sector are making it the locus for tension between corporate values
and profits. Given the competition for a limited supply of people with skills in key areas like machine learning, tech
employees can vote with their feet. “So they are increasingly making their voices heard, and we see more and
more examples of activism of this type. Essentially, what it means is that another extremely important factor that
tech managers now have to consider is how the ethical and moral implications of their choices affect their ability
to attract and retain talent.”125

Legal and Ethical Issues of Artificial Intelligence and Robotics

As discussed above, the rapid growth of AI systems has implications for a wide variety of fields, promising benefits
for healthcare, education, global logistics, and transportation while also bringing far-reaching changes in
employment, the economy, and security. 126 As these systems gain acceptance and become more commonplace, the
critical questions set out earlier, concerning legality, security, safety, privacy, data responsibility, and recourse in
the event of system failure, come to the fore. Because the regulation of AI is still in its infancy, guidelines, ethics
codes, and actions by and statements from governments and their agencies continue to address these legal issues,
from data protection and privacy, transparency, and human oversight to surveillance, public administration and
services, autonomous vehicles, and lethal autonomous weapons systems. 127
US National Security Commission on Artificial Intelligence Final Report
The National Defense Authorization Act of 2019 created a 15-member National Security Commission on Artificial
Intelligence (“NSCAI”), and directed that the NSCAI “review and advise on the competitiveness of the United States
in artificial intelligence, machine learning, and other associated technologies, including matters related to national
security, defense, public-private partnerships, and investments.” 128 Over the past two years, NSCAI has issued
multiple reports, including interim reports in November 2019 and October 2020, two additional quarterly
memorandums, and a series of special reports in response to the COVID-19 pandemic. 129
On March 1, 2021, the NSCAI submitted its Final Report to Congress and to the President. At the outset, the report
makes an urgent call to action, warning that the U.S. government is presently not sufficiently organized or
resourced to compete successfully with other nations with respect to emerging technologies, nor prepared to

124
See Tom C.W. Lin, Incorporating Social Activism, 98 B.U. L. REV. 1535, 1535, 1546-47 (2018)
(“Corporations . . . are at the forefront of some of the most contentious and important social issues of our
time.”).
125
https://knowledge.wharton.upenn.edu/article/can-tech-employees-change-the-ethics-of-their-firms/
126
Martin Ford, Architects of Intelligence: The Truth about AI from the People Building it, (2018) ( consists of
conversations with the most prominent research scientists and entrepreneurs working in the field of artificial
intelligence, including Demis Hassabis, Geoffrey Hinton, Ray Kurzweil, Yann LeCun, Yoshua Bengio, Nick
Bostrom, Fei-Fei Li, Rodney Brooks, Andrew Ng, Stuart J. Russell and many others. The conversations
recorded in the book delve into the future of artificial intelligence, the path to human-level AI (or artificial
general intelligence), and the risks associated with progress in AI.)
127
Regulation of Artificial Intelligence in Selected Jurisdictions, The Law Library of Congress, Global Legal Research
Directorate, (January 2019). https://www.loc.gov/law/help/artificial-intelligence/regulation-artificial-intelligence.pdf
128
H.R. 5515, 115th Congress (2017-18).
129
The National Security Commission on Artificial Intelligence, Previous Reports, available at
https://www.nscai.gov/previous-reports/.

defend against AI-enabled threats or to rapidly adopt AI applications for national security purposes. Against that
backdrop, the report outlines a strategy to get the United States “AI-ready” by 2025. 130 The Commission concludes:
The United States should invest what it takes to maintain its innovation leadership, to responsibly use AI to defend
free people and free societies, and to advance the frontiers of science for the benefit of all humanity. AI is going
to reorganize the world.
The more than 700-page report consists of two parts: Part I, “Defending America in the AI Era,” makes
recommendations on how the U.S. government can responsibly develop and use AI technologies to address
emerging national security threats, focusing on AI in warfare and the use of autonomous weapons, AI in intelligence
gathering, and “upholding democratic values in AI.” The report’s recommendations identify specific steps to
improve public transparency and protect privacy, civil liberties and civil rights when the government is deploying
AI systems. NSCAI specifically endorses the use of tools to improve transparency and explainability: AI risk and
impact assessments; audits and testing of AI systems; and mechanisms for providing due process and redress to
individuals adversely affected by AI systems used in government. The report also recommends establishing
governance and oversight policies for AI development, which should include “auditing and reporting requirements,”
a review system for “high-risk” AI systems, and an appeals process for those affected.
Part II, “Winning the Technology Competition,” outlines urgent actions the government must take to promote AI
innovation to improve national competitiveness, secure talent, and protect critical U.S. advantages, including IP
rights. The report highlights how stringent patent eligibility requirements in U.S. courts, particularly with respect
to computer-implemented and biotech-related inventions, and a lack of explicit legal protections for data have
created uncertainty in IP protection for AI innovations, discouraging the pursuit of AI inventions and hindering
innovation and collaboration. NSCAI also notes that China’s significant number of patent application filings has
created a vast reservoir of “prior art” and made the USPTO’s patent examination process increasingly difficult. As
such, the report recommends that the President issue an executive order to recognize IP as a national priority, and
develop a comprehensive plan to reform IP policies to incentivize and protect AI and other emerging technologies.
131

The NSCAI report may provide an opportunity for legislative reform, which would spur investments in AI technologies
and accelerate government adoption of AI technologies in national security. The report’s recommendations with
respect to transparency and explainability132 may also have significant implications for potential oversight and
regulation of AI in the private sector.
Governing AI is Hard, and Relies on Public Pressure
Importantly, governing AI means that we are not only concerned with what happens after technology is built,

130
NSCAI, The Final Report (March 1, 2021), available at https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-
Digital-1.pdf.
131
Some of these concerns echo prior actions by the USPTO. For example, the USPTO issued the 2019 Revised Patent-
Eligibility Guidance, which reportedly resulted in a 44% decrease in uncertainty of patent examination subject matter.
However, the guidance has not been broadly applied by courts and leads to mixed results. Additionally, the USPTO in
October 2020 issued a report on Public Views on Artificial Intelligence and Intellectual Property Policy, observing that
commentators “were nearly equally divided between the view that new intellectual property rights were necessary to
address AI inventions and the belief that the current U.S. IP framework was adequate to address AI inventions.” As
discussed below, however, the USPTO continues to hold the view that an inventor to a patent must be a natural person.
132
For a discussion of the need for explainability in the legal sector, see, Deeks, Ashley, The Judicial Demand for
Explainable Artificial Intelligence (August 1, 2019). 119 Colum. L. Rev. __ (2019 Forthcoming), Virginia Public
Law and Legal Theory Research Paper No. 2019-51, Available at SSRN: https://ssrn.com/abstract=3440723 (“A
recurrent concern about machine learning algorithms is that they operate as “black boxes,” making it difficult to
identify how and why the algorithms reach particular decisions, recommendations, or predictions. Yet judges will
confront machine learning algorithms with increasing frequency, including in criminal, administrative, and tort
cases. This Essay argues that judges should demand explanations for these algorithmic outcomes. One way to
address the “black box” problem is to design systems that explain how the algorithms reach their conclusions or
predictions. If and as judges demand these explanations, they will play a seminal role in shaping the nature and
form of “explainable artificial intelligence” (or “xAI”). Using the tools of the common law, courts can develop
what xAI should mean in different legal contexts.”)

with regulating something that already exists, but also with everything that contributes to its development in the
first place: questions about what is built, why, and for whom. We know that AI does not emerge out of a vacuum.
Technological developments are shaped by business models, public discourse, national values, and structural
power dynamics. Every aspect of AI development and use, from who was in the room, to what tools are built, to
what training data is used, to how systems are tested and how they are monetized, is shaped by human decisions
and historical realities about power and technology.

This is always true of technological developments, but it is particularly noteworthy for AI, because we are creating
computer systems that automate human decisions and practices, which means that the values and the biases we
encode into those AI systems are going to be amplified around the world.

All of that makes governing AI hard, but it also means that there are numerous opportunities to create change.
It is not only the technologists and the lawmakers who have decision-making power; it is also the journalists, the
tech workers, the advocates, and the academics. Actors from all disciplines can change the discourse and practices
around AI. AI’s influence in our lives is already substantial, and it will become significantly more so.

What has happened so far to govern the legal and ethical development of AI in this technological revolution has
largely been driven by tech employees. Jessica Newman of the Center for Long-Term Cybersecurity at U.C. Berkeley
shared a framework for how the AI governance landscape is unfolding, using a comparative lens to highlight gaps and
opportunities for action. Numerous companies, organizations and governments have made strides over the past
few years on strategies and principles to guide AI. But how do we make sense of them and relate them to the
important issues that we face? The framework assesses and compares what these strategies are really doing and
where they're focusing their attention.

The framework is called the “AI Security Map,” 133 and it includes 20 policy priorities for the development of AI
across four security domains: digital and physical, political, economic, and social. Each topic describes a priority
intended to help mitigate an issue that could cause significant harm to people or cause system-level destabilization.
These are some of the most pressing issues in the global security landscape of AI, and top contenders for our
governance efforts. For example, within the digital and physical domain, the map includes the priority of having AI
systems that are robust against attack, and the responsible use of AI in the military. The economic domain includes
issues such as mitigating technological labor displacement and reducing AI-induced inequalities. And the social
domain includes issues such as privacy and data rights. The AI Security Map 134 below can be used to provide a
visual representation of high-level synergies and divergences among key actors in the AI ecosystem.

133
Jessica Newman, CLTC Report, Toward AI Security: Global Aspirations for a More Resilient Future, (Feb.
2019) Figure I. AI Security Map https://cltc.berkeley.edu/wp-
content/uploads/2019/02/Toward_AI_Security.pdf
134
Id. at 14 (Feb. 2019) Figure I. AI Security Map

[Figure I: AI Security Map]

There are limitations to this framework: the maps show only a snapshot of priorities as indicated in a single
document or initiative, and should not be interpreted as fully representing the priorities of an entire institution
or government.

Consider Google’s AI Principles, published in June 2018 135 and mapped in the Figure above. The principles
prioritize the creation of safe, reliable, and robust AI systems that are appropriately transparent, fair, non-harmful,
and in line with international law and human rights.

The map has some gaps. Google’s AI Principles do not address issues related to the political security domain, such
as the spread of disinformation, the opportunity to support government services, or checks against surveillance.
Nor do the principles reference issues related to the economic security domain, including mitigating job
displacement, providing educational resources, preventing growing inequality, or supporting market competition.
Google’s priorities for AI, as represented in these principles, fall solely within the digital and physical and the
social domains.
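
A rough sketch of the comparative exercise this mapping supports appears below, in Python. The domain and priority labels are abridged paraphrases of the examples quoted in this section, not the report’s full list of 20 priorities, and the “coverage” set for a given document would in practice be coded by a human reader; the code merely automates the set comparison that surfaces the gaps.

    # Abridged, paraphrased stand-in for the AI Security Map (illustrative only).
    SECURITY_MAP = {
        "digital/physical": {"robustness against attack", "responsible military use of AI"},
        "political": {"countering disinformation", "supporting government services",
                      "checks against surveillance"},
        "economic": {"mitigating labor displacement", "reducing AI-induced inequality",
                     "supporting market competition"},
        "social": {"privacy and data rights"},
    }

    def gap_report(covered):
        # For each security domain, list the mapped priorities a document omits.
        return {domain: priorities - covered
                for domain, priorities in SECURITY_MAP.items()}

    # Hypothetical human coding of one company's published AI principles:
    principles_coverage = {"robustness against attack", "privacy and data rights"}

    for domain, gaps in gap_report(principles_coverage).items():
        if gaps:
            print(f"{domain}: no stated priority on {', '.join(sorted(gaps))}")

Run on the hypothetical coding above, the gaps cluster in the political and economic domains, mirroring the observation just made about Google’s principles.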

These gaps are not surprising for a tech company, but they highlight the reality that industry leaders are unlikely
to address the full set of global security challenges posed by AI systems. A white paper published by the company
pointed out that some contentious uses of AI could have such a transformational effect on society that relying on
companies alone to set standards would be inappropriate, not because companies cannot be trusted to be impartial
and responsible, but because delegating such decisions to companies would be undemocratic. 136

135
Google, https://ai.google/principles/
136
In relation to facial recognition technology, Microsoft’s President Brad Smith stated: “While we appreciate that
some people today are calling for tech companies to make these decisions – and we recognize a clear need for
our own exercise of responsibility . . . we believe this is an inadequate substitute for decision making by the
public and its representatives in a democratic republic.” Brad Smith, Facial Recognition Technology: The Need
for Public Regulation and Corporate Responsibility, MICROSOFT (July 13, 2018), https://blogs.microsoft.com/on-
the-issues/2018/07/13/facial-recognitiontechnology-the-need-for-public-regulation-and-corporate-responsibility/.

Assuming that Congress produces significant legislation on privacy, machine learning algorithms, or law
enforcement uses of tools such as facial-recognition software, the same types of mechanisms that constrain the
national security Executive might helpfully constrain the technologies and companies. 137

Public pressure and critiques already have played an important role in prompting companies such as Facebook and
Twitter to establish more robust policies on user privacy and content regulation. This pressure has also forced the
companies to be more transparent about their privacy and content moderation policies and the algorithms that
they use to identify trolls and harassers. 138 Further, public criticism has led Facebook to remove the accounts of
particular actors, including those of twenty Burmese officials and organizations responsible for what the United
Nations concluded was genocide against the Rohingya. 139 These new pressures come not only from the
technologies’ users but also from the companies’ employees. 140 Facing demands from its employees, Google
declined to extend its contract with the Defense Department, under which the company provided support to a
project deploying machine learning algorithms to war zones. 141 Amazon is facing a similar challenge: 450 of
its employees reportedly wrote to CEO Jeff Bezos to demand that Amazon cease selling its facial-recognition
software (which the company calls Rekognition) to police. 142

See also Deeks, Ashley, Facebook Unbound? (February 25, 2019). Virginia Law Review, Vol. 105, 2019, Virginia
Public Law and Legal Theory Research Paper No. 2019-08, Available at SSRN:
https://ssrn.com/abstract=3341590
137
Deeks, Ashley, Facebook Unbound? (February 25, 2019). Virginia Law Review, Vol. 105, 2019, Virginia Public
Law and Legal Theory Research Paper No. 2019-08, Available at SSRN: https://ssrn.com/abstract=3341590
(“This essay examines a different context in which our checks and balances have proven weak: the national
security space. It recounts the basic challenges that the other two branches have faced in checking the
Executive’s national security activities. The essay then identifies the ways in which those challenges resonate in
the context of checking technology companies, helping us to understand why it has proven difficult for Congress
and the courts (and the Executive) to weave a set of legal constraints around technology companies that offer
us social media platforms, build advanced law enforcement tools, and employ machine learning algorithms to
help us search, buy, and drive. The essay explores alternative sources of constraints on the national security
Executive, drawing inspiration from those constraints to envision other ways to shape the behavior of today’s
technology behemoths and other companies whose products are driven by our data.”)
138
Sarah Frier, Facebook Publishes Content Removal Policies for the First Time, Bloomberg (Apr. 24, 2018), https://www.bloomberg.com/news/articles/2018-04-24/facebook-publishes-content-removal-policies-for-the-first-time [https://perma.cc/4T4E-CFV7] (noting that the “release of the document follows frequent criticism and confusion about the company’s policies”); Julia Carrie Wong, Twitter Announces Global Change to Algorithm in Effort to Tackle Harassment, Guardian (May 15, 2018), https://www.theguardian.com/technology/2018/may/15/twitter-ranking-algorithm-change-trolling-harassment-abuse [https://perma.cc/9LR5-THZT].
139
Antoni Slodkowski, Facebook Bans Myanmar Army Chief, Others in Unprecedented Move, Reuters (Aug. 27,
2018), https://www.reuters.com/article/us-myanmar-facebook-/facebook-bans-myanmar-army-chief-others-
in-unprecedented-move-idUSKCN1LC0R7 [https://perma.cc/MU2B-RJU5].
140
See Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv.
L. Rev. 1598, 1627–28 (2018).
141
Daisuke Wakabayashi & Scott Shane, Google Will Not Renew Pentagon Contract That Upset Employees, N.Y.
Times (June 1, 2018), https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-
maven.html [https://perma.cc/TAN3-N7QB]. See also Farhad Manjoo, Why the Google Walkout was a
Watershed Moment in Tech, N.Y. Times (Nov. 7, 2018),
https://www.nytimes.com/2018/11/07/technology/google-walkout-watershed-tech.html
[https://perma.cc/52SF-DS75] (“Protests by [Google’s] workers are an important new avenue for pressure;
the very people who make these companies work can change what they do in the world.”).
142
Isabel Asher Hamilton, An Amazon Staffer Says Over 450 Employees Wrote to Jeff Bezos Demanding
Amazon Stop Selling Facial-Recognition Software to Police, Bus. Insider (Oct. 17, 2018),
https://www.businessinsider.com/amazon-employee-letter-jeff-bezos-facialrecognition-software-police-2018-
10 [https://perma.cc/4C93-ARDP].

Like the national security Executive, these companies also are keenly attuned to potential litigation or legislation,
and often change their behavior in an effort to fend off those alternatives. Microsoft in particular has been forward-
leaning in an effort to help shape any legislation that might come down the pike. In testimony before the U.K.
Parliament about regulation of artificial intelligence (“AI”), a Microsoft official told the committee that regulating AI
was a job “for the tech industry, the Government, NGOs and the people who will ultimately consume the services”
and that it was important “to find a way of convening those four parties together to drive forward that
conversation.” 143 Microsoft has also asked Congress to regulate facial-recognition software and has suggested
specific areas on which Congress might focus.144

Facebook, Twitter, and Google all revealed how Russian agents had used their platforms prior to their officials’
testimony before Congress, where they expected to be asked about that topic. 145 Facebook announced its
strengthened advertising disclosure policies in an attempt to preempt a bill imposing such requirements by law.
More recently, Facebook revealed its intention to create an international body to adjudicate content decisions,
which may well be an effort to stave off more stringent regulation by Congress. 146 Google’s CEO, by contrast, declined to appear before Congress, even though he faced significant public pressure to do so. 147 In general, though, even if Congress cannot unite to enact laws, it has managed to convene congressional hearings that have extracted important information and policy changes from the tech companies.

Foreign governments have also imposed constraints on U.S. technology companies. Just as the U.S. military and
intelligence communities sometimes find themselves bound by foreign laws during overseas operations, the U.S.
tech companies face direct exposure to foreign legal systems, which have in several cases imposed laws and
penalties on them. For example, the EU’s General Data Protection Regulation (“GDPR”) requires companies that
process personal data to obtain the affirmative consent of those whose data they are using (the “data subject”).
148
Those processors must also provide, at the data subject’s request, any information they have on the subject;
must rectify inaccurate personal data; and must erase the subject’s data at her request. Finally, the GDPR generally
prohibits companies from transferring personal data outside the EU, unless the European Commission determines
that the data protection laws of the receiving jurisdiction are adequate. 149 Although formally directed only to
companies that are located in the EU or that provide services to or monitor the behavior of people in the EU, the
GDPR’s impact has been global. Virtually all of the companies discussed in this essay must comply with the GDPR.
The EU also fined Google $2.7 billion for disadvantaging its competition by steering search engine users toward its

143
Science and Technology Committee, Robotics and Artificial Intelligence, 2016–17, HC 145, ¶ 66 (UK).
144
https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility [https://perma.cc/MT9Q-GZW4].
145
Mike Isaac & Daisuke Wakabayashi, Russian Influence Reached 126 Million Through Facebook Alone, N.Y.
Times (Oct. 30, 2017), https://www.nytimes.com/2017/10/30/technology/facebook-google-russia.html
[https://perma.cc/UW23-ZGG5].
146
Neil Malhotra, Benoit Monin & Michael Tomz, Does Private Regulation Preempt Public Regulation?, Am. Pol.
Sci. Rev. 1 (2018); Evelyn Douek, Facebook’s New ‘Supreme Court’ Could Revolutionize Online Speech,
Lawfare (Nov. 19. 2018), https://www.lawfareblog.com/facebooks-new-supreme-court-could-revolutionize-
online-speech [https://perma.cc/AHM4-WEMA].
147
Steven T. Dennis, Senators Criticize Google CEO for Declining to Testify, Bloomberg (Aug. 28, 2018), https://www.bloomberg.com/news/articles/2018-08-28/google-ceo-pichai-faulted-by-senators-for-declining-to-testify [https://perma.cc/4K4E-2R5Q].
148
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection
of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data,
and Repealing Directive 95/46/EC (General Data Protection Regulation), art. 7, 2016 O.J. (L 119) 2.
149
Id. arts. 44–46, at 8–9.

comparison-shopping site. 150 The EU apparently also is considering whether to bring a case against Amazon. 151
In short, foreign governments have constrained U.S. tech companies, even when the U.S. government itself has
not.
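
To give an engineering-level sense of the GDPR obligations described above (affirmative consent, access, rectification, and erasure), the following minimal Python sketch models how a data processor might expose those rights. It is a sketch under stated assumptions: the record layout, method names, and in-memory store are hypothetical illustrations, not any company's actual implementation.

    # Minimal sketch of GDPR data-subject rights handling (illustrative only;
    # the record layout and method names are hypothetical assumptions).
    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class DataSubjectRecord:
        subject_id: str
        consented: bool = False  # Art. 7: processing requires affirmative consent
        personal_data: Dict[str, str] = field(default_factory=dict)

    class PersonalDataStore:
        def __init__(self) -> None:
            self._records: Dict[str, DataSubjectRecord] = {}

        def record_consent(self, subject_id: str) -> None:
            # Consent must be affirmative; nothing is processed without it.
            self._records.setdefault(subject_id, DataSubjectRecord(subject_id)).consented = True

        def collect(self, subject_id: str, key: str, value: str) -> None:
            record = self._records.setdefault(subject_id, DataSubjectRecord(subject_id))
            if not record.consented:
                raise PermissionError("no affirmative consent on file for this subject")
            record.personal_data[key] = value

        def access_request(self, subject_id: str) -> Optional[Dict[str, str]]:
            # Right of access: provide any information held on the subject.
            record = self._records.get(subject_id)
            return dict(record.personal_data) if record else None

        def rectify(self, subject_id: str, key: str, corrected: str) -> None:
            # Right to rectification of inaccurate personal data.
            self._records[subject_id].personal_data[key] = corrected

        def erase(self, subject_id: str) -> None:
            # Right to erasure at the subject's request.
            self._records.pop(subject_id, None)

    store = PersonalDataStore()
    store.record_consent("alice")
    store.collect("alice", "email", "alice@example.com")
    print(store.access_request("alice"))  # right of access
    store.erase("alice")                  # right to erasure

A real processor would also need cross-border transfer checks and audit logging; the point here is only that each GDPR right maps naturally onto a concrete operation over stored personal data.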


Many of the same dynamics that have made it difficult to rein in a powerful national security Executive are playing
out in the technology space—leading to what we might call the “Facebook Unbound” phenomenon. 155 Indeed, the

150
Robert Levine, Antitrust Law Never Envisioned Massive Tech Companies Like Google, Bos. Globe (June 13,
2018), https://www.bostonglobe.com/ideas/2018/06/13/google-hugelypowerful-antitrust-law-
job/E1eqrlQ01g11DRM8I9FxwO/story.html [https://perma.cc/ZH7D-TEVR].
151
Guadalupe Gonzales, E.U. Antitrust Commission Sets Sights on Amazon. Here’s Why, Inc. (Sept. 21, 2018),
https://www.inc.com/guadalupe-gonzalez/amazon-margrethe-vestager-preliminary-investigation.html
[https://perma.cc/EP6Z-4SHB].
155
“The reasons for the failures to constrain the national security Executive and technology companies are not
unique to these two contexts. However, because there are several important similarities, certain lessons from
the national security context can inform how we might proceed in the technology context. Nor do I mean to
suggest an overly strong identity between the Executive and powerful technology companies. It should go
without saying that there are significant differences between the two. The President faces democratic
accountability; the tech companies do not. Unlike the President, tech companies cannot veto legislation. Nor
can they invoke executive privilege when Congress asks for information. The companies are not entitled to
deference by courts, and it is easier to hold them accountable when they violate the law.” Deeks, Ashley,

academic and foreign-policy conversation about the Executive’s undue power in the national security space, which
was a constant refrain in the post-9/11 era, has died down, to be replaced by conversation about the undue power
of large technology companies. 156 Growing numbers protest tech companies’ power and the lack of restrictions on
how they use our data or control content on their platforms, and on how the government uses their products in
ways that implicate our privacy. The journalist Farhad Manjoo, for example, has adopted the term “Frightful Five”
to refer to Amazon, Apple, Facebook, Google, and Microsoft (all of which own other major technology and consumer
products companies, including WhatsApp, Instagram, Waze, YouTube, Audible, Zappos, Whole Foods, and Waymo).
157
Other technology companies that have faced limited regulation include social media platforms such as Twitter;
manufacturers of self-driving cars; Uber and Lyft; and companies that use “big data” and machine learning
algorithms to produce highly sophisticated, privacy-implicating technologies for the U.S. military and federal, state,
and local law enforcement.158

What unites these companies is their systematic collection and use of vast amounts of user data to make their
products more powerful and their use of machine learning algorithms based on that data to make their systems
more effective and more profitable. Some observers are untroubled by the relative lack of constraints on these
companies and worry far more about the fact that the national security Executive is unbound. After all, the
Executive can impose more severe sanctions and direct physical effects on individuals than companies can. In any
case, these technology companies wield enormous control over our lives on a daily basis. 159 It is therefore worth
exploring why governments have done so little to regulate these companies.
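
To make concrete what “machine learning algorithms based on that data” look like in practice, the following Python sketch trains a toy model on logged user activity to predict ad clicks, the basic pattern behind data-driven profitability. The features, weights, and data are synthetic assumptions for illustration, not any company’s actual system.

    # Toy sketch: predict ad clicks from user activity data (synthetic, illustrative).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Hypothetical per-user features: [pages viewed, minutes on site, past ad clicks]
    X = rng.poisson(lam=[5, 12, 2], size=(1000, 3)).astype(float)
    # Synthetic label: did the user click the ad? (more activity -> more likely)
    y = (X @ np.array([0.1, 0.05, 0.4]) + rng.normal(0, 1, 1000) > 2.0).astype(int)

    model = LogisticRegression().fit(X, y)
    # Score a new user; the platform can then decide whom to show the ad to.
    print(model.predict_proba([[8.0, 20.0, 3.0]])[0, 1])

The more user data collected, the better such predictions become, which is precisely why data collection and profitability reinforce each other.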

The factors that have led to the lack of constraints on these technology companies are markedly similar to those
that have produced the national security Executive. First, members of Congress lack sophisticated understandings
of how these companies—and the technologies that comprise their products—work. This was brought into sharp
focus when the Senate summoned Facebook CEO Mark Zuckerberg to testify about the company’s privacy policies,
data leaks, and Russian interference with the 2016 U.S. presidential election. At one point, Senator Orrin Hatch
asked Zuckerberg how Facebook managed to make money; Zuckerberg responded, “Senator, we run ads.” 160 As

Facebook Unbound? (February 25, 2019). Virginia Law Review, Vol. 105, 2019, Virginia Public Law and Legal
Theory Research Paper No. 2019-08, Available at SSRN: https://ssrn.com/abstract=3341590
156
See, e.g., How 5 Tech Giants Have Become More Like Governments Than Companies, NPR (Oct. 26, 2017)
https://www.npr.org/2017/10/26/560136311/how-5-tech-giants-have-become-more-like-governments-than-
companies [https://perma.cc/C58F-ETVD] (interview of Farhad Manjoo, a tech columnist for the New York
Times) [hereinafter Tech Giants] (“Amazon is sort of . . . getting its kind of corporate tentacles into a large
part of the economy, into shipping, and how warehouses work and robots. Things that will allow it to
dominate in the future that we’re kind of just not good at regulating at this point.”); see also Stephen L.
Carter, Too Much Power Lies in Tech Companies’ Hands, Bloomberg (Aug. 17, 2017),
https://www.bloomberg.com/opinion/articles/2017-08-17/too-much-power-lies-in-tech-companies-hands
[https://perma.cc/EM46-SAEY].
157
Farhad Manjoo, Tech’s ‘Frightful 5’ Will Dominate Digital Life for Foreseeable Future, N.Y. Times (Jan. 20,
2016), https://www.nytimes.com/2016/01/21/technology/techs-frightful-5-will-dominate-digital-life-for-
foreseeable-future.html [https://perma.cc/8NXJ-VT3L]; Tech Giants, supra note 15 (discussing the subsidiaries
that the “Frightful Five” own).
158
See, e.g., Ben Tarnoff, Weaponizing AI is coming. Are algorithmic forever wars our future?, Guardian (Oct.
11, 2018), https://www.theguardian.com/commentisfree/2018/oct/11/war-jedi-algorithmic-warfare-us-
military [https://perma.cc/3LNH-E7N2].
159
See, e.g., Rebecca MacKinnon, Consent of the Networked: The Worldwide Struggle for Internet Freedom
149–65 (2012) (describing major technology companies as “digital sovereigns”); Tech Giants, supra note 15
(discussing how Amazon, Google, Apple, Microsoft, and Facebook affect the economy, our elections, our jobs,
and what we buy; how they innovate more aggressively than the U.S. government; how they act as
gateways to many other products we use; and how they may suppress others’ innovations).
160
Nancy Scola, Zuckerberg Survived But Facebook Still Has Problems, Politico (Apr. 10, 2018),
https://www.politico.com/story/2018/04/10/zuckerberg-facebook-hearing-senate-474055
[https://perma.cc/V4JL-37JH].

Daniel Solove has written, “There may be a few in Congress with a good understanding of . . . technology, but
many lack the foggiest idea about how new technologies work.”161

Second, knowing what to regulate, in what level of detail, and at what stage in the overall development of technologies such as machine learning is simply hard. 162 Laws can easily be overtaken by events in fast-changing areas such as war fighting or technology. 163 Third, Congress fears undercutting U.S. innovation by regulating too soon, which
is not unlike Congress’s fear of deliberately reining in the Executive’s national security decisions, particularly in the
face of threats from other actors who have not chosen to self-constrain. 164 The United States seeks to out-innovate
China; members of Congress will not want to stand accused of slowing down U.S. companies that are developing
artificial intelligence, for instance, while Chinese companies press ahead. Finally, partisanship has kicked in when
Congress has tried to regulate.165

Congress has enacted some rules regulating technology: laws regulating cross-border data requests by law enforcement, 166 holding online platforms accountable if they are used to facilitate sex trafficking, 167 and updating the Foreign Intelligence Surveillance Act. 168 But Congress failed in its efforts to legislate on the use of encryption, election security, “hacking back,” and drone safety, and it has not tried to regulate facial-recognition software. 169

161
Daniel J. Solove, Fourth Amendment Codification and Professor Kerr’s Misguided Call for Judicial Deference,
74 Fordham L. Rev. 747, 771 (2005).
162
Info. Soc’y Project, Governing Machine Learning (2017), https://law.yale.edu/system/files/area/center/isp/documents/governing_machine_learning_-_final.pdf [https://perma.cc/2PFE-6HBZ] [hereinafter Governing Machine Learning].
163
Richard H. Pildes, Law and the President, 125 Harv. L. Rev. 1381, 1387 (2012) (reviewing Posner &
Vermeule, supra note 5) (summarizing Posner and Vermeule’s argument that, because technology is
constantly shifting, it is better for Presidents to make their best judgments based on the actual circumstances
then governing); Katy Steinmetz, Congress Never Wanted to Regulate Facebook. Until Now, Time (Apr. 12,
2018), http://time.com/5237432/congress-never-wanted-to-regulate-facebook-until-now/ [https://perma.cc/GF4L-3CMW] (“Congress is always playing catch-up to technology, so statutes it writes can quickly become outdated.”).
164
Klint Finley, Obama Wants the Government to Help Develop AI, Wired (Oct. 12, 2016),
https://www.wired.com/2016/10/obama-envisions-ai-new-apollo-program/ [https://perma.cc/3TEH-FX6E]
(quoting President Obama as stating, “The way I’ve been thinking about the regulatory structure as AI
emerges is that, early in a technology, a thousand flowers should bloom. And the government should add a
relatively light touch. . . . As technologies emerge and mature, then figuring out how they get incorporated
into existing regulatory structures becomes a tougher problem, and the government needs to be involved a
little bit more.”); David Shepardson & Susan Heavey, Amazon, Apple, others to testify before U.S. Senate on
data privacy September 26, Reuters (Sept. 12, 2018), https://www.reuters.com/article/us-usa-tech-
congress/amazon-apple-others-to-testify-before-u-s-senate-on-data-privacy-september-26-idUSKCN1LS25P
[https://perma.cc/G5JV-9WGW] (quoting Sen. John Thune as stating that Commerce Committee hearing
would allow tech companies to testify about “what Congress can do to promote clear privacy expectations
without hurting innovation”); see also Governing Machine Learning, supra note 21 (reflecting participants’
views that standardizing the regulation of machine learning “would stifle innovation in a nascent industry,
attempt to solve for problems that haven’t yet arisen, and potentially create barriers to entry for new
entrants”).
165
See, e.g., Paul Blumenthal, The Last Time Congress Threatened to Enact Digital Privacy Laws, It Didn’t Go So
Well, Huff. Post (July 27, 2018), https://www.huffingtonpost.com/entry/congress-digital-privacy-
laws_us_5af0c587e4b0ab5c3d68b98b [https://perma.cc/82TJ-ESVA].
166
Consolidated Appropriations Act, 2018, Pub. L. No. 115-141, Div. V, § 103 (2018) (codified at 18 U.S.C. §
2703(h)).
167
Allow States and Victims to Fight Online Sex Trafficking Act of 2017, Pub. L. No. 115-164, § 4, 132 Stat.
1253, 1254 (2018) (codified at 47 U.S.C. § 230(e)).
168
FISA Amendments Reauthorization Act of 2017, Pub. L. No. 115-118, 132 Stat. 3 (2018) (codified at 50 U.S.C. §§ 1881–1881g).
169
See, e.g., Jacob Rush, Hacking the Right to Vote, 105 Va. L. Rev. Online 67 (2019) (discussing Congress’s
failure to regulate election security); Dustin Volz, Mark Hosenball & Joseph Menn, Push for encryption law

The Executive has done little to bind Facebook and the various other types of technology companies. We thus find
ourselves confronting broadly unregulated technology actors that know and use oceans of information about us,
holding vast amounts of power over what we read, buy, watch, think, and drive. 170

US Department of Defense Establishes Joint Artificial Intelligence Center, Emphasizes Ethics

In a memorandum issued June 27, 2018, Deputy Secretary of Defense Patrick Shanahan ordered the establishment
of the Joint Artificial Intelligence Center (“JAIC”) within DoD. 171 With the creation of the JAIC, the DoD has
acknowledged that the AI “effort is a Department priority,” and one to which government contractors should pay
attention. 172

The memorandum elaborates on this priority, stating that “A new approach is required to increase the speed and agility with which we deliver AI-enabled capabilities and adapt our way of fighting.”

The memorandum lays out four steps that the JAIC will take to achieve its overarching goal and execute the 2018
Artificial Intelligence Strategy. First, it will launch “large-scale efforts to apply AI to a cluster of closely related,
urgent, joint challenges.” These National Mission Initiatives (“NMI”) “will be developed in partnership with the
Military Departments and Services, Joint Staff, CCMDs, other DoD components, and mission owners.” Second, the
JAIC will “establish a Department-wide common foundation for execution in AI that includes the tools, shared data,
reusable technologies, processes, and expertise to enable rapid delivery and Department-wide scaling of AI-enabled
capabilities.” Third, the JAIC will “strengthen partnerships, highlight critical needs, solve problems of urgent
operational significance, and adapt AI technologies” through collaboration within the government and with industry
and other strategic partners. Fourth, the JAIC will work with the Office of the Secretary of Defense (“OSD”) “to
develop a governance framework and standards for AI development and delivery.”

Regarding the fourth step in particular, the Defense Innovation Board (“DIB”) hosted a meeting on July 11, 2018
to further explain the mission and overarching goal of the JAIC and outline how the DIB will assist the JAIC in its
efforts. Given DoD’s recent experience with Google and the company’s pledge to forgo pursuit of future AI work
with DoD under Project Maven, discussed below, the DIB assisted in the development of AI principles for defense.
These principles focused on, but were not limited to, ethical development and use, humanitarian considerations, and short-term and long-term AI safety. 173

The DIB Science and Technology Subcommittee subsequently held a series of three roundtable discussions and
public listening sessions in January, March, and April of 2019. 174 The DIB introduced proposed AI principles and
voted to approve them in its October 31, 2019, quarterly public meeting. 175

The use of AI by the military is long established, but the associated risks have not been fully addressed. While
drones are used to identify and “take out” targets, there have been a number of false identifications, resulting in
wrongful deaths. 176 Beyond the risk of drones with faulty facial recognition technology bombing innocent civilians

falters despite Apple case spotlight, Reuters (May 27, 2016), https://www.reuters.com/article/us-usa-encryption-legislation-idUSKCN0YI0EM [https://perma.cc/93WR-UQB6] (discussing Congress’s failure to regulate encryption).
170
Deeks supra
171
https://admin.govexec.com/media/establishment_of_the_joint_artificial_intelligence_center_osd008412-
18_r....pdf
172
See Department of Defense. Joint Artificial Intelligence Center. November 21, 2019. DoD Tech Talk: JAIC.
http://meetings.ausa.org/autonomy/pdf/JAIC.pdf
173
https://innovation.defense.gov/ai/
174
2019 RAND Corp. study, “The Department of Defense Posture for Artificial Intelligence,”
https://www.rand.org/content/dam/rand/pubs/research_reports/RR4200/RR4229/RAND_RR4229.pdf
175
https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF
176
Bartneck C., Lütge C., Wagner A., Welsh S. (2021) Responsibility and Liability in the Case of AI Systems. In: An
Introduction to Ethics in Robotics and AI. SpringerBriefs in Ethics. Springer, Cham. https://doi.org/10.1007/978-3-030-

and facilities, there is also the very real risk that AI could be used to hack into military bases to steal information or
potentially activate weapons remotely. 177

In December 2020, the Pentagon, the State Department, the Department of Homeland Security, and nuclear labs were hacked; 178 although the extent of the stolen data has not yet been determined, it is remembered as one of the most significant data breaches in U.S. history. The combination of drones, facial recognition AI, and deep fakes could lead to an assassination if an enemy combatant’s image is replaced with an image of a world leader through the hacking of the device. Intentional manipulation using AI can cause great harm both inside and outside of military use. Cambridge Analytica, for example, was able to gain access to data from 87 million Facebook profiles and target vulnerable voters with false news feeds about Hillary Clinton, which may have contributed to her election loss. 179

The risks AI presents to both our democracy and security cannot be overstated. AI can now impersonate humans, 180 learn from large data sets, identify faces, 181 make predictions about human behavior, 182 and suggest products based on online activities. 183 However, due to potential errors in identification and prediction, the incorporation of human prejudice into data sets, filter bubbles, and the lessening of human agency that results from the irresponsible adoption of AI, there are risks of both intentional and unintentional harm. 184
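
How human prejudice in a data set propagates into a model’s predictions can be shown with a toy example. The Python sketch below uses entirely synthetic data and hypothetical variable names: a model trained on historically biased approval decisions reproduces that bias for equally qualified individuals.

    # Toy sketch: bias in training data reappears in predictions (synthetic data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000
    group = rng.integers(0, 2, n)   # a protected attribute (0 or 1)
    skill = rng.normal(0, 1, n)     # the trait decisions should be based on
    # Historical labels encode prejudice: group 1 was approved less often
    # than equally skilled members of group 0.
    approved = skill + np.where(group == 1, -0.8, 0.0) + rng.normal(0, 0.5, n) > 0

    X = np.column_stack([skill, group]).astype(float)
    model = LogisticRegression().fit(X, approved)

    # Two applicants with identical skill receive different scores purely
    # because the model learned the historical bias from the data.
    print(model.predict_proba([[0.0, 0.0]])[0, 1])  # group 0
    print(model.predict_proba([[0.0, 1.0]])[0, 1])  # group 1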

The application of AI in four major areas could benefit large populations: (1) health and hunger, (2) education, (3) security and justice, and (4) equality and inclusion. Millions of people could benefit, for example, from AI-enabled wearable devices capable of acting as an early warning system. 185
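
As a hedged illustration of how such an early warning system might work at its simplest, the Python sketch below flags wearable sensor readings that deviate sharply from the recent baseline. The window size, threshold, and simulated heart-rate data are assumptions for illustration, not a clinical algorithm.

    # Toy sketch: rolling z-score early warning on simulated heart-rate data.
    import numpy as np

    def early_warning(readings, window=30, threshold=3.0):
        """Flag readings that deviate sharply from the recent baseline."""
        alerts = []
        for t in range(window, len(readings)):
            baseline = readings[t - window:t]
            z = (readings[t] - baseline.mean()) / (baseline.std() + 1e-9)
            if abs(z) > threshold:
                alerts.append(t)
        return alerts

    rng = np.random.default_rng(2)
    heart_rate = rng.normal(70, 2, 200)  # simulated resting heart rate
    heart_rate[100] = 140                # one abnormal spike
    print(early_warning(heart_rate))     # flags the spike at index 100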

The militarization of artificial intelligence is well under way, and leading military powers have been investing large resources in emerging technologies. The emerging reality could feature swarms of drones that may overwhelm aircraft carriers, cyberattacks on critical infrastructure, AI-guided nuclear weapons, and hypersonic missiles that automatically launch when satellite sensors detect ominous actions by adversaries. It may seem to be a dystopian future, but some of these capabilities are with us now. And the world’s liberal democracies are struggling with the moral and ethical implications of fully autonomous, lethal weapon systems. 186 Calls for AI governance at the

51110-4_5; See also Drone Killings, CTR. FOR CONST. RTS., https://ccrjustice.org/home/what-we-do/issues/drone-
killings (2021).
177
Michael T. Klare, Autonomous Weapons Systems and the Laws of War, ARMS CONTROL ASSOC. (Mar. 2019),
https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war.
178
David E. Sanger et al., Scope of Russian Hacking Becomes Clear: Multiple U.S. Agencies Were Hit, N.Y. TIMES (Dec. 14, 2020), https://www.nytimes.com/2020/12/14/us/politics/russia-hack-nsa-homeland-security-pentagon.html.
179
Jarno Duursma, 12 Risks of Artificial Intelligence, JARNO, https://jarnoduursma.nl/blog/the-risks-of-artificial-
intelligence/
180
Ian Sample, What are Deepfakes – and How Can You Spot Them?, GUARDIAN (Jan. 13, 2020, 5:00 PM),
https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.
181
Matthew Hutson, How researchers are teaching AI to learn like a child, SCIENCE (May 24, 2018, 10:20 AM),
https://www.sciencemag.org/news/2018/05/how-researchers-are-teaching-ai-learn-child.
182
Jeremy Fain, How Deep Learning Can Help Predict Human Behavior, FORBES (Apr. 30, 2018, 9:00 AM),
https://www.forbes.com/sites/forbesagencycouncil/2018/04/30/how-deep-learning-can-help-predict-human-behavior/?sh=7617e4c55547.
183
Niccolo Mejia, How AI-Enabled Product Recommendations Work – A Brief https://emerj.com/ai-sector-overviews/ai-
enabled-product-recommendations/
184
Houser, Kimberly, Artificial Intelligence and The Struggle Between Good and Evil (January 25, 2021). Washburn Law
Journal, Vol. 60, Available at SSRN: https://ssrn.com/abstract=3795427
185
Michael Chui et al., Notes from The AI Frontier: Applying AI for Social Good, McKinsey Global Institute (Dec. 2018),
https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/applying%20artificial%20
intelligence%20for%20social%20good/mgi-applying-ai-for-social-good-discussion-paper-dec-2018.ashx.
186
Darrell M. West and John R. Allen, Turning Point: Policymaking in the Era of Artificial Intelligence, Brookings
Institution Press (Jul 28, 2020) (Turning Point examines the competition among the US, China, Russia, and other nations such as Iran and North Korea, including the (overly) dramatic 2017 claim by Russian President Putin: “Artificial intelligence is the future, not only for Russia, but for all humankind…. Whoever becomes the

international level are expected to increase and the United Nations is well positioned to offer a commonly agreed
platform for prevention, foresight, and cooperation among states and other stakeholders to address the impact of
new technologies. 187 There are some international entities that already are working on these issues. For example,
the Global Partnership on Artificial Intelligence 188 is a group of more than a dozen democratic nations that have
agreed to “support the responsible and human-centric development and use of AI in a manner consistent with
human rights, fundamental freedoms, and our shared democratic values.” This community of democracies is run
by the Organization for Economic Cooperation and Development and features high-level convenings, research, and
technical assistance.

A telling example in the area of strategic studies is the work of the Group of Governmental Experts (GGE) on
lethal autonomous weapons systems, under the UN Convention on Certain Conventional Weapons (CCW), the most
important multilateral discussion on AI, peace and security today. 189

AI will have a profound impact upon world politics across the board, as much as upon society generally. Converging
technologies that often overlap and affect each other have become drivers of innovation, as in the case of
nanotechnology, biotechnology, information technology, and cognitive science (NBIC). Usage of AI in business,
government, and everyday life has been developing at breakneck pace, fueled by powerful hardware capacity,
abundant data, and online training of self-learning algorithms. As the discipline of International Relations (IR) takes
on this expanding research field and its systemic implications, a coming AI race has frequently been portrayed as
a perilous trend capable of reshaping the character of international security. 190

An overview of state initiatives, strategies, and related governmental investments in AI reported that many countries had active national AI plans and, not surprisingly, almost all of them were developed countries. 191 Indeed, there is
no shortage of analyses focusing upon great-power competition and an epic struggle or race for AI global
supremacy, picturing prominently the United States and China, followed closely by military powers and wealthy

leader in this space will become the ruler of the world.” The authors predict that ‘AI will dramatically increase the speed of war’. They briefly examine AI’s necessity to provide the shoot-down capability aimed at Russia’s offensive hypersonic missiles, a technology which Russia is aggressively fielding to offset U.S. war-making capabilities. The book also addresses Russian use of AI in ‘strategic influence campaigns and micro targeting’ of messaging, and efforts to flood opponents’ systems with ‘deep fakes’ built with AI, as well as Russian efforts to disrupt GPS data systems that many highly accurate and lethal U.S. weapons systems depend upon. The potential growth in the near time horizon to build and disseminate Lethal Autonomous Weapons Systems (LAWS), such as perimeter robots with cannon and machine guns and no on-site human operator, will raise profoundly difficult ethical and human rights issues by removing human judgment from the application of deadly force. The book addresses AI’s increasing role in creating swarming ‘kamikaze’ kill-drone attacks, and, of course, the AI-enhanced systems needed to defeat the swarm. The book was written before the recent use of such swarm drone attacks against Armenian forces by Turkish-equipped Azerbaijani forces in Nagorno-Karabakh.)
187
John R. Allen and Darrell M. West, It is time to negotiate global treaties on artificial intelligence, Brookings Institution (March 24, 2021).
188
Launched in June 2020, GPAI is the fruition of an idea developed within the G7, under the Canadian and French
presidencies. GPAI’s 15 founding members are Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New
Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom, the United States and the European Union.
They were joined by Brazil, the Netherlands, Poland and Spain in December 2020. https://gpai.ai/
189
Garcia, Eugenio V, The Militarization of Artificial Intelligence: A Wake-Up Call for the Global South (September 10,
2019). Available at SSRN: https://ssrn.com/abstract=3452323 or http://dx.doi.org/10.2139/ssrn.3452323.
190
Christian Brose, War’s sci-fi future: the new revolution in military affairs, Foreign Affairs, vol. 98, n. 3, May-June
2019, p. 122-134; Kenneth Payne, Artificial intelligence: a revolution in strategic affairs? Survival, vol. 60, Issue 5,
2018, p. 7-32; Paige Gasser, et al. Assessing the strategic effects of artificial intelligence, Center for Global Security
Research, Lawrence Livermore National Laboratory, and Technology for Global Security, 2018,
https://www.tech4gs.org/assessing-the-strategic-effects-of-artificial-intelligence.html (access 22 April 2019); Jeremy
Rabkin and John Yoo, Striking power: how cyber, robots, and space weapons change the rules of war, New York:
Encounter Books, 2017.
191
Thomas A. Campbell, Artificial intelligence: an overview of state initiatives, UNICRI and FutureGrasp, 2019, cf.
Executive Summary, http://www.unicri.it/in_focus/files/Report_AIAn_Overview_of_State_Initiatives_FutureGrasp_7-
23-19.pdf (access 29 July 2019).

nations (Russia, G7 countries, Israel, and others), and a few second-tier, aspiring tech players, which may someday occupy a special place in the AI landscape. 192

Scholars note, however, that the Global South is clearly underrepresented in this debate, with many areas of Africa,
Asia, Latin America and the Caribbean completely absent in terms of scholars, politicians, and policymakers
engaged in this vital conversation. And yet, when it comes to assessing global risks to peace and security and identifying
where new AI weapons are likely to be deployed in the first place, both researchers and practitioners in the
developing world have reasons to be concerned. It is no wonder that proponents of a Global IR have been urging
for greater participation from actors of the non-Western world as a means to ‘bring the Rest in’. 193

The militarization of AI is well under way. Strategists and military advisers often claim that the militarization of AI is irreversible. 194

As occurred with previous general-purpose technologies, such as electricity or the combustion engine, armed forces will seek to incorporate AI-driven capabilities in their organizational structure, operations,
and weaponry. In reality, this is already the case in many missile and rocket defense systems, anti-personnel sentry
weapons, loitering munitions, and combat air, sea, and ground vehicles.

A Stockholm International Peace Research Institute (“SIPRI”) report, released in 2017, mapped the development
of autonomy in weapons systems to conclude that ‘autonomy is already used in a wide array of tasks in weapon
systems, including many connected to the use of force’, such as supporting target identification, tracking,
prioritization, and selection of targets in certain cases. 195 In 2019, another study corroborated these findings by
surveying current military research and development in autonomous weapons conducted in seven countries (United
States, China, Russia, United Kingdom, France, Israel, and South Korea), which stand out among states most
heavily involved in AI development in the defense industry. 196

In a move largely unnoticed by the general public, Russia sent to the International Space Station, in August 2019,
the Skybot F-850 humanoid robot, also called FEDOR (Final Experimental Demonstration Object Research), which
will allegedly undergo tests for missions in outer space, such as helping in construction of bases on the moon.
Remote-controlled by a human operator wearing an exoskeleton, this is the same robot depicted in 2017 in online
videos as being able to walk, crawl, lift weights, use tools, drive a car, and shoot with two guns simultaneously. 197

AI military applications go well beyond gun-shooting. Their wide-ranging scope will potentially unsettle all five
domains of warfare (land, sea, air, outer space, and cyberspace) and multiple dimensions pertaining to command,
control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR). Prompted by promises

192
Kai-Fu Lee, AI superpowers: China, Silicon Valley, and the new world order, Boston: Houghton Mifflin Harcourt, 2018;
Daniel Wagner and Keith Furst, AI supremacy: winning in the era of machine learning, Scotts Valley, CA: CreateSpace, 2018; Michael Horowitz, Artificial intelligence, international competition, and the balance of power, Texas National Security Review, Austin: University of Texas, 2018, https://tnsr.org/2018/05/artificial-intelligence-international-competition-and-the-balance-of-power/ (access 21 November 2018); Patrick Tucker et al. The race for AI: the return of
great power competition is spurring the quest to develop artificial intelligence for military purposes, Defense One
ebook, March 2018, https://www.defenseone.com/assets/race-ai/portal/ (access 23 April 2019).
193
Garcia, Eugenio V, The Militarization of Artificial Intelligence: A Wake-Up Call for the Global South at 2 (September
10, 2019). Available at SSRN: https://ssrn.com/abstract=3452323 or http://dx.doi.org/10.2139/ssrn.3452323
194
Michael Kolton, The inevitable militarization of artificial intelligence, The Cyber Defense Review, 8 February 2016
https://cyberdefensereview.army.mil/CDR-Content/Articles/Article-View/Article/1136088/the-inevitable-militarization-
of-artificial-intelligence/
195
Vincent Boulanin and Maaike Verbruggen, Mapping the development of autonomy in weapons systems,
Stockholm: SIPRI, 2017, p. 115, https://www.sipri.org/sites/default/files/2017-
11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf
196
Frank Slijper et al. State of AI: artificial intelligence, the military and increasingly autonomous weapons, Utrecht: PAX,
April 2019, https://www.paxvoorvrede.nl/media/files/state-of-artificial-intelligence--pax-report.pdf
197
Humanoid robot ‘commands’ Russian rocket test flight, CBS, 22 August 2019, https://www.cbsnews.com/news/humanoid-robot-passenger-russians-launch-key-rocket-test-flight-skybot-f850-fedor/

of more efficient data analysis, better performance of warfare systems, and reduced costs, great powers are spending large amounts of resources in pursuit of a strategic security advantage, in preparation for this impending future. 198

By far the world’s biggest power in terms of military expenditure and global firepower, the United States has been planning and investing to stay ahead of strategic competitors and secure its dominant place at the top. The
Department of Defense launched in 2018 an Artificial Intelligence Strategy to articulate its approach and
methodology, to be implemented by the newly-created Joint Artificial Intelligence Center, for accelerating the
adoption of AI enabled capabilities ‘to strengthen our military, increase the effectiveness and efficiency of our
operations, and enhance the security of the nation’. 199

One of the goals of the Pentagon is to forge partnerships with the dynamic private sector, Silicon Valley companies,
and academic institutions leading in technological research, not always keen to be associated in the public eye with
military projects. The Joint Enterprise Defense Infrastructure (JEDI), a multibillion-dollar contract meant to do
precisely that, will amass human, financial, and material resources to build a single cloud computing architecture
across military branches to connect US forces wherever they are in the world. Similarly ambitious, Project Maven
(Algorithmic Warfare Cross-Functional Team) will further develop software and related systems to automate data
collection globally by using machine learning to scan drone video footage, thus expediting the analytical process
to swiftly identify valuable targets. In fact, Maven’s software has reportedly already been used in ‘as many as six
combat locations in the Middle East and Africa’. 200
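
Maven’s actual software is not public, but the general pattern it is described as following, scanning video frame by frame with a machine learning detector so that analysts review only flagged footage, can be sketched generically. In the Python sketch below, detect_objects is a hypothetical placeholder for a pretrained detector; only the frame-sampling loop uses a real library (OpenCV).

    # Generic sketch of ML-assisted video triage (illustrative; not Maven's code).
    import cv2  # OpenCV, used here only to decode video frames

    def detect_objects(frame):
        # Hypothetical stand-in: a real pipeline would run a trained neural
        # network here and return labeled bounding boxes.
        return []

    def scan_footage(path, every_n_frames=30):
        """Sample frames from a video file and flag those containing detections."""
        capture = cv2.VideoCapture(path)
        flagged, index = [], 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % every_n_frames == 0:
                detections = detect_objects(frame)
                if detections:
                    flagged.append((index, detections))
            index += 1
        capture.release()
        return flagged

The value claimed for such systems lies in this triage step: hours of footage reduce to a short list of flagged frames for human review.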

The Defense Advanced Research Projects Agency (DARPA) will allocate up to US$ 2 billion over the next five years
in AI weapons research. DARPA has already proved instrumental to spur cutting-edge research that produced
tangible results, such as the antisubmarine ship Sea Hunter, which has been tested at sea since 2016. The Sea
Hunter is thought to be the first of a whole new class of ships with trans-oceanic range, designed to autonomously
scan and detect enemy submarines, as well as carry out other military tasks with no crew on board. For now, the
experimental model does not carry armaments. 201

Although still not on a par with the United States, China is also restructuring its armed forces and vigorously
funding more projects on research and development of AI capabilities, taking advantage of the existing ‘military-
civil fusion’ that in practice makes governmental agencies and private companies act in close coordination. This
concerted effort is in line with the anticipated ‘revolution in military affairs’, to which Chinese strategists wish to respond by scaling up resources to prepare the military for the ‘intelligentization’ of warfare. There are plans to
boost autonomy for hazardous expeditionary capabilities, not only in outer space, but also in deep sea exploration
and polar missions both in the Arctic and in Antarctica. 202

198
‘AI makes it possible to analyze dynamic battlefield conditions in real time and strike quickly and optimally while
minimizing risks to one’s own forces.’ AI and the military: forever altering strategic stability, T4GS Reports,
Technology for Global Security and Center for Global Security Research, 13 February 2019, p. 7,
http://www.tech4gs.org/ai-and-humandecision-making.html
199
Summary of the 2018 Department of Defense Artificial Intelligence Strategy: harnessing AI to advance our security and
prosperity, Washington, DC. 2018, p. 7-8, https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/Summary-of-
DoD-AI-Strategy.pdf
200
Weaponised AI is coming, are algorithmic forever wars our future? The Guardian, 11 October 2018,
https://www.theguardian.com/commentisfree/2018/oct/11/war-jedi-algorithmic-warfare-us-military
201
The US Navy also has a demonstration aerial combat vehicle, the Northrop Grumman X-47B, which had its first flight
in 2011 and can autonomously take off and land on aircraft carriers. Following changes in the program’s
concept, Boeing won in 2018 a contract to develop four aerial refueling drones (MQ-25A Stingray) by 2024. Navy
picks Boeing to build MQ-25A Stingray carrier-based drone, USNI News, 30 August 2018,
https://news.usni.org/2018/08/30/navy-picks-boeing-build-mq-25a-stingray-carrier-based-drone
202
Elsa Kania, Chinese military innovation in artificial intelligence, Washington, DC: Testimony before the US-China
Economic and Security Review Commission, CNAS, June 2019, p. 2-5 and 18-19,
https://www.cnas.org/publications/congressional-testimony/chinese-military-innovation-in-artificial-intelligence

The People’s Liberation Army ‘will likely leverage AI to enhance its future capabilities, including in intelligent and
autonomous unmanned systems; AI-enabled data fusion, information processing, and intelligence analysis; war-
gaming, simulation, and training; defense, offense, and command in information warfare; and intelligent support
to command decision-making’. 203

Chinese leaders have made it clear that they are fully aware of the challenges to overcome and will mobilize all
means regarded as necessary to be at the forefront in AI technology. Despite concerns about a possible arms race on the horizon, they mostly see ‘increased military usage of AI as inevitable’ and are ‘aggressively pursuing it’. In 2018,
China’s Ministry of National Defense established to that effect two new Beijing-based research organizations
focused upon AI and autonomous systems under the National University of Defense Technology (NUDT). 204

Russia may not have the same economic leverage the United States and China possess to finance high-tech
ventures and start-ups, but its niche expertise and overall military capabilities must be duly reckoned with. As two
scholars remarked, notwithstanding a ‘smaller private sector innovation ecosystem’, the Russian Defense Ministry
has been committing significant resources in AI, machine learning, and robotics to help their forces perform military
tasks that range from logistics to combat missions, chiefly in urban terrain, maximizing effectiveness by facilitating
navigation and maneuver, improving situational awareness, and enhancing precise targeting. 205

In practical terms, Russia tested in Syria many state-of-the-art weapons using emerging technologies, some of
them for the first time, including small autonomous vehicles for intelligence, surveillance, and reconnaissance, in
addition to a remote-controlled mine-clearing vehicle. Other existing projects include a stealth heavy combat aerial
vehicle, the Okhotnik-B, a sixth-generation aircraft under development by Sukhoi, and designs for more advanced
main battle tanks. As Sychev noted, ‘the fire control system of the next generation Russian T-14 tank, based upon
the Armata universal heavy crawler platform, will be capable of autonomously detecting targets and bombarding
them until they are completely destroyed’. 206

Israel also has autonomous or semi-autonomous weapons systems in operation, notably the Iron Dome, the anti-
missile defense system employed by the Israeli Defense Forces to intercept short-range projectiles, particularly in
areas adjacent to the Gaza Strip. The Protector and the Silver Marlin are both autonomous surface vehicles
manufactured by an Israeli company for maritime patrol missions. Israel has also been developing the Carmel
Program to upgrade its ground forces and put into operation fast-moving armored vehicles, equipped with multiple
sensors and AI technology to detect threats and improve target engagement, weapon system management, and
automatic maneuvering. 207

While the list of examples could well be extended, depending on the greater or lesser level of autonomy of each weapons system, most states in the Global South are far from any comparable build-up. This is true for non-military applications as well.

203
Elsa Kania, Battlefield singularity: artificial intelligence, military revolution, and China’s future military power,
Washington, DC: CNAS, November 2017, p. 4, https://www.cnas.org/publications/reports/battlefield-singularity-
artificial-intelligence-military-revolution-and-chinas-future-military-power
204
Gregory C. Allen, Understanding China’s AI strategy: clues to Chinese strategic thinking on artificial intelligence
and national security, Washington, DC: CNAS, February 2019, p. 5-7,
https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy
205
‘Concurrently, the Russian defense sector is actually working on an unmanned ground vehicle designed specifically to withstand tough urban combat conditions. The project called Shturm (Storm) is based on the T-72 tank chassis and features specific defensive technologies and offensive armaments for a city fight’. Margarita Konaev and Samuel Bendett, Russian AI-enabled combat: coming to a city near you? War on the Rocks, commentary, Texas National Security Review, 31 July 2019, https://warontherocks.com/2019/07/russian-ai-enabled-combat-coming-to-a-city-near-you
206
Vasily Sychev, The threat of killer robots, The UNESCO Courier, n. 3, Artificial intelligence: the promises and the
threats, Paris, 2018, p. 28, https://unesdoc.unesco.org/ark:/48223/pf0000265211
207
Israel unveils new ground forces concept: fast & very high tech, Breaking Defense, 5 August 2019,
https://breakingdefense.com/2019/08/israel-unveils-new-ground-forces-concept-fast-very-high-tech

In the long run, AI-driven automation in rich countries of repetitive, labor-intensive jobs can arguably displace the traditional comparative advantages of developing countries, such as a cheap workforce and raw materials.
The level of AI readiness will have a dramatic effect upon competitiveness and is likely to be a critical factor in
investments and GDP growth. The widening gap in prosperity and wealth would mostly affect those countries
unable to develop digital skills and infrastructure to reap the rewards of AI opportunities in business performance,
productivity, and innovation. 208 If not long ago global inequality was gauged in terms of have and have-not
countries, a new divide could be emerging between AI-ready and not-ready. 209

The militarization of AI is unstoppable, as is AI governance: technical standards, performance metrics, norms, policies, institutions, and other governance tools will be adopted sooner or later. 210 One should expect more calls
for domestic legislation on civilian and commercial applications in many countries, in view of the all-encompassing
legal, ethical, and societal implications of these technologies. In international affairs, voluntary, soft law, or binding
regulations can vary from confidence-building measures, gentlemen’s agreements, and codes of conduct, including
no first-use policies, to multilateral political commitments, regimes, normative mechanisms, and formal
international treaties.

Why is AI governance needed? To many, a do-nothing policy is hardly an option. In a normative vacuum, the
practice of states may push for a tacit acceptance of what is considered ‘appropriate’ from an exclusively military
point of view, regardless of considerations based upon other views on law and ethics.

Take, for that matter, a scenario in which nothing is done and AI gets weaponized anyway. Perceived hostility
increases distrust among great powers and even more investments are channeled to defense budgets. Blaming
each other will not help, since unfettered and armed AI is available in whatever form and shape to friend and foe
alike. The logic of confrontation turns into a self-fulfilling prophecy. If left unchecked, the holy grail of this arms
race may end up becoming a relentless quest for artificial general intelligence, a challenging prospect for the future
of humanity, which is already raising fears in some quarters of an existential risk looming large. 211

AI risk, safety and security may still seem disconnected from the day-by-day reality of many developing countries
struggling with more urgent problems related to poverty, hunger, violence, underdevelopment, or environmental
degradation, to name just a few. Even so, this detachment will not insulate them from consequences, unintended
or not, of damaging situations originated elsewhere, as happens to be the case of climate change and vulnerable
small islands facing extreme flooding. In armed conflict, disruptive changes can alter the correlation of forces very
quickly. Even small improvements in speed and accuracy could purportedly result in disproportionate tactical
advantages on the ground. Strategic gains obtained through AI applications are expected to be unevenly distributed
and primarily favor those countries ahead in research and development. Catching up may prove too hard to achieve
or come too late to be of any serious significance. 212

The JAIC Rolls Out DoD’s Ethical Principles for Artificial Intelligence

On February 24, 2020, following Secretary of Defense Mark Esper’s call on the private sector to work with the
Department of Defense (DoD) to develop principles for using Artificial Intelligence (AI) in a “lawful and ethical
manner,” the DoD announced its adoption of ethical principles for AI. The DoD Joint Artificial Intelligence Center

208
Cf. Digital economy report 2019, UNCTAD, 2019, https://unctad.org/en (access 6 September 2019); James Manyika
and Jacques Bughin, The promise and challenge of the age of artificial intelligence, McKinsey Global Institute, 2018, p.
4, https://www.mckinsey.com/featured-insights/artificial-intelligence
209
Garcia, Eugenio V, The Militarization of Artificial Intelligence: A Wake-Up Call for the Global South (September 10,
2019). Available at SSRN: https://ssrn.com/abstract=3452323 or http://dx.doi.org/10.2139/ssrn.3452323
210
Ryan Calo, Artificial intelligence policy: a primer and roadmap, 8 August
2017,https://www.ssrn.com/abstract=3015350
211
For an overview of ongoing AGI projects, cf. Seth D. Baum, A survey of artificial general intelligence projects for ethics,
risk, and policy, Global Catastrophic Risk Institute, Working Paper 17-1, November
2017,https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741
212
Garcia, Eugenio V, The Militarization of Artificial Intelligence: A Wake-Up Call for the Global South (September 10,
2019). Available at SSRN: https://ssrn.com/abstract=3452323 or http://dx.doi.org/10.2139/ssrn.3452323

(JAIC) is tasked with coordinating the implementation of these ethical principles for the Department. The ethical principles apply to both combat and non-combat functions and address five major areas: (1) responsibility; (2) equitability; (3) traceability; (4) reliability; and (5) governability. 213

The Department of Defense adopted its Ethical Principles for Artificial Intelligence in February 2020, a first for any military organization. These principles build on the foundational work performed by the Defense Innovation Board and are tied directly to one of the pillars of the DoD AI Strategy: leading in military ethics and safety. 214 The Joint Artificial Intelligence Center (“the JAIC”) serves as the Department’s lead for coordinating the oversight and implementation of these principles.

Alka Patel, head of AI Ethics Policy for the JAIC, focuses on how to operationalize the five DoD AI Ethics Principles
(Responsible, Equitable, Traceable, Reliable and Governable) and put them into practice in the design, development,
deployment, and use of AI-enabled capabilities. To operationalize these principles throughout the DoD, the JAIC is
turning to Responsible AI – an enterprise-wide framework that provides the DoD workforce and the American
public the confidence that DoD AI-enabled systems will be safe and reliable, and will adhere to ethical standards.

Getting this done is not a simple exercise of writing and checking a “to do” list. “When we think about
operationalizing these principles, we must do it from the lens of a broader Responsible AI framework,” says Patel.
“Many of the five principles are rooted in sound engineering practices across the AI product lifecycle. However, we
must also look at operating structures and organizational culture, such as increasing ‘Responsible AI’ literacy across
the workforce and governance structures for escalation and decisioning.” 215

On January 1, 2021, the 116th Congress enacted the Fiscal Year 2021 National Defense Authorization Act (NDAA), which established, among other things, the National Artificial Intelligence Initiative Act of 2020 216 and a program to award financial assistance to National Artificial Intelligence Research Institutes, 217 and provided acquisition authority to the Joint Artificial Intelligence Center (JAIC). The NDAA’s focus on artificial intelligence (AI) is yet another demonstration of the Federal Government’s prioritization of, and commitment to, ensuring American dominance in AI, among other emerging technologies. 218

The Initiative endeavors to ensure that, for the next ten years, the Federal Government focuses on using
“trustworthy” AI systems; preparing the U.S. workforce for the use of AI systems across all sectors of the economy
and society; and coordinating AI research, development, and demonstration activities amongst the civilian, defense,
and intelligence agencies and communities. To carry out this purpose, the Initiative highlights the need for grants
and cooperative agreements for AI research and development, collaboration with industry and other diverse
stakeholders, and leveraging existing federal investments to advance the Initiative’s objectives. In support of these
activities, the Initiative creates a National Artificial Intelligence Initiative Office (NAIIO). The NAIIO will serve as
the point of contact for federal AI activities and aims to regularly work with industry and the public.

213
The JAIC, A Closer Look: The Department of Defense AI Ethical Principles (Feb. 24, 2020), https://www.ai.mil/blog_02_24_20-dod-ai_principles.html; see also Lauren C. Williams, JAIC to Expand Ethics Team, FCW and Defense Systems (Aug. 12, 2020) (The DOD adopted five ethics principles in February 2020 and hired Alka Patel as the JAIC's chief ethicist. The JAIC also launched a "Responsible AI Champions" pilot earlier this year to train personnel across disciplines on the ethical use of AI.)
214
https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF
215
JAIC Public Affairs, The AI Ethics Journey Will Hit New Heights in 2021, (Jan. 5, 2021)
https://www.ai.mil/blog_01_05_21-the_ai_ethics_journey_will_hit_new_heights_in_2021.html
216
Division E, Title LI, §§ 5101-5106 (hereinafter “the Initiative”).
217
Division E, Title LII, § 5201.
218
Jonathan M. Baker, Adelicia R. Cliffe, Kate M. Growley, CIPP/G, CIPP/US, Laura J. Mitchell Baker & Michelle
Coleman, Advancing America’s Dominance in AI: The 2021 National Defense Authorization Act’s AI
Developments, (Jan, 14, 2021) https://www.governmentcontractslegalforum.com/2021/01/articles/legal-
developments/advancing-americas-dominance-in-ai-the-2021-national-defense-authorization-acts-ai-
developments/

The Initiative also created an Interagency Committee that will coordinate federal programs and activities and
develop a strategic plan with goals, priorities, and metrics to evaluate how federal agencies will carry out the
Initiative. The Interagency Committee will expect federal agencies to: (1) prioritize federal leadership and
investment in AI research, development, and demonstration; (2) support interdisciplinary AI research, development,
demonstration, and education with long-term funding; (3) support research and activities focused on AI ethical,
legal, environmental, safety, security, bias, and other societal issues that the use of AI may implicate; (4) ensure
the availability of “curated, standardized, secure, representative, aggregate, and privacy-protected data sets for
artificial intelligence research and development;” (5) ensure AI research and development has the necessary
computing, networking, and data facilities; (6) coordinate and support both federal AI education and work force
training activities; and (7) support and coordinate the network of AI research institutes established in Section
5201(b) of the NDAA.

Similarly, the Initiative establishes the National Artificial Intelligence Advisory Committee (NAIAC). The NAIAC will
consist of individuals from industry, academic institutions, nonprofit and civil societies, and federal laboratories;
these individuals will be appointed to the NAIAC by the Secretary of Commerce. The NAIAC will advise the President
and the NAIIO on several items, including providing recommendations on the United States’ competitiveness in AI,
progress being made through the Initiative, and regulatory or nonregulatory oversight of AI systems.

In addition, the Initiative calls for: (1) the establishment of a Subcommittee on Artificial Intelligence and Law Enforcement; (2) a study on the impact of AI on the workforce; and (3) the establishment of a task force to address
the feasibility of establishing and maintaining a National Artificial Intelligence Research Resource. The
Subcommittee on Artificial Intelligence and Law Enforcement will provide advice on: (a) bias in the use of AI
systems in law enforcement; (b) law enforcement’s access to data and the security parameters for that data; (c)
the ability for the Federal Government and industry to take advantage of AI systems for security or law enforcement
purposes; and (d) legal standards to ensure that the use of AI systems is consistent with privacy, civil rights and
liberties, and disability laws and rights.

The NDAA also established a program that would allow the Secretaries of Energy and Commerce, and the Director
of the National Science Foundation, to award financial assistance to the National Artificial Intelligence Research
Institutes program. Part of the National Science Foundation, this program enables longer-term research and U.S.
leadership in AI through the creation of AI Research Institutes, which are focused on a particular economic or
social sector and either address ethical, societal, safety, and security implications for the use of AI in that sector or
focus on cross-cutting challenges for AI systems. There are additional requirements for AI Research Institutes,
including, for example, that they must have partnerships across public and private organizations and have the
ability to create an innovation ecosystem that can translate the Institute’s research into applications and products.
Among other things, the AI Research Institutes may use the financial assistance for developing testbeds for AI
systems; managing and making curated, standardized, secure, and privacy protected data sets from the public and
private sectors available for purposes of testing and training AI systems; and performing research and providing
education activities that involve AI systems to solve social, economic, health, scientific, and national security
challenges. Financial assistance may be provided for an initial five-year period, with a potential five-year extension
based on a merit review. The NDAA does not provide specific details on how to seek assistance, only that
applications should be submitted to an agency head “at such time, in such manner, and containing such information
as the agency head may require.”

Last, the NDAA included acquisition authority for the JAIC. Specifically, the NDAA grants the JAIC Director
acquisition authority of up to $75 million for new contracts for each year through FY2025. To support the JAIC
Director, the NDAA requires the Department of Defense to provide the JAIC with full-time personnel to execute
acquisitions and support program management, cost analysis, and other essential procurement functions.

These are welcome developments in the world of AI. The NDAA affords industry a number of opportunities to get
involved in shaping the Federal Government’s use of AI systems, whether through participating in Committees,
influencing the regulatory and/or nonregulatory schemes that may be applied to public and private use of AI
systems, or researching and developing AI systems with the support of federal financial assistance.

On May 26, 2021, U.S. Deputy Defense Secretary Kathleen Hicks, in a department-wide memo, enumerated foundational tenets for responsible AI, reaffirmed the ethical AI principles the department adopted last year, and directed the JAIC director, Lieutenant General Michael S. Groen, 219 to begin work on four activities for developing a
responsible AI ecosystem. The five ethical AI principles are responsible, equitable, traceable, reliable, and
governable.220

The Joint Artificial Intelligence Center will lead implementation of responsible AI across the Defense Department,
according to the new directive. 221 “As the DoD embraces artificial intelligence (AI), it is imperative that we adopt
responsible behavior, processes, and outcomes in a manner that reflects the Department's commitment to its
ethical principles, including the protection of privacy and civil liberties,” Hicks said in the memo, which was
announced June 1, 2021. “A trusted ecosystem not only enhances our military capabilities, but also builds
confidence with end-users, warfighters, and the American public.”

Hicks assigned the JAIC director to coordinate responsible AI through a working council, which must produce a
strategy and implementation pathway, create a talent management framework, and report on how responsible AI
can be integrated into acquisitions. A list of designated representatives from the military departments, Joint Staff
and various other DOD offices who will serve on the working council was due 14 days after the memo was signed
on May 26, 2021.

On top of the five ethical AI principles—responsible, equitable, traceable, reliable, governable—Hicks added six
tenets to guide implementation of responsible AI, or RAI. 222 The guidance requires disciplined governance,
warfighter trust, a systems engineering and risk management approach to AI acquisitions throughout the product
lifecycle, incorporation of RAI principles in requirements, a “robust national and global RAI ecosystem,” and a
workforce educated on RAI. 223

According to a January 2021 JAIC blog post, the center has already begun work around responsible AI. The JAIC
already launched a DOD-wide Responsible AI Subcommittee in March 2020, the post notes, and “this diverse group
of approximately 50 individuals representing all major components of the DoD has been meeting monthly
throughout the year on a variety of efforts associated with policy and governance processes.”

The post also indicated the JAIC has been working on integrating responsible AI into solicitations, as well as looking for ways to build responsible AI tools, such as harms analysis and risk assessment, into the architecture of the Joint Common Foundation (“JCF”). The JCF, a cloud-enabled platform for accelerating the delivery of AI capabilities, is a key part of the center’s “JAIC 2.0” effort, which leaders say will transition work away from building products and toward activities that enable DOD to be ready for AI tech.

The US Department of Defense has made real progress in the development and adoption of ethical principles in
artificial intelligence. The challenge now is establishing a framework for how AI ethics can be institutionalized
across the DoD’s myriad components and missions. 224

219
https://www.ai.mil/bio_groen.html
220
https://media.defense.gov/2021/May/27/2002730593/-1/-1/0/IMPLEMENTING-RESPONSIBLE-ARTIFICIAL-
INTELLIGENCE-IN-THE-DEPARTMENT-OF-DEFENSE.PDF
221
Id.
222
The Responsible Artificial Intelligence Institute works with the JAIC on RAI: “RAI is a non-profit organization building tangible governance tools for trustworthy, safe, and fair Artificial Intelligence (AI). Through a first-of-its-kind certification system that qualifies AI systems, we support practitioners as they navigate the complex landscape of creating Responsible AI. Feedback generated from these systems will in turn inform AI policymakers, enabling technologies that improve the social and economic well-being of society. RAI brings extensive experience in responsible AI policy and is uniquely positioned to partner with organizations across public and private sectors to guide and inform responsible AI governance around the world.” https://www.responsible.ai/about
223
Id.
224
Megan Lamberth, AI Ethical Principles: Implementing US Military’s Framework, RSIS (Oct. 28, 2020)
https://www.rsis.edu.sg/wp-content/uploads/2020/10/CO20186.pdf

The adoption of the ethical AI principles was a notable success for the Defense Department. The principles serve as a signal to private tech companies and US allies that the DoD cares not only about the adoption of new AI capabilities, but also about how those capabilities are used.

AI is an enabling technology with potential use cases across warfighting, command and control, logistics,
maintenance, and more. The principles had to be broad enough to encompass future AI use, as well as AI
applications that are already in widespread use throughout the Defense Department. As former director of the
JAIC, Lt. General Shanahan, explained: “Tech advances, tech evolves…the last thing we wanted to do was put
handcuffs on the department to say what we could and could not do.”

The JAIC’s job now is to transform these principles into actionable guidance that is malleable enough to address a
variety of applications, but concrete enough to facilitate meaningful action across the Department. The JAIC has
launched two new initiatives that are working to develop frameworks for ethical AI and a “shared vocabulary” for
all DoD personnel working on AI. The Responsible AI Subcommittee — part of the DoD’s broader working group
on AI — is an interdisciplinary group composed of DoD representatives working to establish the necessary ethical
frameworks for the acquisition and implementation of AI. The Responsible AI Champions pilot, a program designed
to train DoD personnel on the importance of ethics in AI, is exploring how the Department’s ethical principles can
be actualized across the lifecycle of an AI system.

Bringing in Like-Minded Countries

In addition to increasing the JAIC’s authorities and resources, the DoD continues seeking buy-in and participation from allied and partner countries, many of which are wrestling with similar challenges of incorporating ethics into
AI development. The JAIC’s partnership with Singapore’s Defense Science and Technology Agency (DSTA) is
illustrative of the benefits of collaboration on AI development.
Last year, the two countries held a multi-day series of exchanges centered on the use of AI for disaster and
humanitarian relief. Earlier this year, the JAIC announced the first “AI Partnership for Defense” — an initiative aimed
at bringing like-minded countries together to share best practices on incorporating ethics into AI systems. 225

These kinds of partnerships are critical, particularly as the geostrategic competition between the US and China
continues to evolve. 226 Close collaborations, like the partnership between the JAIC and DSTA or the AI Partnership
for Defense, allow for greater interoperability between militaries, access to global datasets, and the sharing of best
practices to ensure the development and fielding of responsible AI. Ultimately, these partnerships will benefit the
security of not only the US and allied countries in the Indo-Pacific region, but countries around the globe.

The obstacles obstructing the process of implementation will not be resolved overnight. The Defense Department will continue to reckon with how to ensure the security and reliability of its AI systems as it continues to develop and field new capabilities. The JAIC will also lead the Department’s effort to make its guidance on ethics and AI clear and widely understood, so that DoD personnel from logistics to warfighting to intelligence know not only what ethical AI is, but how it should be applied.

A blog post published on the JAIC’s site highlights some of the efforts initiated to help ensure Pentagon-centered, AI-enabled work adheres to ethical standards, and that coming capabilities are embedded with them. The post notes how the JAIC is educating the workforce that will design, develop, procure, and use AI capabilities about the DoD AI Ethics Principles.

“Inculcating Responsible AI across the DoD positions our military to establish norms for responsible design,
development, and use of AI within the framework of our democratic values. It enables us to earn the trust of the
American public, industry, and the broader AI community to sustain our technological edge and along the way,
fortifies our international partnerships with allies that share our values. All of this furthers the DoD’s mission to

225
https://www.ai.mil/news_09_16_20-jaic_facilitates_first-ever_international_ai_dialogue_for_defense_.html
226
Kop, Mauritz, Democratic Countries Should Form a Strategic Tech Alliance (March 14, 2021). Stanford -
Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford
University, Issue No. 1/2021, Available at SSRN: https://ssrn.com/abstract=3814409

strengthen national security and increase mission effectiveness,” said Alka Patel, lead for Responsible AI at the
JAIC.227

The adoption of the Department of Defense’s Ethical Principles for Artificial Intelligence was a crucial step in a long
journey. Moving forward, the JAIC will continue to lead, coordinate, and collaborate with DoD partners to set the
conditions for an ethical AI culture. Some areas of focus include increasing Responsible AI literacy across the DoD’s
workforce and integrating Responsible AI processes. The JAIC will also continue to work across government, with
the American tech industry and academia, and with allies to advance dialogue and cooperation on AI ethical
principles. 228

“[O]ur primary focus is on creating awareness of the DOD AI Ethics Principles and increasing awareness of Responsible AI literacy across the enterprise,” said Patel. 229 “A year or so ago I was involved with an organization like an incubator… and we actually started at that point having these conversations... then they realized, oh, we have to address AI ethics and we need to go back and restructure and figure out our processes.” 230

The JAIC’s ethical AI initiative is so important because AI and machine learning are rapidly entering the arena of modern warfare. This trend presents highly complex challenges for policymakers, lawyers, scientists, ethicists, and military planners, and for society itself. The political will to regulate AI depends on the interests and preferences of States, especially with regard to economic goals and security issues: in most societies there is broad consensus that growth of the national economy is a prime aim and that providing national security is the most important legitimate goal of a State. This explains why, at the international level, there is no consensus on how to regulate AI systems, as regulation is viewed as a limiting force on economic growth and/or national security.

Some militaries are already far advanced in automating everything from personnel systems and equipment maintenance to the deployment of surveillance drones and robots. Israel has deployed a defensive system, the Iron Dome, that can stop incoming missiles or torpedoes faster than a human could react. International law does not directly address intelligent defense systems (IDSs), of which Israel’s Iron Dome embodies the most successful implementation to date. 231

This is obvious with regard to (semi-)autonomous weapons. 232 Though a Group of Governmental Experts (GGE) on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS) was established in 2016 and has met in Geneva since 2017, convened under the Convention on Certain Conventional Weapons (CCW), and a report of the GGE’s 2019 session has been published, 233 the Group has so far affirmed only guiding principles. 234

227
https://www.ai.mil/blog_02_26_21-ai_ethics_principles-
highlighting_the_progress_and_future_of_responsible_ai.html
228
https://www.ai.mil/blog_02_26_21-ai_ethics_principles-
highlighting_the_progress_and_future_of_responsible_ai.html
229
Brandi Vincent, How the Pentagon’s AI Center Aims to Advance ‘Responsible AI Literacy’ in 2021, Nextgov,
(Jan. 12, 2021) https://www.nextgov.com/emerging-tech/2021/01/how-pentagons-ai-center-aims-advance-
responsible-ai-literacy-2021/171327/
230
ATARC Ethics and Responsible AI: JAIC and For Humanity (Apr 6, 2021) available at
https://www.youtube.com/watch?v=Ff30fWMWmxI&t=2940s
231
Richemond-Barak, Daphné and Feinberg, Ayal, The Irony of the Iron Dome: Intelligent Defense Systems,
Law, and Security (November 1, 2015). 7 Harvard National Security Journal 469 (2016), Available at SSRN:
https://ssrn.com/abstract=2685858
232
Voeneky, Silja, Key Elements of Responsible Artificial Intelligence - Disruptive Technologies and Human Rights
(January 1, 2020). Freiburger Informationspapiere, January 2020, Available at SSRN:
https://ssrn.com/abstract=3515224
233
Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be
Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW), Group of Governmental
Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, 25.–
29.03.2019 and 20.–21.08.2019, Report of the 2019 session, CCW/GGW.1/2019/3, 25.09.2019, available at:
https://undocs.org/en/CCW/GGE.1/2019/3.

These guiding principles stress, inter alia, the need for accountability 235 and for risk assessment measures as part of the design (lit. g). However, there is no agreement on a meaningful international treaty, and it is still disputed whether the discussion within the GGE should be limited to fully autonomous systems. 236

Capabilities in the pipeline, such as the U.S. Defense Department’s Project Maven, 237 seek to apply computer algorithms to quickly identify objects of interest to warfighters and analysts from the mass of incoming data, based on “biologically inspired neural networks.” Applying such machine learning techniques to warfare prompted an outcry from over 3,000 employees of Google, which had partnered with the Department of Defense on the project but declined to renew the contract after the protest, as will be discussed below.
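
To make concrete the class of technique at issue (and only that; Maven’s actual pipeline is not public), the following minimal Python sketch runs a pretrained neural-network object detector over an image and keeps only high-confidence detections. The model choice and the 0.8 threshold are illustrative assumptions, not details of any deployed system.

```python
# Illustrative sketch only: a generic, off-the-shelf object detector of the
# kind the text describes; this is NOT Project Maven's actual pipeline.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained Faster R-CNN trained on everyday COCO object categories.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def flag_objects(image_path: str, score_threshold: float = 0.8):
    """Return bounding boxes and class labels for confident detections."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        detections = model([image])[0]  # dict with "boxes", "labels", "scores"
    keep = detections["scores"] >= score_threshold
    return detections["boxes"][keep], detections["labels"][keep]
```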

These latest trends are intensifying an international debate on the development of weapons systems that could
have fully autonomous capability to target and deploy lethal force—in other words, to target and attack in a dynamic
environment without human control. The question for many legal and ethical experts is whether and how such
fully autonomous weapons systems can comply with the rules of international humanitarian law and human rights
law. 238

The AI Crisis of Conscience and Emerging AI Ethics

The prospect of developing mind-blowing AI, including for fully autonomous weapons, is no longer a matter of
science fiction and is already fueling a new global arms race. President Putin famously told Russian students that
“whoever becomes the leader in this sphere [of artificial intelligence] will become the ruler of the world.” 239 China
is racing ahead with an announced pledge to invest $150 billion in the next few years to ensure it becomes the
world’s leading “innovation centre for AI” by 2030. 240

Even though AI increasingly is penetrating commercial, military, and scientific arenas, states have been slow to
create new international agreements or to amend existing ones to catch up to these technological developments.
Nevertheless, AI is certain to produce changes in the areas of human rights, the use of force, transnational law
enforcement, global health, intellectual property regimes, and international labor law, among others. 241

Tech AI Ethics

234
Id. Annex IV, 13 et seq.
235
Annex IV: (b) “Human responsibility for decisions on the use of weapons systems must be retained since
accountability cannot be transferred to machines. This should be considered across the entire life cycle of the
weapons system; (…) (d) Accountability for developing, deploying and using any emerging weapons system
in the framework of the CCW must be ensured in accordance with applicable international law, including
through the operation of such systems within a responsible chain of human command and control;”
236
For this view and a definition, see working paper (WP) submitted by the Russian Federation,
CCW/GGE.1/2019/WP.1, 15.03.2019, para. 5: “unmanned technical means other than ordnance that are
intended for carrying out combat and support missions without any involvement of the operator", expressly
excluding unmanned aerial vehicles as highly automated systems.
237
CHERYL PELLERIN, DOD NEWS, Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,
July 21, 2017) https://www.defense.gov/Explore/News/Article/Article/1254719/project-maven-to-deploy-
computer-algorithms-to-war-zone-by-years-end/’ See also Kelsey Atherton, “Targeting the Future of the
DoD’s Controversial Project Maven Initiative,” C4ISRNET, December 17, 2018 (www.c4isrnet.com/it-
networks/2018/07/27/targeting-the-future-of-the-dods-controversial-project-maven-initiative/).
238
https://www.brookings.edu/events/autonomous-weapons-and-international-law/
239
Tom Simonite, "For Superpowers, Artificial Intelligence Fuels New Global Arms Race," Wired, September 8,
2017, https://www.wired.com/story/for-superpowers-artificial-intelligence-fuelsnew-global-arms-race/
240
Meng Jing and Xie Yu, “China aims to outspend the world in artificial intelligence, and Xi Jinping just green lit
the plan,” South China Morning Post, October 18, 2017, https://www.scmp.com/business/china-
business/article/2115935/chinas-xi-jinping-highlights-ai-big-data-and-shared-economy.
241
Deeks, Ashley, Introduction to the Symposium: How Will Artificial Intelligence Affect International Law? (April
27, 2020). 114 AJIL Unbound 138 (2020), Virginia Public Law and Legal Theory Research Paper No. 2020-38,
Available at SSRN: https://ssrn.com/abstract=3586988

If digital technology production in the early days was characterized by the brash spirit of Facebook’s motto “move
fast and break things” and the superficial assurances of Google’s motto “don’t be evil,” digital technology toward
the end of the decade was characterized by a “crisis of conscience” about these and other technologies’ perils. 242

Recent controversies related to topics such as fake news, privacy, and algorithmic bias have prompted the
associated “techlash” and increased public scrutiny of digital technologies and soul-searching among many of the
people associated with their development. In response, the tech industry, academia, civil society, and governments
have rapidly increased their attention to “ethics” in the design and use of digital technologies (“tech ethics”). 243

The most pervasive treatment of tech ethics within tech companies has come in the form of ethics principles and
ethics oversight bodies. Companies like Microsoft, Google, and IBM have developed and publicly shared AI ethics
principles, which include statements such as “AI systems should treat all people fairly” and “AI should […] Be
socially beneficial”. 244 These principles are often supported through dedicated ethics teams and advisory boards
within companies, with such bodies in place at companies including Microsoft, Google, Facebook, Alphabet
subsidiary DeepMind, and policing technology company Axon. 245 Companies such as Google, Accenture, and
Clifford Chance have also begun offering tech ethics consulting services. 246

As part of these efforts, the tech industry has formed several coalitions aimed at advancing a common dialogue
about safe and ethical artificial intelligence. In 2015, Elon Musk and Sam Altman (the then-president of the tech
incubator Y Combinator) created OpenAI, an independent research organization that aims to develop socially
beneficial artificial intelligence and mitigate the “existential threat” presented by AI, with more than $1 billion in
donations from major tech executives and companies. 247 A year later, Amazon, Facebook, DeepMind, IBM, and
Microsoft founded the Partnership on AI (PAI),248 a nonprofit coalition to shape best practices in AI development,
advance public understanding of AI, and support socially beneficial applications of AI. 249

242
Marantz, Andrew. 2019. Silicon Valley’s Crisis of Conscience. The New Yorker.
https://www.newyorker.com/magazine/2019/08/26/silicon-valleys-crisis-of-conscience.
243
Green, Ben, The Contestation of Tech Ethics: A Sociotechnical Approach to Ethics and Technology in Action
(June 3, 2021). Available at SSRN: https://ssrn.com/abstract=3859358
244
IBM. 2018. Everyday Ethics for Artificial Intelligence.
https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf; Microsoft. 2018. Microsoft AI principles.
https://www.microsoft.com/en-us/ai/responsible-ai.; Pichai, Sundar. 2018. AI at Google: our principles.
https://www.blog.google/technology/ai/aiprinciples/.
245
Legassick, Sean, and Verity Harding. 2017. Why we launched DeepMind Ethics & Society. DeepMind Blog.
https://deepmind.com/blog/announcements/why-we-launcheddeepmind-ethics-society; Nadella, Satya.
2018. Embracing our future: Intelligent Cloud and Intelligent Edge. Microsoft News Center.
https://news.microsoft.com/2018/03/29/satya-nadella-email-to-employeesembracing-our-future-intelligent-
cloud-and-intelligent-edge/; Novet, Jordan. 2018. Facebook forms a special ethics team to prevent bias in its
A.I. software.CNBC. https://www.cnbc.com/2018/05/03/facebook-ethics-team-prevents-bias-in-
aisoftware.html; Vincent, James, and Russell Brandom. 2018. Axon launches AI ethics board to study the
dangers of facial recognition. The Verge. https://www.theverge.com/2018/4/26/17285034/axon-aiethics-
board-facial-recognition-racial-bias; Walker, Kent. 2018. Google AI Principles updates, six months in. The
Keyword. https://www.blog.google/technology/ai/google-ai-principles-updates-six-months/
246
Simonite, Tom. 2020. Google Offers to Help Others With the Tricky Ethics of AI.
Wired.https://www.wired.com/story/google-help-others-tricky-ethics-ai/; Clifford Chance. N.d. Tech Group.
https://www.cliffordchance.com/hubs/tech-group-hub/techgroup.html; and Accenture. N.d. AI Ethics &
Governance. https://www.accenture.com/us-en/services/appliedintelligence/ai-ethics-governance.
247
Dowd, Maureen. 2017. Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse. Vanity Fair.
https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-tostop-ai-space-x
248
https://www.partnershiponai.org/
249
Finley, Klint. 2016. Tech Giants Team Up to Keep AI From Getting Out of Hand. Wired.
https://www.wired.com/2016/09/google-facebook-microsoft-tackle-ethics-ai/.

Companies and their stakeholders need practical tools and implementation guidelines, beyond abstract frameworks, to begin realizing AI ethics. 250 How can the barrier to entry for implementing AI ethics be lowered?

Academia and AI Ethics

Computer and information science programs at universities have rapidly increased their emphasis on ethics training
in curricula. While some universities have taught computing ethics courses (within both computer science and
other fields) for many years, 251 the emphasis on ethics within computing education has increased dramatically in
recent years. 252 When University of Colorado, Boulder information science professor Casey Fiesler tweeted a link
to an editable spreadsheet of tech ethics classes in November 2017, it quickly grew to a crowdsourced list of more
than 200 courses. 253 This plethora of courses represents a dramatic shift in computer science training and culture,
with ethics becoming a popular topic of discussion and study after being largely ignored by the mainstream of the
field just a few years prior.

Research in computer science and related fields has also become increasingly focused on the ethics and social
impacts of computing. This trend is observable in the recent explosion of conferences and workshops related to
computing ethics. The ACM Conference on Fairness, Accountability, and Transparency (FAccT, formerly FAT*) 254
and the AAAI/ACM Conference on AI, Ethics, and Society (AIES) 255 both held their first annual meetings in
February 2018 and have since grown rapidly. Through 2019, there had been more than 30 workshops related to
fairness and ethics at major computer science conferences. 256

Many universities have supported these curricular and research efforts through the creation of new institutes
focused on the social implications of technology. 2017 alone saw the launch of the AI Now Institute at NYU 257, the

250
Ryan, M., Stahl, B.C. Artificial intelligence ethics guidelines for developers and users: clarifying their content
and normative implications. Journal of Information, Communication and Ethics in Society.
https://www.emerald.com/insight/content/doi/10.1108/JICES-12-2019-0138/full/pdf (“The purpose of this
paper is clearly illustrate this convergence and the prescriptive recommendations that such documents entail.
There is a significant amount of research into the ethical consequences of artificial intelligence (AI). This is
reflected by many outputs across academia, policy and the media. Many of these outputs aim to provide
guidance to particular stakeholder groups. It has recently been shown that there is a large degree of
convergence in terms of the principles upon which these guidance documents are based. Despite this
convergence, it is not always clear how these principles are to be translated into practice.”)
251
Grosz, Barbara J., David Gray Grant, Kate Vredenburgh, Jeff Behrends, Lily Hu, Alison Simmons, and Jim
Waldo. 2019. "Embedded EthiCS: Integrating Ethics Broadly Across Computer Science Education."
Communications of the ACM 62 (8):54-61; Reich, Rob, Mehran Sahami, Jeremy M. Weinstein, and Hilary
Cohen. 2020. "Teaching Computer Ethics: A Deeply Multidisciplinary Approach." Proceedings of the 51st
ACM Technical Symposium on Computer Science Education, Portland, OR, USA; and Shilton, Katie,
Michael Zimmer, Casey Fiesler, Arvind Narayanan, Jake Metcalf, Matthew Bietz,and Jessica Vitak. 2017. We’re
Awake — But We’re Not At the Wheel. PERVADE:Pervasive Data Ethics. https://medium.com/pervade-
team/were-awake-but-we-re-not-atthe-wheel-7f0a7193e9d5.
252
Fiesler, Casey, Natalie Garrett, and Nathan Beard. 2020. "What Do We Teach When We Teach Tech Ethics? A
Syllabi Analysis." The 51st ACM Technical Symposium on Computer Science Education (SIGCSE ’20).
253
Fiesler, Casey. 2018 Tech Ethics Curricula: A Collection of Syllabi. https://medium.com/@cfiesler/tech-ethics-
curricula-a-collection-of-syllabi3eedfb76be18 and Fiesler, Casey. 2018. What Our Tech Ethics Crisis Says
About the State of Computer Science Education. How We Get To Next. https://howwegettonext.com/what-
our-tech-ethics-crisissays-about-the-state-of-computer-science-education-a6a5544e1da6.
254
https://facctconference.org/
255
https://www.aies-conference.com/2021/
256
ACM FAccT Conference, 2020 https://facctconference.org/2020/.
257
AI Now Institute. 2017. The AI Now Institute Launches at NYU to Examine the Social Effects of Artificial
Intelligence. https://ainowinstitute.org/press-release-ai-now-launch. AI Now produces interdisciplinary
research to help ensure that AI systems are accountable to the communities and contexts they are meant to
serve, and that they are applied in ways that promote justice and equity. The Institute’s current research agenda focuses on four core areas: bias and inclusion, rights and liberties, labor and automation, and safety and critical infrastructure.

Princeton Dialogues on AI and Ethics, 258 and the MIT/Harvard Ethics and Governance of Artificial Intelligence Initiative. 259 More recently formed centers include the MIT College of Computing; 260 the Stanford Institute for Human-Centered Artificial Intelligence; 261 and the University of Michigan Center of Ethics, Society, and Computing. 262

Civil Society and AI Ethics

Numerous civil society organizations have also coalesced around tech ethics, with strategies that include
grantmaking and developing principles and toolkits. Organizations such as the MacArthur and Ford Foundations
have begun exploring and making grants in tech ethics. 263 For instance, the Omidyar Network, Mozilla Foundation,
Schmidt Futures, and Craig Newmark Philanthropies partnered on the Responsible Computer Science Challenge,
which awarded $3.5 million between 2018 and 2020 to support efforts to embed ethics into undergraduate
computer science education. 264 Many other foundations also contribute to such research, conferences, and institutes, and this funding is likely to grow.

Other organizations have been created or have expanded their scope to consider the implications and governance
of digital technologies. Organizations such as Data & Society, 265 Upturn266, the Center for Humane Technology 267,
and Tactical Tech 268 study the social implications of technology and advocate for improved technology governance
and design practices.

Many advocates call for engineers to follow an ethical oath modeled after the Hippocratic Oath taken by physicians. 269 In 2018, for instance, the organization Data for Democracy partnered with Bloomberg and the data platform provider BrightHive to develop a code of ethics for data scientists, producing 20 principles that include “I will respect human dignity” and “It’s my responsibility to increase social benefit while minimizing harm”. 270 A related effort, produced by the Institute for the Future and the Omidyar Network, is the Ethical OS Toolkit, a set of prompts and checklists to help technology developers “anticipat[e] the future impact of today’s technology” and “not […] regret the things you will build”. 271

258
Sharlach, Molly. 2019. Princeton collaboration brings new insights to the ethics of artificial intelligence.
https://www.princeton.edu/news/2019/01/14/princeton-collaboration-bringsnew-insights-ethics-artificial-
intelligence
259
MIT Media Lab. 2017. MIT Media Lab to participate in $27 million initiative on AI ethics and governance. MIT
News. https://news.mit.edu/2017/mit-media-lab-to-participate-in-aiethics-and-governance-initiative-0110.
260
MIT News Office. 2018. MIT reshapes itself to shape the future. MIT News. http://news.mit.edu/2018/mit-
reshapes-itself-stephen-schwarzman-college-of-computing1015.
261
Adams, Amy. 2019. Stanford University launches the Institute for Human-Centered Artificial Intelligence.
Stanford News. https://news.stanford.edu/2019/03/18/stanford_university_launches_human-centered_ai/
262
Marowski, Steve. 2020. Artificial intelligence researchers create ethics center at University of Michigan.
MLive. https://www.mlive.com/news/ann-arbor/2020/01/artificial-intelligenceresearchers-create-ethics-
center-at-university-of-michigan.html.
263
Robinson, David, and Miranda Bogen. 2016. Data Ethics: Investing Wisely in Data at Scale. Upturn.
https://www.upturn.org/static/reports/2016/data-ethics/files/Upturn_-_Data%20Ethics_v.1.0.pdf
264
Mozilla. 2018. Announcing a Competition for Ethics in Computer Science, with up to $3.5 Million in Prizes.
The Mozilla Blog. https://blog.mozilla.org/blog/2018/10/10/announcing-acompetition-for-ethics-in-computer-
science-with-up-to-3-5-million-in-prizes/.
265
https://datasociety.net/
266
https://www.upturn.org/
267
https://www.humanetech.com/
268
https://tacticaltech.org/
269
Eubanks, Virginia. 2018b. A Hippocratic Oath for Data Science. https://virginiaeubanks.com/2018/02/21/a-
hippocratic-oath-for-data-science/.
270
Data4Democracy. 2018. Ethics Resources. https://github.com/Data4Democracy/ethics-resources.
271
The Institute for the Future, and Omidyar Network. 2018. Ethical OS Toolkit. https://ethicalos.org

AI for Good

The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular, and it has the potential to address social problems effectively through the development of AI-based solutions. Initiatives relying on AI to deliver socially beneficial outcomes are on the rise, and some advocate the use of the United Nations’ Sustainable Development Goals (SDGs) 272 as a benchmark for tracing the scope and spread of AI4SG. 273

The AI for Good series is the leading action-oriented, global & inclusive United Nations platform on AI. The Summit
is organized all year, always online, in Geneva by the ITU with XPRIZE Foundation in partnership with over 35 sister
United Nations agencies, Switzerland and ACM. The goal is to identify practical applications of AI and scale those
solutions for global impact.274

Other important initiatives include Mila’s AI for Combating Human Trafficking project in Canada, 275 which leverages AI techniques to analyze the online advertising market and to provide law enforcement with actionable intelligence to counter human trafficking. These techniques sit at the intersection of data mining, active and human-in-the-loop learning, graph mining, natural language processing, information retrieval, and image processing.
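
As a purely hypothetical illustration of one ingredient of such systems (not Mila’s actual code), the sketch below clusters near-duplicate ad texts using TF-IDF character n-grams, a standard information-retrieval step for surfacing advertisements that were likely posted by the same operator; the example ads and the distance threshold are invented.

```python
# Hypothetical sketch: group near-duplicate ad texts so that listings likely
# posted by the same operator cluster together. Not Mila's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

ads = [
    "new in town, available 24/7, call now",
    "NEW in town!! available 24/7 - call now",  # near-duplicate of the first
    "relaxing massage, downtown studio",
]

# Character n-grams are robust to the small spelling tweaks used to evade filters.
vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit_transform(ads)

# Cosine-distance agglomerative clustering; the 0.6 threshold is an assumption.
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.6, metric="cosine", linkage="average"
).fit_predict(vectors.toarray())
print(labels)  # e.g., [0, 0, 1]: the first two ads fall in the same cluster
```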

Government and AI Ethics

Many governments have also taken up AI ethics, developing commissions and principles dedicated to the topic. In the United States, for example, as discussed above, the Department of Defense adopted ethical principles for AI, the first military to do so. 276 Elsewhere, governing bodies in Dubai, 277 Europe, 278 Japan, 279 and Mexico, 280 as well as international organizations such as the Organisation for Economic Co-operation and Development (OECD), 281 have all put forward documents stating principles for ethical AI development.

A 2019 analysis of global AI ethics guidelines found 84 such documents (with more than a third from the U.S. and
U.K. and none from Africa or South America) espousing a common set of principles: transparency, justice and
fairness, non-maleficence, responsibility, and privacy.282

272
Truby, J. Governing Artificial Intelligence to benefit the UN Sustainable Development Goals. Sustainable
Development. 2020; 28: 946– 959. https://doi.org/10.1002/sd.2048 , Available at SSRN:
https://ssrn.com/abstract=3667166
273
Cowls, Josh and Tsamados, Andreas and Taddeo, Mariarosaria and Floridi, Luciano, A Definition, Benchmark and
Database of AI for Social Good Initiatives (February 17, 2021). Available at SSRN: https://ssrn.com/abstract=3826465
274
https://aiforgood.itu.int/about-us/
275
https://mila.quebec/en/project/ai-for-combating-human-trafficking-in-canada/
276
U.S. Department of Defense. 2020. DOD Adopts Ethical Principles for Artificial Intelligence.
https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adoptsethical-principles-for-
artificial-intelligence/.
277
Smart Dubai. 2018. AI Ethics Principles & Guidelines. https://www.smartdubai.ae/docs/defaultsource/ai-
principles-resources/ai-ethics.pdf?sfvrsn=d4184f8d_6.
278
European Commission High-Level Expert Group on Artificial Intelligence. 2019. Ethics Guidelines for
Trustworthy AI. https://ec.europa.eu/futurium/en/ai-alliance-consultation.
279
Integrated Innovation Strategy Promotion Council. 2019. AI for Everyone: People, Industries, Regions and
Governments. https://www8.cao.go.jp/cstp/english/humancentricai.pdf.
280
Martinho-Truswell, Emma, Hannah Miller, Isak Nti Asare, André Petheram, Richard Stirling, Constanza
Go΄mez Mont, and Cristina Marti΄nez. 2018. Hacia una Estrategia de IA en México: Aprovechando la
Revolucio΄n de la IA (Towards an AI Strategy in Mexico: Leveraging the AI Revolution).
https://docs.wixstatic.com/ugd/7be025_ba24a518a53a4275af4d7ff63b4cf594.pdf
281
Organisation for Economic Co-operation and Development. 2019. Recommendation of the Council on
Artificial Intelligence. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
282
Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. "The global landscape of AI ethics guidelines." Nature
Machine Intelligence 1 (9):389-399. doi: 10.1038/s42256-019-0088-2.

Given the difficulties enumerated above, and recognizing that AI-implementing technologies are developing so swiftly that it is almost impossible for traditional legislation to keep up with them, let alone get ahead of them, the World Economic Forum has set out an ‘agile governance’ approach in a white paper incorporating many of these ideas. 283 The basic observation is that governments are responsible for protecting citizens from various harms caused by new products and technologies; this is traditionally accomplished by holding perpetrators accountable once the harm has occurred. With AI impacting society at unprecedented speed, scope, and scale, governments must protect the public before the harm occurs by promoting the responsible design, development, and use of this transformative technology. This requires a more agile type of regulator (i.e., one that proactively works with companies to ensure safety up front, not after the fact), without stifling the many societally beneficial uses of AI. The regulator of the future must be expert, nimble, and willing to work with companies to certify their products as fit for their purpose. This will not only protect citizens but also encourage innovation in the AI space, because companies will not be at risk of wasting R&D expenditures on products that may be banned or regulated in the future. 284

Recently, National Security Commission on AI staff members Paul Leka and Christie Lawrence spoke about the U.S. forming an International Emerging Technology Coalition to strengthen and coordinate the use of emerging technology for democratic ends. 285

The Limits of AI Ethics

Despite the rapid adoption of “ethics” as an analytic frame for digital technologies, critical analyses of these early
efforts have indicated that tech ethics suffers from core limitations. First, the actual principles espoused by tech
ethics statements are too abstract and lacking in enforcement mechanisms to reliably spur ethical behavior in
practice. Second, as ethics is incorporated into tech companies, ethical ideals are subsumed into corporate logic
and incentives. Third, by emphasizing the design decisions of individual engineers, tech ethics overlooks the
structural forces that shape technology’s harmful social impacts. Collectively, these issues suggest that the
emphasis on ethics represents a strategy of technology companies “ethics-washing” their behavior with a façade
of ethics while largely continuing with business-as-usual. 286

Moreover, companies are moving towards entirely AI-based boards and giving them the same level of discretion as humans to recommend particular courses of action. 287 This is already happening on the board of software company Salesforce, where a robot named ‘Einstein’ is consulted to appraise corporate plans in order to aid decision making. 288 In this ‘human-in-command’ scenario, directors will make the final decision and can contribute some human
justification. Difficulty will arise when the human decision departs from what the super-computer informs them is
the objectively best course of action. It is not hard to imagine shareholder action against the board for deviating
from what AI suggests. This raises a number of issues in relation to accountability. 289

283
Agile Governance: Reimagining Policymaking in the Fourth Industrial Revolution, WORLD ECONOMIC FORUM, (Apr.
2018), https://www.weforum.org/whitepapers/agile-governance-reimaginingpolicy-making-in-the-fourth-industrial-
revolution.
284
Id.
285
Eye on AI, A Democratic AI Coalition (June 30, 2021) (Craig Smith interviews National Security Commission on AI
staff Paul Leka and Christie Lawrence talk about forming an International Emerging Technology Coalition to strengthen and
coordinate the use of emerging technology for democratic ends.) https://www.youtube.com/watch?v=6nWd6QHNx0o
286
Green, Ben, The Contestation of Tech Ethics: A Sociotechnical Approach to Ethics and Technology in Action
(June 3, 2021). Available at SSRN: https://ssrn.com/abstract=3859358
287
Armour, John, and Horst Eidenmueller. 2019. ‘Self-Driving Corporations?’ European Corporate Governance Institute-Law
Working Paper, 475. (, consider it likely that competition will push companies towards entirely AI based boards and raise the
issue of giving them the same level of discretion as humans. ); See also Libert, Barry, Megan Beck and Mark Bonchek. 2017,
‘AI in the Boardroom: The Next Realm of Corporate Governance’, MIT Sloan Management Review
https://sloanreview.mit.edu/article/ai-in-the-boardroom-the-next-realm-of-corporate-governance/
288
Paquette, Danielle. 2018. ‘In Boardroom, Robot Gets a Seat at the Table’. Washington Post.Com, 26 January 2018, sec.
wonkblog.
289
Hickman, Eleanore and Petrin, Martin, Trustworthy AI and Corporate Governance – The EU’s Ethics Guidelines For
Trustworthy Artificial Intelligence from a Company Law Perspective (May 21, 2020). Available at SSRN:
https://ssrn.com/abstract=3607225 or http://dx.doi.org/10.2139/ssrn.3607225

At the World Economic Forum in Davos, Switzerland, Salesforce CEO Marc Benioff reported the latest status of AI
in his company, a robot named Einstein, who sits in on the company’s weekly meetings and offers important
insights. “I have my top 30 or 40 executives around my table. And we figure out how we are doing as we look at
all of this analysis. But now I have a new person with me, and it’s kind of an empty chair. We have a technology
called Einstein,” he said at the conference. “I ask Einstein, ‘I heard what everybody said but what do you actually
think?'” Benioff shared as an example of his interactions with the robot. One time, he said, Einstein questioned the
sales report of a Salesforce employee in Europe. The employee was upset, but Einstein managed to quickly crunch
his sales data and brought up evidence to back its argument. 290
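
Stripped to its core, the capability footnote 290 describes is standard supervised learning on a company’s sales records. The sketch below, with invented field names and toy data, shows the kind of lead-scoring model that description implies; Salesforce’s actual Einstein is proprietary and operates at a far larger scale.

```python
# Illustrative sketch, with invented data: score open deals by the predicted
# probability of closing. Not Salesforce's actual Einstein implementation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

deals = pd.DataFrame({
    "emails_exchanged": [2, 15, 1, 22, 8, 30, 3, 18],
    "days_in_pipeline": [90, 12, 120, 10, 45, 7, 80, 20],
    "deal_size_usd": [5_000, 20_000, 1_000, 50_000, 8_000, 90_000, 2_000, 30_000],
    "closed_won": [0, 1, 0, 1, 0, 1, 0, 1],  # historical outcome labels
})

X_train, X_test, y_train, y_test = train_test_split(
    deals.drop(columns="closed_won"), deals["closed_won"],
    test_size=0.25, random_state=0,
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The "lead score": probability that each held-out deal will close.
print(model.predict_proba(X_test)[:, 1])
```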

But can AI and robots speaking freely in the boardroom improve corporate governance? Cutthroat corporate politics are legend, and nice guys do not always win. As in the Salesforce example, will the robots call out poor performance that the boss does not see and colleagues dare not name, for fear of retaliation or shunning? Can these robots take the place of whistleblowers, without paying the high price? Can the robot improve governance at privately held companies, which are not subject to public disclosure and regulatory oversight, with the result of rampant sexual harassment, a lack of women leaders in technology companies, the relative absence of female venture capitalists, and a dearth of female board members? 291 Can a woke AI robot at a corporation improve society, if fed the best data? 292

Business as Usual at Big Tech

The limits of big tech’s ethics are evidenced by the following issues: shadowbanning, deepfakes, fake news, and a lack of transparency (e.g., not telling employees what they are building).

Shadowbanning: Many online platforms utilize algorithms for a variety of content architecture, organization, and moderation purposes. 293 However, few details regarding the inner workings of these algorithms have been made open to the public. 294 As a result, few (if any) users are truly aware of the impact of these algorithms, or of how they structure our perceptions and experiences online. A sudden dip in likes, views, and comments is indicative of the phenomenon known as shadowbanning. Shadowbanning, which involves the partial censorship of online accounts without the knowledge or consent of the user, is one form of algorithmic censorship.

Examples of shadowbanning on Instagram include rendering a user’s hashtags undiscoverable, restricting account
visibility to followers only (as opposed to the broader Instagram community), preventing the account handle from
auto-populating in the search bar, or filtering posts out of the feeds of followers. It is important to note though

290
In addition to assisting Benioff during his staff meetings, however, a more important function of Einstein is to make
Salesforce’s CRM (customer relation management) platforms, a flagship product, smarter. On the technical level, Einstein
eliminates the need for companies to manually prepare data input or build models on Salesforce platforms to fit
organizations’ needs. All Einstein needs is a set of raw data, sales data, for example, and the machine learning algorithms in
Einstein will come up with models that can accurately identify sales leads and predict demand volume—similar to how
Spotify recommends songs based on your music preferences, but on a larger scale. Einstein is the first comprehensive
application of AI in CRM platforms. Salesforce began building the technology in 2014. To consolidate research power,
Salesforce acquired several startups specializing in machine learning, image recognition and predictive analytics in the
following years, and introduced Einstein in September 2016. Sissi Cao, A Robot Named Einstein Now Sits in on Salesforce’s
Weekly Meetings, Observer, (Jan. 25, 2018) https://observer.com/2018/01/salesforces-ai-status-robot-einstein-weekly-
meetings/
291
Fan, Jennifer S., Innovating Inclusion: The Impact of Women on Private Company Boards (April 10, 2019). Florida State
University Law Review, Vol. 46, No. 2, Pp. 345-413, 2019, Forthcoming, Available at SSRN:
https://ssrn.com/abstract=3369841
292
Fan, Jennifer S., Woke Capital: The Role of Corporations in Social Movements (December 2019). Harvard Business Law
Review, Vol. 9, No. 2, Pp. 441-94 (2019), Available at SSRN: https://ssrn.com/abstract=3556120
293
Horten, Monica, Algorithms Patrolling Content: Where’s the Harm? (February 22, 2021). Available at SSRN:
https://ssrn.com/abstract=3792097 or http://dx.doi.org/10.2139/ssrn.3792097
294
EMILY SALTZ and CLAIRE LEIBOWICZ, Fact-Checks, Info Hubs, and Shadow-Bans: A Landscape Review of
Misinformation Interventions, Partnership on AI, (JUNE 14, 2021)
https://www.partnershiponai.org/intervention-inventory/

that from the account owner’s perspective, nothing has changed. This form of censorship is typically only noticed
after its effects have been felt, by observing a drop in comments, likes, and views. 295
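Because platforms disclose nothing, an affected user can only infer a shadowban statistically, for example by testing whether recent per-post engagement falls far below the account’s historical baseline. A minimal sketch of that inference (the data and threshold are illustrative assumptions, not any platform’s actual behavior):

```python
# Illustrative shadowban heuristic: flag a sustained engagement drop
# relative to the account's own baseline. All numbers are invented.
from statistics import mean, stdev

def engagement_drop_detected(history, recent, z_threshold=-2.0):
    """Return True if recent per-post engagement is an outlier
    far below the historical mean (a possible shadowban signal)."""
    mu, sigma = mean(history), stdev(history)
    z = (mean(recent) - mu) / sigma
    return z < z_threshold

likes_per_post_history = [410, 385, 440, 402, 395, 430, 415]
likes_per_post_recent = [95, 110, 88]  # sudden dip

print(engagement_drop_detected(likes_per_post_history,
                               likes_per_post_recent))  # True
```

Such a signal is suggestive rather than conclusive, since organic variation in engagement can produce similar dips.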

Today, a quickly growing percentage of communication takes place online. Platforms that are privately owned
communication spaces have become systemically important for public discourse, in itself a key element of a free
and democratic society. The internet has heavily influenced our communicative practices and will continue to do
so. As the European Court of Human Rights noted in 2015, the internet is ‘one of the principal means by which
individuals exercise their right to freedom to receive and impart information and ideas, providing [...] essential
tools for participation in activities and discussions concerning political issues and issues of general interest’. It plays
‘a particularly important role with respect to the right to freedom of expression’. Due to technological innovation, social media platforms are now able to de facto (and de jure) regulate speech in real time at any time. The platforms not only set the rules for communication and judge their application but also moderate, curate, rate and edit content according to their rules. 296

An issue of growing importance is that of “platform” law: if a user’s online content or actions violate the rules,
what should happen next? The longstanding expectation is that Internet services should remove violative content
or accounts from their services, and many laws mandate that result. However, Internet services have a wide range
of other options they can use to redress content or accounts that violate the applicable rules. 297
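Goldman’s point, that remedies form a spectrum rather than a remove-or-not binary, can be illustrated with a hypothetical graduated remedy ladder (the remedies and severity scale below are invented for illustration and are not Goldman’s framework or any platform’s actual policy):

```python
# Illustrative sketch of a graduated remedy ladder, in the spirit of
# moving past the binary remove-or-not framing. The remedies and the
# severity mapping are invented for illustration.
from enum import Enum

class Remedy(Enum):
    WARN = "warn the user"
    LABEL = "attach a warning label"
    DOWNRANK = "reduce distribution"
    DEMONETIZE = "remove ads/revenue"
    SUSPEND = "temporary suspension"
    REMOVE = "remove content or account"

def choose_remedy(severity: int, repeat_offender: bool) -> Remedy:
    """Map violation severity (1-5, an invented scale) to a remedy."""
    if severity >= 5:
        return Remedy.REMOVE
    if repeat_offender:
        return Remedy.SUSPEND
    return [Remedy.WARN, Remedy.LABEL,
            Remedy.DOWNRANK, Remedy.DEMONETIZE][severity - 1]

print(choose_remedy(2, repeat_offender=False))  # Remedy.LABEL
```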
There are also substantial and legitimate concerns about “deepfake” videos that are completely fictional but look
authentic. 298 “Cheap fakes,” a euphemism for manipulated authentic videos, are a disturbing development as well. 299

Fake News: “Fake news” has become the central inflammatory charge in media discourse in the United States
since the 2016 presidential contest. In the political realm, both intentionally fabricated information and the “fake
news” defense by politicians confronted with negative press reports can potentially influence public beliefs and
possibly even skew electoral results. Perhaps even more insidiously, as evidenced by President Trump’s dismissal
of the traditional press as the “enemy of the American people,” the “fake news” accusation can serve as a power-
shifting governance mechanism to delegitimize the institutional press as a whole. Both these strategic uses of “fake
news”—to achieve specific political results and to destabilize the press as an institution—are self-evidently very
dangerous for democracy. As if this were not a sufficient threat to the democratic order, however, “fake news” is
also a threat, inter alia, to the stability of the financial markets as well. Whether for competitive advantage, terror,
or geopolitical gamesmanship, the deployment of market-affecting fabricated information is a looming danger

295
Middlebrook, Callie, The Grey Area: Instagram, Shadowbanning, and the Erasure of Marginalized
Communities (February 17, 2020). Available at SSRN: https://ssrn.com/abstract=3539721 or
http://dx.doi.org/10.2139/ssrn.3539721
296
Kettemann, M. C. & Tiedeke, A. S. (2020). Back up: can users sue platforms to reinstate deleted content?.
Internet Policy Review, 9(2). https://doi.org/10.14763/2020.2.1484.
297
Goldman, Eric, Content Moderation Remedies (2021). Michigan Technology Law Review, Forthcoming, Santa
Clara Univ. Legal Studies Research Paper, Available at SSRN: https://ssrn.com/abstract=3810580 or
http://dx.doi.org/10.2139/ssrn.3810580 (“The Article describes dozens of remedies that Internet services
have actually imposed. The Article then provides a normative framework to help Internet services and
regulators navigate these remedial options to address the many difficult tradeoffs involved in content moderation. By moving past the binary remove-or-not remedy framework that dominates the current discourse about content moderation, this Article helps to improve the efficacy of content moderation,
promote free expression, promote competition among Internet services, and improve Internet services’
community-building functions.”)
298
See Robert Chesney & Danielle Keats Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and
National Security, 107 CALIF. L. REV. 1753 (2019).
299
Britt Paris & Joan Donovan, Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence,
DATA &
SOC., Sept. 2019, https://datasociety.net/wpcontent/uploads/2019/09/DS_Deepfakes_Cheap_FakesFinal-
1.pdf.

ahead. Simply put, therefore, “fake news” presents profound challenges for both the public and private spheres
today. At one end of the spectrum is “real” fake news, meaning intentionally fabricated misinformation. This kind of “fake news” consists of the dissemination of falsity, in whole or in part, whether for economic or political reasons: intentionally fabricated falsity distinct from mainstream press errors, inaccuracies, incompleteness, and even slanted presentation of news and information. At the other end of the spectrum is the use of the “fake news”
phrase as a strategic tool to cast doubt on the truthfulness and credibility of standard mainstream news reporting
organizations. Of course, the deployment of each type of “fake news” can undermine public trust in the truth of
what is reported. 300

With respect to legal measures, a recent troubling trend shows governments citing to the rise of disinformation
and misinformation 301 to justify new laws that prohibit online falsehoods and often require social media platforms
to assist in implementing such measures. 302 Within the one-year period from June 2017 through May 2018, “at
least 17 countries approved or proposed laws that would restrict online media in the name of fighting ‘fake news’
and online manipulation.” 303 These laws have taken hold throughout the world, including in Western democracies
such as Canada. 304 Human rights watchdogs have consistently criticized these laws as violations of international
freedom of expression protections. 305

300
Levi, Lili, Real 'Fake News' and Fake 'Fake News' (January 5, 2018). 16 First Amend. L. Rev., Forthcoming,
University of Miami Legal Studies Research Paper No. 18-1, Available at SSRN:
https://ssrn.com/abstract=3097200
301
“Disinformation” as defined by Claire Wardle and Hossein Derakhshan: “information that is false and
deliberately created to harm a person, social group, organization or country.” Samuel Spies, Defining
“Disinformation,” MEDIAWELL(Oct. 22, 2019), https://mediawell.ssrc.org/literature-reviews/defining-
disinformation/versions/1-0/ [https://perma.cc/7DNJ-HZXM]. See also Wardle and Derakhshan’s definition of
“misinformation,” i.e., “information that is false, but not created with the intention of causing harm.” Id.
302
Allie Funk, Citing ‘Fake News,’ Singapore Could Be Next to Quash Freedom of Expression, JUST SECURITY
(Apr. 8, 2019), https://www.justsecurity.org/63522/citing-fakenews-singapore-could-be-next-to-quash-free-
expression/ [https://perma.cc/9QTJ-Y8R5] (noting a regional trend in Asia to ban falsehoods, including in
China, Cambodia, Malaysia, the Philippines, and Vietnam); David Kaye (Special Rapporteur on the Promotion
and Protection of the Right to Freedom of Opinion and Expression), Freedom of Expression and Elections in
the Digital Age, 8–9, U.N. Research Paper 1/2019 (June 2019) [hereinafter Elections in the Digital Age],
https://www.ohchr.org/Documents/Issues/Opinion/Elections \ReportDigitalAge.pdf [https://perma.cc/M6EG-
2CHY] (expressing concern “with recent legislative and regulatory initiatives to restrict ‘fake news’ and
disinformation” throughout the world, including in Italy, Malaysia, and France).
303
Funk, supra note 302. Nigeria is one of the countries most recently to consider adopting a “fake news” ban.
Danielle Paquette, Nigeria’s ‘Fake News’ Bill Could Jail People for Lying on Social Media. Critics Call it
Censorship., WASH. POST (Nov. 25, 2019, 8:46 AM), https://www.washingtonpost.com/world/africa/nigerias-
fake-news-bill-could-jail-peoplefor-lying-on-social-media-critics-call-it-censorship/2019/11/25/ccf33c54-0f81-
11ea-a533-90a7becf7713_story.html [https://perma.cc/5CVE-UKB2] (reporting that Nigeria’s draft bill would
criminalize speech that could “threaten national security, sway elections or ‘diminish public confidence’ in the
government” and Internet access could be disrupted for violators).
304
See Michael Karanicolas, Canada’s Fake News Laws Face a Charter Challenge. That’s a Good Thing, OTTAWA
CITIZEN (Oct. 14, 2019), https://ottawacitizen.com/opinion/columnists/karanicolas-canadas-fake-news-laws-
face-a-charter-challenge-thats-a-goodthing [https://perma.cc/6BDK-T2LD](discussing a variety of overbroad
provisions in Canada’s new law banning falsehoods during campaigns, which “gives the government a potential
weapon to wield against its critics” and “can exert a chilling effect against legitimate speech, particularly when a
prison term is attached to the offence.”).
305
See, e.g., Elections in the Digital Age, supra note 302; Funk, supra note 302; Jordan: Fake News Amendments Need
Revision, HUMAN RIGHTS WATCH (Feb. 21, 2019, 12:00 AM), https://www.hrw.org/news/2019/02/21/jordan-
fake-news-amendments-need-revision [https://perma.cc/2SW5-AGMR]; Philippines: Reject Sweeping ‘Fake
News’ Bill, HUMAN RIGHTS WATCH (July 25, 2019, 8:00 PM),
https://www.hrw.org/news/2019/07/25/philippines-rejectsweeping-fake-news-bill [https://perma.cc/2D5W-
VZQG]; Singapore: Reject Sweeping ‘Fake News’ Bill, HUMAN RIGHTS WATCH (Apr. 3, 2019, 9:00 AM),
https://www.hrw.org/news/2019/04/03/singapore-reject-sweeping-fake-news-bill [https://perma.cc/6ZAEUAN2].

Should governments and platforms seek to ban online falsehoods? Does exercising corporate responsibility mean
social media platforms should assist or resist the implementation of laws that outlaw the dissemination of online
falsehoods? Regardless of national laws on this topic, what does corporate responsibility mean in the context of
global platforms’ own policies on disinformation and misinformation? 306

In light of this complexity, no single—or simple—tactic is sufficient to address the variety of challenges posed by the multi-headed phenomenon of “fake news.” Levi appeals to courts, legislators, and government actors at every level to back up an ostensible commitment to free speech with an equally robust commitment to a free press. She also calls on the press to revise its practices in response. 307

Cambridge Analytica: In 2018, The New York Times and The Guardian reported that the voter profiling firm
Cambridge Analytica had harvested information from the Facebook profiles of more than 50 million (later revealed
to be 87 million) people, without their knowledge or permission, in order to target political ads to benefit Donald
Trump’s 2016 presidential campaign. 308 Cambridge Analytica had acquired this data by exploiting the sieve-like
nature of Facebook’s privacy policy. These revelations raised new questions about how carefully Facebook protects
user data and privacy: despite having learned about this data harvesting by 2015, the company did not alert users
and took only cursory actions to protect the data from further misuse. After the Cambridge Analytica story was
reported, Congress summoned Mark Zuckerberg to testify about Facebook’s practices 309 and a concerted effort
arose among Facebook users to delete their profiles 310.

Military and ICE Contracts not Revealed to the Employees: In 2018, the technology website Gizmodo
revealed that Google was working with the U.S. Department of Defense to develop artificial intelligence software
that could analyze drone footage. This effort, known as Project Maven, was part of a $7.4 billion investment in AI
by the DoD in 2017 311 and represented an opportunity for Google to gain billions of dollars in future defense
contracts. 312 The project set off intense debate within Google, as many engineers expressed concern about
facilitating drone strikes and began organizing to assert that “Google should not be in the business of war.” 313 Project Maven, along with similar stories about tech companies partnering with the government, such as reports that Palantir was developing software for Immigration and Customs Enforcement (ICE) to facilitate deportation, 314

306
Aswad, Evelyn, In a World of 'Fake News,' What's a Social Media Company to Do? (December 9, 2019). Utah
Law Review, Vol. 2020, No. 4, 2020, Available at SSRN: https://ssrn.com/abstract=3640640 (Aswad, Evelyn,
In a World of 'Fake News,' What's a Social Media Company to Do? (December 9, 2019). Utah Law Review,
Vol. 2020, No. 4, 2020, Available at SSRN: https://ssrn.com/abstract=3640640)
307
Levi, Lili, Real 'Fake News' and Fake 'Fake News' (January 5, 2018). 16 First Amend. L. Rev., Forthcoming,
University of Miami Legal Studies Research Paper No. 18-1, Available at SSRN:
https://ssrn.com/abstract=3097200
308
Cadwalladr, Carole, and Emma Graham-Harrison. 2018. Revealed: 50 million Facebook profiles
harvested for Cambridge Analytica in major data breach. The Guardian.
https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebookinfluence-us-election.
309
Cecilia Kang, Tiffany Hsu, Kevin Roose, Natasha Singer, and Matthew Rosenberg. 2018. Mark Zuckerberg
Testimony:
Day 2 Brings Tougher Questioning. The New York Times (April 2018).
https://www.nytimes.com/2018/04/11/us/politics/zuckerberg-facebook-cambridge-analytica.html
310
Hsu, Tiffany. 2018. For Many Facebook Users, a ‘Last Straw’ That Led Them to Quit. The New
York Times. https://www.nytimes.com/2018/03/21/technology/users-abandonfacebook.html.
311
Cameron, Dell, and Kate Conger. 2018. Google Is Helping the Pentagon Build AI for Drones.
Gizmodo. https://gizmodo.com/google-is-helping-the-pentagon-build-ai-for-drones1823464533 and Conger,
Kate, and Cade Metz. 2018. Tech Workers Now Want to Know: What Are We Building This For? The New
York Times. https://www.nytimes.com/2018/10/07/technology/techworkers-ask-censorship-surveillance.html
312
Tiku, Nitasha. 2019. Three Years of Misery Inside Google, the Happiest Company in Tech. Wired.
https://www.wired.com/story/inside-google-three-years-misery-happiest-company-tech/.
313
Shane, Scott, and Daisuke Wakabayashi. 2018. ‘The Business of War’: Google Employees Protest Work for
the Pentagon. The New York Times.https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-
pentagonproject.html

prompted new organizing among tech workers and computer science students in opposition to tech industry
contracts with U.S. defense and intelligence agencies, often centered around the slogans #TechWontBuildIt and
#NoTechForICE. 315

Algorithmic Bias: In 2016, ProPublica revealed that an algorithm used in criminal courts was biased against Black
defendants, mislabeling them as future criminals at twice the rates of white defendants 316. The dispute about
whether the algorithm in question was, in fact, biased 317 fueled interest in the emerging field of “algorithmic fairness.” Through books such as Cathy O’Neil’s Weapons of Math Destruction, 318 Virginia Eubanks’ Automating Inequality, 319 and Safiya Noble’s Algorithms of Oppression, 320 as well as articles demonstrating the biases in algorithms used in contexts ranging from facial recognition 321 to hiring, 322 the public began to recognize algorithms as both fallible and discriminatory—potentially the source of more harm than good.
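The ProPublica dispute turned on a concrete, computable quantity: the false positive rate, i.e., the share of defendants who did not reoffend but were nonetheless labeled high risk, compared across racial groups. A minimal audit sketch (all records below are invented for illustration):

```python
# Minimal fairness audit: compare false positive rates across groups,
# the metric at the heart of the ProPublica/COMPAS dispute.
# All data below are invented for illustration.

def false_positive_rate(records):
    """FPR = labeled high-risk but did not reoffend / all who did not reoffend."""
    negatives = [r for r in records if not r["reoffended"]]
    false_pos = [r for r in negatives if r["predicted_high_risk"]]
    return len(false_pos) / len(negatives)

records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "A", "predicted_high_risk": True,  "reoffended": True},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": True},
]

for g in ("A", "B"):
    rate = false_positive_rate([r for r in records if r["group"] == g])
    print(f"group {g}: FPR = {rate:.2f}")
# group A: FPR = 0.50, group B: FPR = 0.00 -- a disparity worth scrutiny
```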

Many in the tech sector and academia diagnosed these ills as the result of an inattention to AI ethics, a lack of
training in ethical reasoning for engineers and a dearth of ethical principles in engineering practice, which in turn
led to the development of unethical technologies. In response, academics, technologists, companies, governments,
and more have embraced a broad set of goals often characterized with the label “tech ethics” or “AI ethics”: the
introduction of considerations around ethics and social responsibility into digital technology education, research,
development, use, and governance. In the span of just a few years, AI ethics has become a dominant discourse
in technology companies, academia, civil society organizations, and governments. For those committed
to combating an array of injustices connected to digital technologies, the rise of tech ethics has produced a range
of responses: on the one hand, excitement that technologists are increasingly considering their social
responsibilities and impacts; on the other hand, frustration regarding the limited scope and impacts of what tech
ethics discourse and practice has thus far entailed. 323

314
Woodman, Spencer. 2017. Palantir Provides the Engine for Donald Trump’s Deportation Machine. The
Intercept. https://theintercept.com/2017/03/02/palantir-provides-the-engine-fordonald-trumps-deportation-
machine/
315
Mijente. 2019. 1,200+ Students at 17 Universities Launch Campaign Targeting Palantir.
https://notechforice.com/20190916-2/.; Goldberg, Emma. 2020. ‘Techlash’ Hits College Campuses. The New
York Times.
https://www.nytimes.com/2020/01/11/style/college-tech-recruiting.html; Glaser, April, and Will Oremus. 2018.
“A Collective Aghastness”: Why Silicon Valley workers are demanding their employers stop doing business
with the Trump administration. Slate. https://slate.com/technology/2018/06/the-tech-workers-coalition-explains-
how-siliconvalley-employees-are-forcing-companies-to-stop-doing-business-with-trump.html. Conger, Kate, and
Cade Metz. 2018. Tech Workers Now Want to Know: What Are We Building This For? The New York Times.
https://www.nytimes.com/2018/10/07/technology/techworkers-ask-censorship-surveillance.html
316
Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. ProPublica.
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminalsentencing.
317
Dieterich, William, Christina Mendoza, and Tim Brennan. 2016. COMPAS Risk Scales: Demonstrating
Accuracy Equity and Predictive Parity. Northpoint Inc. Research Department.
http://go.volarisgroup.com/rs/430-MBX989/images/ProPublica_Commentary_Final_070616.pdf and
Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. 2016. "Inherent trade-offs in the fair
determination of risk scores." arXiv preprint arXiv:1609.05807.
318
https://weaponsofmathdestructionbook.com/
319
https://virginia-eubanks.com/books/
320
https://nyupress.org/9781479837243/algorithms-of-oppression/
321
Buolamwini, Joy, and Timnit Gebru. 2018. "Gender Shades: Intersectional Accuracy Disparities in Commercial
Gender Classification." Proceedings of the 1st Conference on Fairness, Accountability and Transparency,
Proceedings of Machine Learning Research.
322
Dastin, Jeffrey. 2018. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
https://www.reuters.com/article/us-amazon-com-jobs-automationinsight/amazon-scraps-secret-ai-recruiting-
tool-that-showed-bias-against-womenidUSKCN1MK08G.
323
Green, Ben, The Contestation of Tech Ethics: A Sociotechnical Approach to Ethics and Technology in Action
(June 3, 2021). Available at SSRN: https://ssrn.com/abstract=3859358

As evidence of AI ethics’ limitations has grown, many have critiqued AI ethics as a strategic effort among technology
companies to quell public scrutiny rather than as a sincere effort to take responsibility for technology’s social
impacts. This strategy has been labeled “ethics-washing” (i.e., “ethical white-washing”): using the language of
ethics to paint a superficial portrait of ethical behavior in order to avoid heightened public backlash and the
introduction of regulations that would require substantive concessions. 324

Some view AI ethics discourse as a convenient way for tech companies to defuse criticism and regulation by
creating structures for self-governance without any commitment to meaningfully altering their behavior. As an
ethnography of ethics in Silicon Valley found, “It is a routine experience at ‘ethics’ events and workshops in Silicon
Valley to hear ethics framed as a form of self-regulation necessary to stave off increased governmental regulation”.
325
Recognizing this strategy casts important “flaws” of tech ethics as features rather than bugs: by focusing public
attention on the actions of individual engineers and particular technical limitations (such as algorithmic bias),
companies perform a sleight-of-hand that shifts structural questions about power and profit out of view.

Thomas Metzinger, a philosopher who served on the European Commission’s High-Level Expert Group (HLEG) on
Artificial Intelligence to develop AI ethics guidelines 326, provides a particularly striking account of ethics-washing
in action. 327 The HLEG on AI contained only four ethicists out of 52 total people and was dominated by
representatives from industry. Metzinger’s own work to develop “Red Lines” that AI applications should not cross
was “watered down” by industry representatives eager for a “positive vision” for AI. All told, Metzinger describes
the HLEG’s guidelines as “lukewarm, short-sighted and deliberately vague” and concludes that the tech industry is
“using ethics debates as elegant public decorations for a large-scale investment strategy”. 328 The AI HLEG expert group of the European Commission 329 has identified four ethical principles: (i) respect for human autonomy, (ii) prevention of harm, (iii) fairness, and (iv) explicability. In addition to these ethical principles, and probably also to enforce them (especially the second), the same group introduced seven requirements that should be met in the development of AI applications, which have been detailed in the “Assessment List for Trustworthy Artificial Intelligence (ALTAI)”: 330 1. human agency and oversight; 2. technical robustness and safety; 3. privacy and data governance; 4. transparency; 5. diversity, non-discrimination, and fairness; 6. societal and environmental well-being; 7. accountability.
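Organizations that operationalize ALTAI sometimes encode the requirements as a machine-readable checklist gating deployment. A hypothetical sketch of such a gate (the requirement names follow the ALTAI list above; the assessment record and gating logic are invented for illustration):

```python
# Hypothetical pre-deployment gate based on the seven ALTAI requirements.
# The requirement names come from the EU HLEG; the pass/fail record
# and gating logic are invented for illustration.
ALTAI_REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

def deployment_approved(assessment: dict) -> bool:
    """Approve only if every ALTAI requirement was assessed and passed."""
    return all(assessment.get(req, False) for req in ALTAI_REQUIREMENTS)

review = {req: True for req in ALTAI_REQUIREMENTS}
review["transparency"] = False  # e.g., no user-facing disclosure yet
print(deployment_approved(review))  # False -> block release
```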

Responsible Development of AI and Public Perception

324
Metzinger, Thomas. 2019. Ethics washing made in Europe. Der
Tagesspiegel.https://www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-
ineurope/24195496.html.; See also Nemitz, Paul. 2018. "Constitutional democracy and technology in the age
of artificial intelligence." Philosophical Transactions of the Royal Society A: Mathematical, Physical and
Engineering Sciences 376 (2133). doi: 10.1098/rsta.2018.0089; See also Wagner, Ben. 2018. "Ethics as
Escape From Regulation: From Ethics-Washing to EthicsShopping?" In Being Profiling. Cogitas Ergo Sum,
edited by Emre Bayamlioglu, Irina Baraliuc, Liisa Albertha Wilhelmina Janssens and Mireille Hildebrandt.
Amsterdam University Press
325
Metcalf, Jacob, Emanuel Moss, and danah boyd. 2019. "Owning Ethics: Corporate Logics, Silicon Valley, and
the Institutionalization of Ethics." Social Research 86 (2):449-476.
326
European Commission High-Level Expert Group on Artificial Intelligence. 2019. Ethics Guidelines for
Trustworthy AI. https://ec.europa.eu/futurium/en/ai-alliance-consultation.
327
Metzinger, Thomas. 2019. Ethics washing made in Europe. Der Tagesspiegel.
https://www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-ineurope/24195496.html.
328
Id.
329
AI HLEG, Ethics Guidelines for Trustworthy AI, Guidelines at 14 (Apr. 8, 2019), available at
https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf
330
AI HLEG, Ethics Guidelines for Trustworthy AI, Guidelines (Apr. 8, 2019), available at
https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419.

AI is the focus of a growing range of public concerns as well as optimism. 331 Many stakeholders, including in
industry, recognize the importance of public trust in the safety and benefits offered by AI if it is to be deployed
successfully. 332

A narrative centered on the responsible development of AI is likely to inspire greater public confidence than a narrative focused more on technological dominance or leadership. Other powerful new technologies, such as CRISPR, genetically modified organisms, 333 and nuclear power, 334 have in the past proven controversial, with significant communities arguing for a cautious, safety-first approach, to which the rhetoric of a race is antithetical.

Employees and Public Pressure Check the Tech Industry’s Ethical Limitations and Drive AI Ethics

Project Maven

Project Maven, the fast-moving Pentagon project also known as the Algorithmic Warfare Cross-Functional Team
(AWCFT), is, as mentioned above, a military pilot program that aims to speed up analysis of drone footage by
automatically classifying images of objects and people. This specific project is a pilot with the Department of
Defense to provide technology that can assist in object recognition on unclassified data. The technology flags
images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid
concerns. As will be discussed below, Google failed to inform its employees what they were building, which led to
protests when they discovered the truth. The fallout led other tech companies and the military to actively
discuss this important topic and to develop policies and safeguards around the development and use of their
machine learning technologies. 335

Project Maven was tasked with using machine learning to identify vehicles and other objects in drone footage,
taking that burden off analysts. Maven’s initial goal was to provide the military with advanced computer vision,
enabling the automated detection and identification of objects in as many as 38 categories captured by a drone’s
full-motion camera.
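Maven’s publicly described workflow, detecting objects in each frame and routing them to a human analyst rather than acting on them, can be sketched in a few lines (the detector below is a placeholder stub; Maven’s actual models, categories, and thresholds are not public):

```python
# Sketch of a detect-then-flag-for-human-review pipeline, as publicly
# described for Project Maven. The detector below is a placeholder stub;
# the real models, categories, and thresholds are not public.
from dataclasses import dataclass

@dataclass
class Detection:
    category: str      # one of up to 38 object categories, per reports
    confidence: float  # model confidence in [0, 1]

def detect_objects(frame) -> list[Detection]:
    """Placeholder for a real computer-vision model."""
    return [Detection("vehicle", 0.91), Detection("building", 0.42)]

REVIEW_THRESHOLD = 0.5  # invented value

def triage(frame):
    """Flag confident detections for a human analyst; never act autonomously."""
    flagged = [d for d in detect_objects(frame)
               if d.confidence >= REVIEW_THRESHOLD]
    return {"frame": frame, "for_human_review": flagged}

print(triage("drone_frame_0001.jpg"))
```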

Project Maven, as envisioned, was about building a tool that could process drone footage quickly and in a useful
way. The Defense Department specifically tied this task to the Defeat-ISIS campaign. Drones are intelligence,
surveillance, and reconnaissance platforms first and foremost. The unblinking eyes of Reapers, 336 Global Hawks, 337 and Gray Eagles 338 record hours and hours of footage every mission, imagery that takes a long time for human

331
Fast, E., & Horvitz, E. 2017. Long-Term Trends in the Public Perception of Artificial Intelligence. In Proceedings of the
Thirty-First AAAI Conference on Artificial Intelligence 963-969. Menlo Park, Calif.: International Joint Conferences on
Artificial Intelligence, Inc. https://ojs.aaai.org/index.php/AAAI/article/view/10635 (“Analyses of text corpora over time can
reveal trends in beliefs, interest, and sentiment about a topic. We focus on views expressed about artificial intelligence (AI)
in the New York Times over a 30-year period. General interest, awareness, and discussion about AI has waxed and waned
since the field was founded in 1956. We present a set of measures that captures levels of engagement, measures of
pessimism and optimism, the prevalence of specific hopes and concerns, and topics that are linked to discussions about AI
over decades. We find that discussion of AI has increased sharply since 2009, and that these discussions have been
consistently more optimistic than pessimistic. However, when we examine specific concerns, we find that worries of loss of
control of AI, ethical concerns for AI, and the negative impact of AI on work have grown in recent years. We also find that
hopes for AI in healthcare and education have increased over time.”)
332
Banavar, G. (2016) Learning to trust artificial intelligence systems: Accountability, compliance, and ethics in the age of
smart machines. Armonk, NY: IBM Research
333
Aaron M. Shew, L. Lanier Nalley, Heather A. Snell, Rodolfo M. Nayga, Bruce L. Dixon, CRISPR versus GMOs: Public
acceptance and valuation, Global Food Security,Volume 19, 2018,Pages 71-80, https://doi.org/10.1016/j.gfs.2018.10.005.
334
Xiang, H., & Zhu, Y. (2011). The Ethics Issues of Nuclear Energy: Hard Lessons Learned from Chernobyl and
Fukushima. Online Journal of Health Ethics, 7(2). http://dx.doi.org/10.18785/ojhe.0702.06
335
Dell Cameron and Kate Conger, Google Is Helping the Pentagon Build AI for Drones, Gizmodo, (March 6,
2018) https://gizmodo.com/google-is-helping-the-pentagon-build-ai-for-drones-1823464533
336
https://en.wikipedia.org/wiki/General_Atomics_MQ-9_Reaper
337
https://en.wikipedia.org/wiki/Northrop_Grumman_RQ-4_Global_Hawk
338
https://en.wikipedia.org/wiki/General_Atomics_MQ-1C_Gray_Eagle

analysts to scan for salient details. While human analysts process footage, the ground situation is likely changing,
so even the most labor-intensive approach to analyzing drone video delivers delayed results. 339

So, the Defense Department partnered with AI experts in the tech industry and academia, working through the Defense Innovation Unit Experimental, the department’s tech incubation program, and the Defense Innovation Board, an
advisory group created by former Secretary of Defense Ash Carter to bridge the technological gap between the
Pentagon and Silicon Valley. 340

Eric Schmidt, former executive chairman of Google parent company Alphabet, chaired the Defense Innovation
Board. During a July 2017 meeting, Schmidt and other members of the Defense Innovation Board discussed the
Department of Defense’s need to create a clearinghouse for training data that could be used to enhance the
military’s AI capability. Board members played an “advisory role” on Project Maven, according to meeting minutes,
while “some members of the Board’s teams are part of the executive steering group that is able to provide rapid
input” on Project Maven. 341

In July 2017, Marine Corps Col. Drew Cukor, the chief of the Algorithmic Warfare Cross-Function Team, presented
on artificial intelligence and Project Maven at a defense conference. Cukor noted, “AI will not be selecting a target
[in combat] … any time soon. What AI will do is complement the human operator.” 342

By summer 2017, the Maven team set out to locate commercial partners whose expertise was needed to make its
AI dreams a reality. At the Defense One Tech Summit in Washington, Maven chief Marine Corps Col. Drew Cukor
said a symbiotic relationship between humans and computers was crucial to help weapon systems detect objects.
Speaking to a crowd of military and industry technology experts, many from Silicon Valley, Cukor professed the US
to be in the midst of an AI arms race. “Many of you will have noted that Eric Schmidt is calling Google an AI company
now, not a data company,” he said, although Cukor did not specifically cite Google as a Maven partner. 343

When the Gizmodo article revealed that Google was offering its resources to the US Department of Defense for
Project Maven, more than 3,100 Google employees signed a letter urging Google CEO Sundar Pichai to reevaluate
the company’s involvement.344

Some employees resigned from Google over the drone technology partnership between Google and the U.S.
military, because they did not believe that Google was listening to or addressing their concerns. 345

339
Kelsey Atherton, Targeting the future of the DoD’s controversial Project Maven initiative, C4ISRNET, (July 27,
2018) https://www.c4isrnet.com/it-networks/2018/07/27/targeting-the-future-of-the-dods-controversial-
project-maven-initiative/
340
Dell Cameron and Kate Conger, Google Is Helping the Pentagon Build AI for Drones, Gizmodo, (March 6,
2018) https://gizmodo.com/google-is-helping-the-pentagon-build-ai-for-drones-1823464533
341
Id.
342
Id.
343
Id.
344
Dani Deahl, Google employees demand the company pull out of Pentagon AI project, The Verge, (April 4,
2018) https://www.theverge.com/2018/4/4/17199818/google-pentagon-project-maven-pull-out-letter-ceo-
sundar-pichai
345
See, e.g., Janet Burns, Google Employees Resign Over Company’s Pentagon Contract, Ethical Habits,
FORBES (May 14, 2018 12:46 PM), https://www.forbes.com/sites/janetwburns/2018/05/14/google-
employees-resign-over-firms-pentagon-contract-ethicalhabits/#7dd6f2a54169 [https://perma.cc/E2GF-
K9CC]. The “mass resignations . . . speak to
the strongly felt ethical concerns of the employees who are departing.” Kate Conger, Google Employees
Resign in Protest Against Pentagon Contract, GIZMODO (May 14, 2018, 6:00 AM),
https://gizmodo.com/google-employees-resign-in-protest-against-pentagon-con-1825729300
[https://perma.cc/6Y5D-TJHU] [hereinafter Conger, Google Employees Resign] (citing reasons why
employees resigned from Google over Project Maven, including being
at odds with what they understood the company to stand for and feeling as though their concerns were
unaddressed by management, to name a few). Over ninety academics in artificial intelligence, ethics, and
computer science also released an open letter to urge Google to end its work on Project Maven. The

Fei-Fei Li, the chief scientist of Google’s cloud-computing division until the end of 2018, is one of the foremost
experts in the field of artificial intelligence.346 Li is the inventor of ImageNet and the ImageNet Challenge, a critical
large-scale dataset and benchmarking effort that has contributed to the latest developments in deep learning and
AI. 347

During her two-year tenure at Google, Li worked on creating applications that Google could use for businesses that
purchased its cloud services, and touted “democratizing AI” by allowing more software developers and academic
researchers access to the advanced artificial intelligence tools that had been developed. 348 She now advocates for
this in her position as Co-Director of the Stanford Center for Human- Centered Artificial Intelligence (HAI). 349

When the Pentagon wanted to enter into a cloud contract to use Google’s artificial intelligence-powered image
recognition software for Project Maven, Li supported the contract but cautioned colleagues to avoid discussing the
artificial intelligence part of the deal because she feared that the public would be concerned about “weaponized”
artificial intelligence. 350 When employees did find out, 3,000 of them signed a petition protesting the initiative. About a dozen employees resigned. 351 According to one former Google employee, “[t]here’s a division

letter reads in part: If ethical action on the part of tech companies requires consideration of who might
benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves
more sober reflection—no technology has higher stakes—than algorithms meant to target and kill at a
distance and without public accountability. Id. The letter continued, “While Google regularly decides the
future of technology without democratic public engagement, its entry into military technologies casts the
problems of private control of information infrastructure into high relief.” Id.
346
Luke Stangel, Google AI Executive at the Center of Project Maven Is Quitting, SILICON VALLEY BUS. J.
(Sept. 11, 2018), https://www.bizjournals.com/sanjose/news/2018/09/11/google-ai-fei-fei-li-andrew-moore-
goog-maven.html [https://perma.cc/YXX5-Z9XK].
347
https://learning.acm.org/techtalks/ImageNet
348
Bloomberg News, Google’s AI Cloud Star Leaves After Pentagon Deal Protests, BLOOMBERG (Sept. 10, 2018,
5:49 PM), https://www.bloomberg.com/news/articles/2018-09-10/google-s-ai-cloud-star-leaves-after-
pentagon-deal-protests [https://perma.cc/7P5MX5LT].
349
https://profiles.stanford.edu/fei-fei-li
350
In an email, Professor Li encouraged the project to be kept under wraps and suggested that press releases
on the project not be focused on artificial intelligence. Kate Conger, Google Plans Not to Renew Its Contract
for Project Maven, a Controversial Pentagon Drone AI Imaging Program, GIZMODO (June 1, 2018, 2:38 PM),
https://gizmodo.com/google-plans-not-to-renew-its-contract-for-project-mave-1826488620.
[https://perma.cc/4RXA-PJB5]. Li wrote, “I think we should do a good PR story on the story of [Department
of Defense] collaborating with [Google Cloud Platform] from a vanilla cloud technology angle (storage,
network, security, etc.), but avoid at ALL COSTS any mention or implication of [artificial intelligence].” Id.
351
Joshua Brustein, How One AI Startup Decided to Embrace Military Work, Despite Controversy, BLOOMBERG
(Dec. 6, 2018, 8:00 AM), https://www.bloomberg.com/news/articles/2018-12-06/how-one-ai-startup-
decided-to-embrace-military-work-despite-controversy [https://perma.cc/WHM8-2SKE]. See, e.g., Janet
Burns, Google Employees Resign Over Company’s Pentagon Contract, Ethical Habits, FORBES (May 14, 2018
12:46 PM), https://www.forbes.com/sites/
janetwburns/2018/05/14/google-employees-resign-over-firms-pentagon-contract-
ethicalhabits/#7dd6f2a54169 [https://perma.cc/E2GF-K9CC]. The “mass resignations . . . speak to the
strongly felt ethical concerns of the employees who are departing.” Kate Conger, Google Employees Resign in
Protest Against Pentagon Contract, GIZMODO (May 14, 2018, 6:00 AM), https://gizmodo.com/google-
employees-resign-in-protest-against-pentagon-con-1825729300 [https://perma.cc/6Y5D-TJHU] [hereinafter
Conger, Google Employees Resign] (citing reasons why employees resigned from Google over Project Maven,
including being at odds with what they understood the company to stand for and feeling as though their
concerns were unaddressed by management, to name a few). Over ninety academics in artificial intelligence,
ethics, and computer science also released an open letter to urge Google to end its work on Project Maven.
The letter reads in part: If ethical action on the part of tech companies requires consideration of who might
benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves
more sober reflection—no technology has higher stakes—than algorithms meant to target and kill at a
distance and without public accountability. Id. The letter continued, “While Google regularly decides the

between those who answer to shareholders, who want to get access to Defense Department contracts worth multimillions of dollars, and the rank and file who have to build the things and who feel morally complicit for things they don’t agree with.” 352

Google did take action to address its employees’ concerns. 353 Ultimately, Google did not renew its contract with the U.S. Department of Defense. 354 After Google said it would not renew its contract with the Pentagon, it put forth a list of ethical principles governing its use of artificial intelligence. 355 These principles stated that Google would utilize artificial intelligence “only in ‘socially beneficial’ ways that would not cause harm and promised to develop its capabilities in accordance with human rights law.” 356

However, Google Senior Vice President Kent Walker stated that Google’s withdrawal from the military’s Project Maven was a one-time “reset” that will not hinder growing cooperation on a wide range of other projects with the DoD. 357 Nonetheless, the case shows a growing trend of employee regulation of the tech sector. Project Maven shows the influence that not only the big tech chiefs 358 but also their employees have over shaping AI ethics. 359

Jennifer S. Fan, 360 Assistant Professor of Law at the University of Washington School of Law and Director of its Entrepreneurial Law Clinic, documents the mounting public concern over the influence that high technology companies have in our society. In the past, these companies were lauded for their innovations, but now, as one scandal after another has tarnished them, from being a conduit for influencing elections to the development of
future of technology without democratic public engagement, its entry into military technologies casts the
problems of private control of information infrastructure into high relief.”
352
See, e.g., Zachary Fryer-Biggs, Inside the Pentagon’s Plan to Win over Silicon Valley’s AI Experts, WIRED
(Dec. 21, 2018, 7:26 AM), https://www.wired.com/story/insidethe-pentagons-plan-to-win-over-silicon-
valleys-ai-experts/ [https://perma.cc/QRH9-84WE] (discussing Google’s withdrawal from Pentagon program
to utilize artificial intelligence software in warfare); Jacob Silverman, Tech’s Military Dilemma: Silicon Valley’s
Emerging Role in America’s Forever War, NEW REPUBLIC (Aug. 7, 2018),
https://newrepublic.com/article/148870/techs-military-dilemma-silicon-valley [https://perma.cc/GX8M-UJR8]
(discussing military contracting—including drones, imagery data analytics, and artificial intelligence—among
leading technology companies).
353
Some observers even went so far as to claim that employees are more powerful than management when
they act collectively as Google employees did when they participated in a walkout. See Geoffrey James, Why
the Google Walkout Terrifies the Tech Moguls, INC. (Nov. 8, 2018), https://www.inc.com/geoffreyjames/why-
google-walkout-terrifies-tech-moguls.html [https://perma.cc/V3R8-D6RP].
354
Sam Harnett, Google Employees Quit in Protest Over Military Artificial Intelligence Program, KQED NEWS
(May 17, 2018), https://www.kqed.org/news/11668872/googleemployees-quit-in-protest-over-military-
artificial-intelligence-program [https://perma.cc/P3MM-7C9Y] [hereinafter Harnett, Google Employees Quit].
Sam Harnett, In a Direct Challenge to Their Employers, Tech Workers Begin to Organize, KQED NEWS (July
6,2018), https://www.kqed.org/news/11679302/in-a-direct-challenge-to-their-employerstech-workers-begin-
to-organize [https://perma.cc/72CB-5XV3].
355
Kate Conger & Daisuke Wakabayashi, Google Employees Protest Secret Work on Censored Search Engine
for China, N.Y. TIMES (Aug. 16, 2018), https://www.nytimes.com/ 2018/08/16/technology/google-
employees-protest-search-censored-china.html [https://perma.cc/7AGX-UW6Z] [hereinafter Conger &
Wakabayashi, Google Employees Protest] (discussing Google’s promulgation of internal AI Principles
following termination of Project Maven).
356
Artificial Intelligence at Google: Our Principles, GOOGLE,https://ai.google/principles/
[https://perma.cc/U22Z-T7CN]
357
SYDNEY J. FREEDBERG JR, Google To Pentagon: ‘We’re Eager To Do More’, Breaking Defense, (November
05, 2019) https://breakingdefense.com/2019/11/google-pentagon-pledge-to-work-together-were-eager-to-
do-more/
358
Jennifer S. Fan, Woke Capital: The Role of Corporations in Social Movements, 9 Harv. Bus. L. Rev. 441-94
(2019). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3556120
359
Fan, Jennifer S. (2020) "Employees as Regulators: The New Private Ordering in High Technology
Companies,” Utah Law Review: Vol. 2019, No. 5, Article 2. Available at:
https://dc.law.utah.edu/ulr/vol2019/iss5/2
360
https://www.law.uw.edu/directory/faculty/fan-jennifer-s

weaponized artificial intelligence, to their own moment of reckoning with the #MeToo movement, these same
companies are under scrutiny.

High technology companies were largely unfettered by regulators, with the exception of the Securities and
Exchange Commission’s oversight of public companies. But employees at high technology companies are speaking
out in protest about their respective employers’ actions, changing private regulation as we know it. In essence,
employees are holding companies accountable for the choices they make, whether it is what area to work (or not
work) in or eliminating a practice that has systemic implications, such as mandatory arbitration provisions for sexual
misconduct cases. Fan analyzes how employees in high technology companies have redefined the contours of
private ordering and, in the process, have also demonstrated what collective action looks like. Because these
workers are in high demand and short supply, they are able to affect private regulation in a way that we have not
seen before. As a result, they have the potential to be an important check on the high technology sector. 361

The military AI ethics controversy was not limited to public companies. Clarifai, Inc., a private company focused
on artificial intelligence and machine learning, faced criticism from its employees about taking on work with the
military. As a result, it created a subsidiary, Neural Net One, after the Department of Defense hired it to work on
Project Maven. It was a controversial decision among employees. “Four former employees said Zeiler’s [the CEO’s]
lack of candor about the project damaged morale, complicated recruitment, and undermined trust within the
company.” At least two employees left Clarifai due to concerns about the company’s focus on military work.
Although startups can ill afford to lose employees, especially ones with highly sought-after technical expertise, it
appears that the financial rewards outweighed the ethical concerns raised through employee-initiated private ordering. 362

Alphabet acknowledged in the “Risk Factors” section of its Form 10-K for the 2018 fiscal year, which ended on December
31, 2018, that the implementation of artificial intelligence software in many of its products could bring “ethical,
technological, legal, and other challenges . . . .”363

Likewise, Microsoft, in its Form 10-K for the fiscal year, which ended on June 30, 2019, had cautionary language
regarding its use of artificial intelligence in its business offerings: “If we enable or offer [artificial intelligence]
solutions that are controversial because of their impact on human rights, privacy, employment, or other social
issues, we may experience brand or reputational harm.” 364

There may be a correlation between this language being placed in “Risk Factors” and the backlash from employees
that Alphabet and Microsoft witnessed in 2018 due to their interactions with the Department of Defense 365 and
Immigration and Customs Enforcement (“ICE”). 366

Over time, it may become commonplace for other high technology companies to make similar disclosures.

361
Jennifer S. Fan, Employees as Regulators: The New Private Ordering in High Technology Companies, 2019
Utah L. Rev. 973-1026.
362
Fan, Jennifer S., Employees as Regulators: The New Private Ordering in High Technology Companies
(December 1, 2019). Utah Law Review, Vol. 2019, No. 5, Pp. 973-1076, 2019, Available at SSRN:
https://ssrn.com/abstract=3520230 (citations omitted)
363
Alphabet Inc., Annual Report (Form 10-K) at 7 (Feb. 5, 2019) [hereinafter Alphabet Form 10-K]. Pursuant to
Section 13 or 15(d) of the Securities Exchange Act of 1934, public companies are required to provide an
annual report on Form 10-K within a specified time period after the end of the fiscal year covered by the
report which provides a comprehensive overview of the company’s business and financial condition; it also
includes audited financial statements. Will Kenton, 10-K, INVESTOPEDIA (June 1, 2019),
https://www.investopedia.com/terms/1/10-k.asp [https://perma.cc/8XMB-7CMY]; see also U.S. SEC. &
EXCH. COMM’N, FORM 10-K: GENERAL INSTRUCTIONS (2019), https://www.sec.gov/about/forms/form10-
k.pdf [https://perma.cc/4FL4-CNQ4].
364
Microsoft Corp., Annual Report (Form 10-K) 22 (Aug. 1, 2019).
365
See Harnett, Google Employees Quit, supra.
366
Tom Warren, Microsoft CEO Plays Down ICE Contract in Internal Memo to Employees, VERGE (June 20,
2018, 4:49 AM), https://www.theverge.com/2018/6/20/17482500/microsoft-ceo-satya-nadella-ice-contract-
memo [https://perma.cc/4Y2M-BPBV]

Amazon and Rekognition

Amazon provides a case study that contrasts with Google’s. When Amazon decided to sell its facial recognition
software, Rekognition, to law enforcement, 367 over 450 Amazon employees signed an open letter to CEO Jeff
Bezos and other Amazon executives on a mailing list called “We Won’t Build It,” “demanding that the company Palantir, the software firm that operates much of Immigration and Customs Enforcement’s deportation and tracking program, be banned from Amazon Web Services and that Amazon implement employee oversight for ethical decisions.” 368 The letter asked Amazon to cease selling Rekognition to police, stating, “[o]ur company should not
be in the surveillance business; we should not be in the policing business; we should not be in the business of
supporting those who monitor and oppress marginalized populations.” 369 In November 2018, Amazon addressed
its relationship with law enforcement at an all-staff meeting that was live-streamed, 370 but none of the employee
demands were met. Although employee actions did not result in the hoped-for employee-initiated demand to stop
Amazon’s sale of the controversial software, the issue became relevant and publicized again in early 2019. In
January 2019, through a resolution organized by Open MIC, a nonprofit organization focused on corporate accountability in the media and technology sectors, and filed by the Sisters of St. Joseph of Brentwood, a member of the Tri-State Coalition for
Responsible Investment, shareholders of Amazon filed a letter with the company demanding that Amazon cease
sales of facial recognition software to government agencies. 371
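The ACLU test cited in the notes ran Rekognition’s face comparison at its default 80 percent similarity threshold. A minimal sketch of such a comparison using the boto3 client (the bucket and image names are placeholders; running it requires AWS credentials):

```python
# Minimal sketch of a Rekognition face comparison via boto3.
# Bucket and object names are placeholders; the ACLU's test used
# the default 80% similarity threshold.
import boto3

client = boto3.client("rekognition")

response = client.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "mugshot.jpg"}},
    SimilarityThreshold=80,  # default; raising it trades recall for fewer false matches
)

# Only matches at or above the threshold are returned.
for match in response["FaceMatches"]:
    print("similarity:", match["Similarity"])
```

AWS has since recommended a 99 percent similarity threshold for law enforcement uses, underscoring how much the false-match rate in the ACLU test depended on that one parameter.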

According to Open MIC, “[t]he shareholder resolution echoes concerns of over 70 civil rights and civil liberties
groups, hundreds of Amazon’s own employees, and 150,000 people who signed a petition—all seeking to end sales
of Rekognition to government agencies.” 372 An employee anonymously posted a letter online, outlining his or her

367
Alexa Lardieri, Amazon Employees Protesting Sale of Facial Recognition Software, U.S. NEWS &WORLD REP.
(Oct. 18, 2018, 2:57 PM), https://www.usnews.com/news/politics/articles/2018-10-18/amazon-employees-
protesting-sale-of-facial-recognition-software [https://perma.cc/R9YB-W6C8].
368
Id. (Palantir is the software firm that operates much of Immigration and Customs Enforcement’s deportation and tracking program.)
369
James Vincent, Amazon Employees Protest Sale of Facial Recognition Software to Police, THE VERGE (June
22, 2018, 5:29 AM), https://www.theverge.com/2018/6/22/17492106/amazon-ice-facial-recognition-internal-
letter-protest [https://perma.cc/FP8H-JJ38] (setting forth full letter to Mr. Bezos). The American Civil
Liberties Union also voiced concerns about the software’s inaccuracies in racial profiling, finding that it
“incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for
a crime and that the false matches disproportionately involved people of color, including six members of the
Congressional Black Caucus.” Savia Lobo, Amazon Addresses Employees Dissent Regarding the Company’s
Law Enforcement Policies at an All-staff Meeting, in a First, PACKT (Nov. 9, 2018, 9:16 AM),
https://hub.packtpub.com/amazonaddresses-employees-dissent-regarding-the-companys-law-enforcement-
policies-at-an-allstaff-meeting-in-a-first/ [https://perma.cc/UUP8-UUK6].
370
Savia Lobo, Amazon Addresses Employees Dissent Regarding the Company’s Law Enforcement Policies at an
All-staff Meeting, in a First, PACKT (Nov. 9, 2018, 9:16 AM), Questions were pre-screened. Andy Jassy, CEO
of Amazon Web Services, stated: There’s a lot of value being enjoyed from Amazon Rekognition. Now now,
of course, with any kind of technology, you have to make sure that it’s being used responsibly, and that’s
true with new and existing technology. Just think about all the evil that could be done with computers or
servers and has been done, and you think about what a different place our world would be if we didn’t allow
people to have computers. Id. But cf. Erin Corbett, Tech Companies Are Profiting Off ICE Deportations,
Report Shows, FORTUNE (Oct. 23, 2018, 11:06 AM), http://fortune.com/2018/10/23/techcompanies-
surveillance-ice-immigrants/ [https://perma.cc/WW8T-QE6U] (“Amazon receives millions of dollars to host
Palantir, as well as backups of DHS’s vast database of biometric information on its web servers . . . . The two
companies are dominating the market to meet the federal government’s data storage needs, building an
increasingly effective deportation and incarceration infrastructure for the Trump administration, activists
say.”).
371
Shareholders Press Amazon to Stop Selling Racially Biased Surveillance Tech to Government, OPEN MIC (Jan.
17, 2019), https://www.openmic.org/news/2019/1/16/haltrekognition [https://perma.cc/2VVD-C7VQ].
372
Id. (“In one test, Rekognition technology disproportionally misidentified African American and Latino
members of the U.S. Congress as people in criminal mug shots . . . .”).
concerns about Rekognition. 373 Amazon was also considered the front-runner for the Joint Enterprise Defense
Initiative (“JEDI”) after Google decided not to submit a bid because it “‘couldn’t be assured’ that the work in
connection with the JEDI contract ‘would align with [Google’s artificial intelligence] Principles,’ among other things.” 374
The contract was worth $10 billion over ten years. 375 Amazon employees did not write an open letter of protest
when Amazon’s bid was submitted, but Microsoft employees did when Microsoft submitted its ultimately winning
JEDI bid. 376
In November 2018, Microsoft won a $480 million contract with the United States Army to supply prototypes for
augmented reality systems (the HoloLens) that would be utilized on combat missions and in training. 377 “The
contract, which could eventually lead to the military purchasing over 100,000 headsets, is intended to ‘increase
lethality by enhancing the ability to detect, decide and engage before the enemy,’ according to a government
373
The letter argues that “Amazon is designing, marketing, and selling a system for
dangerous mass surveillance right now.” An Amazon Employee, I’m an Amazon Employee. My Company
Shouldn’t Sell Facial Recognition Tech to Police, MEDIUM (Oct. 16, 2018),
https://medium.com/s/powertrip/im-an-amazon-employee-my-company-shouldn-t-sellfacial-recognition-tech-
to-police-36b5fde934ac [https://perma.cc/6LQS-HVQE].
374
Paris Martineau, How the Pentagon’s Move to the Cloud Landed in the Mud, WIRED (Oct. 10, 2018),
https://www.wired.com/story/how-pentagons-move-to-cloud-landed-inmud/ [https://perma.cc/VE2N-759M];
see also Artificial Intelligence at Google, supra note . Google’s objectives for artificial intelligence include: Be
socially beneficial. . . . Avoid creating or reinforcing unfair bias. . . . Be built and tested for safety. . . . Be
accountable to people. . . . Incorporate privacy design principles. . . . Uphold high standards of scientific
excellence. . . . Be made available for uses that accord with these principles. Id.
375
Ron Miller, New Conflict Evidence Surfaces in JEDI Cloud Contract Procurement Process, TECHCRUNCH (Feb.
20, 2019), https://techcrunch.com/2019/02/20/new-conflictevidence-surfaces-in-jedi-cloud-contract-
procurement-process/ [https://perma.cc/SLH5-EPFJ]. Oracle filed a court case alleging that the Department
of Defense procurement process which would only be awarded to one vendor was flawed and unfairly
favored Amazon, citing an ex-employee of Amazon who had influence over the process. Complaint, Oracle
America, Inc. v. United States, No. 1:18-cv-01880-EGB -1880C, 2019 BL 276759 (Fed. Cl. July 19, 2019)
(complaint sealed); see Ron Miller, Oracle Is Suing the US Government Over $10B Pentagon JEDI Cloud
Contract Process, TECHCRUNCH (Dec. 12, 2018), https://techcrunch.com/2018/12/12/oracle-is-suing-the-u-
s-government-over-10b-pentagonjedi-cloud-contract-process/ [https://perma.cc/24H4-8KWS]; Christian
Davenport & Aaron Gregg, Pentagon to Review Amazon Employee’s Influence over $10 Billion Government
Contract, SEATTLE TIMES (Jan. 24, 2019, 2:20 PM), https://www.seattletimes.com/business/pentagon-to-
review-amazon-employees-influence-over-10-billion-government-contract/[https://perma.cc/N7HG-FJKX].
376
Employees of Microsoft, An Open Letter to Microsoft: Don’t Bid on the US Military’s Project JEDI, MEDIUM
(Oct. 12, 2018), https://medium.com/s/story/an-openletter-to-microsoft-dont-bid-on-the-us-military-s-
project-jedi-7279338b7132 [https://perma.cc/HX4D-Y78E] (Microsoft employee letter posted to Medium);
Frank Konkel, Microsoft, Amazon CEOs Stand By Defense Work After Google Bails on JEDI, NEXTGOV (Oct.
15, 2018), https://www.nextgov.com/it-modernization/2018/10/microsoft-amazon-ceos-standby-defense-
work-after-google-bails-jedi/152047/ [https://perma.cc/UFT3-JWQL] (“‘One of the jobs of a senior leadership
team is to make the right decision even when [it] is unpopular,’ Amazon CEO Jeff Bezos said Monday at the
WIRED25 summit. ‘If big tech companies are going to turn their back on the Department of Defense, then
this country is going to be in trouble.’”); Brad Smith, Technology and the U.S. Military, MICROSOFT (Oct. 26,
2018), https://blogs.microsoft.com/on-the-issues/2018/10/26/technology-and-the-us-
military/[https://perma.cc/W2X7-7X8G] (responding to employees’ concerns about Microsoft’s work with the
military); Mark Wycislik-Wilson, Microsoft Employees Use Open Letter to Urge Company Not to Get Involved
in JEDI Military Project, BETANEWS (Oct. 15, 2018), https://betanews.com/2018/10/15/microsoft-do-not-
bid-on-jedi/ [https://perma.cc/2Z3W-9QP5]. Microsoft was awarded the JEDI contract and Amazon is
protesting the decision. Wayne Rash, Amazon’s Protest of Microsoft JEDI Award is No Surprise, FORBES
(Nov. 15, 2019), https://www.forbes.com/sites/waynerash/2019/11/15/amazon-announces-protest-
tomicrosoft-jedi-award/#6b0ebd7a4342 [https://perma.cc/JRJ2-QDH9].
377
Joshua Brustein, Microsoft Wins $480 Million Army Battlefield Contract, BLOOMBERG (Nov. 28, 2018, 1:53
PM), https://www.bloomberg.com/news/articles/2018-11-28/microsoft-wins-480-million-army-battlefield-
contract[https://perma.cc/UH65-M8YN].
description of the program.” 378 The number of headsets that the Army intended to purchase would have been more
than the number of HoloLens headsets sold to date. 379
On February 22, 2019, a few days before the introduction of the second version of the HoloLens, which Microsoft
described “as a productivity tool for professionals in fields like architecture and engineering, or as an entertainment
device,” 380 Microsoft employees circulated a letter addressed to Microsoft CEO, Satya Nadella and Microsoft
President and Chief Legal Officer, Brad Smith. 381 The letter stated, “We are alarmed that Microsoft is working to
provide weapons technology to the U.S. Military, helping one country’s government ‘increase lethality’ using tools
we built . . . We did not sign up to develop weapons, and we demand a say in how our work is used.”382
The letter called for Microsoft to cancel the contract, publish a policy that set out the acceptable uses for its
products, and appoint an independent ethics board to enforce such a policy. 383 In response, a Microsoft spokesman
emailed a statement that said, “We always appreciate feedback from employees and have many avenues for
employee voices to be heard[.]” 384 In a blog post in October 2018, Brad Smith stated that Microsoft would
continue to sell software to the U.S. military as it had in the past; employees with ethical concerns could move to
a different team or project. 385 However, employees did not believe that this option of “talent mobility” was sufficient
as it “ignore[d] the problem that workers [were] not properly informed of the use of their work.” 386 Nadella said
that Microsoft would continue its military contract with HoloLens. “We made a principled decision that we’re not
going to withhold technology from institutions that we have elected in democracies to protect the freedoms we
enjoy. We were very transparent about that decision and we’ll continue to have that dialogue [with employees].” 387
Microsoft’s management team likely made its decision based on a business calculation of how much influence this
particular subset of employees had. 388
Microsoft and AnyVision
In July 2018, Brad Smith, the president of Microsoft, shared the corporation’s views on the need for
government regulation and responsible industry measures to address advancing facial recognition technology.
Smith noted that facial recognition technology raises issues that go to the heart of fundamental human rights
378
Id.
379
Joshua Brustein & Dina Bass, Microsoft Workers Call on Company to Cancel Military Contract, BLOOMBERG
(Feb. 22, 2019, 2:00 PM), https://www.bloomberg.com/news/articles/2019-02-22/microsoft-workers-call-on-
company-to-cancel-military-contract [https://perma.cc/LLS4-5WLW].
380
Id. The military version would include night vision, thermal sensing, and technology that could be used to
monitor for concussions. Id.
381
Id. “Internal opposition has become a persistent issue for consumer technology companies looking to sell
products for military and law enforcement use.” Id.
382
Id.
383
Id.
384
Id.
385
Brad Smith, Technology and the U.S. Military, MICROSOFT (Oct. 26, 2018), https://blogs.microsoft.com/on-
the-issues/2018/10/26/technology-and-the-us-military/ [https://perma.cc/W2X7-7X8G] (“As is always the
case, if our employees want to work on a different project or team—for whatever reason—we want them to
know we support talent mobility.”).
386
Joshua Brustein & Dina Bass, Microsoft Workers Call on Company to Cancel Military Contract, BLOOMBERG
(Feb. 22, 2019, 2:00 PM), https://www.bloomberg.com/news/articles/2019-02-22/microsoft-workers-call-on-
company-to-cancel-military-contract[https://perma.cc/LLS4-5WLW].
387
Nick Bastone, Despite Internal Uproar, Microsoft CEO Satya Nadella Says the Company Is Not Cancelling Its
Contract with the US Army, BUS. INSIDER (Feb. 25, 2019, 4:37 PM),
https://www.businessinsider.com/nadella-says-microsoft-will-not-withdrawfrom-us-army-hololens-contract-
2019-2 [https://perma.cc/QV7N-8ZRT].
388
Fan, Jennifer S., Employees as Regulators: The New Private Ordering in High Technology Companies
(December 1, 2019). Utah Law Review, Vol. 2019, No. 5, at 1005, 2019, Available at SSRN:
https://ssrn.com/abstract=3520230
protections like privacy and freedom of expression, and that these issues heighten the responsibility of tech
companies that create these products. 389
Then, in December 2018, Smith laid out Microsoft’s “Facial Recognition Principles,” which encompass fairness,
transparency, accountability, nondiscrimination, notice and consent, and lawful surveillance. 390 In contradiction
of its own guiding principles on facial recognition technology, in June 2019 Microsoft’s M12 venture capital
arm announced that it was joining American and European companies, including Lightspeed Venture Partners, Robert
Bosch, and Qualcomm Ventures, in a $78 million Series A funding round for AnyVision. 391 Given that Microsoft
published its six ethical principles governing its use of facial recognition technology before its decision to fund
AnyVision, the investment was a particular violation of its sixth principle, which states, “We (Microsoft) will advocate for
safeguards for people’s democratic freedoms in law enforcement surveillance scenarios and will not deploy facial
recognition technology in scenarios that we believe will put these freedoms at risk.” 392
AnyVision is headquartered in Israel and has offices in the United States, the United
Kingdom, and Singapore. It sells an “advanced tactical surveillance” software system, Better Tomorrow, which lets
customers identify individuals and objects in any live camera feed, such as a security camera or a smartphone, and
then track targets as they move between different feeds. 393
Unsurprisingly, the AnyVision investment led to public outcry from activists and civil society actors, who pointed to
evidence that AnyVision’s software was being used to help enforce Israel’s military occupation. 394
MPower Change and Jewish Voice for Peace pressured Microsoft to stop funding AnyVision, which was surveilling
Palestinians in the West Bank. A petition from the groups gathered more than 75,000 signatures, and the activists
partnered with Microsoft workers to deliver the petition to Microsoft’s Redmond campus. 395 In October 2019, after
NBC News published an investigative report on the deal, 396 Microsoft responded by deciding to investigate whether
the use of facial recognition technology developed by AnyVision complied with its ethics and principles. 397
In late 2019, Microsoft hired Eric Holder and his team at the law firm of Covington & Burling to conduct an audit
of AnyVision to determine whether it complied with Microsoft’s ethical principles on how the biometric surveillance
technology should be used. 398 At that time, a Microsoft spokesman said, “If we discover any violation of our principles,
389
Smith, B. (2018, July 13). Facial recognition technology: The need for public regulation and corporate
responsibility. Retrieved from: https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-
technology-the-need-for-public-regulation-and-corporate-responsibility/
390
Sauer, R. (2018, December 17). Six principles to guide Microsoft's facial recognition work. Microsoft.
Retrieved from: https://blogs.microsoft.com/on-the-issues/2018/12/17/six-principles-to-guide-microsofts-
facial-recognition-work/
391
Brewster, T. (2019, August 1). Microsoft Slammed For Investment In Israeli Facial Recognition ‘Spying On
Palestinians.’ Forbes. Retrieved from: https://www.forbes.com/sites/thomasbrewster/2019/08/01/microsoft-
slammed-for-investing-in-israeli-facial-recognition-spying-on-palestinians/#33e978e46cec
392
Microsoft set to divest from Israeli facial recognition firm tracking Palestinians (2020, March 28). Middle East
Eye. Retrieved from: https://www.middleeasteye.net/news/microsoft-set-divest-israeli-facial-recognition-firm-
tracking-palestinians
393
Facial Recognition Technology & Palestinian Digital Rights, 7amleh - the Arab Center for Social Media
Development, (May 21, 2020) https://7amleh.org/2020/05/21/facial-recognition-technology-and-palestinian-
digital-rights
394
Id.
395
Emily Birnbaum, Microsoft employees are pushing for change. Will it matter? (June 10, 2020)
https://www.protocol.com/microsoft-employee-protest-police-contracts
396
Solon, O. (2019, October 26). Why did Microsoft fund an Israeli firm that surveils West Bank citizens? NBC
News. Retrieved from: https://www.nbcnews.com/news/all/why-did-microsoft-fund-israeli-firm-surveils-west-
bank-palestinians-n1072116
397
Dastin, J. (2019, November 16). Microsoft to probe work of Israeli facial recognition startup it funded.
Reuters. Retrieved from: https://www.reuters.com/article/us-microsoft-AnyVision/microsoft-to-probe-work-of-
israeli-facial-recognition-startup-it-funded-idUSKBN1XQ03M
398
Solon, O. (2019, November 16). Microsoft hires Eric Holder to audit AnyVision over use of facial recognition on
Palestinians. According to five sources, AnyVision’s technology has powered a secret military surveillance
project that has monitored Palestinians in the West Bank. Retrieved from
https://www.nbcnews.com/tech/security/microsoft-hires-eric-holder-audit-AnyVision-over-use-facial-recognition-n1083911
we will end our relationship.” In response, AnyVision said at the time, “All of our installations
have been examined and confirmed against not only Microsoft’s ethical principles, but also our own internal rigorous
approval process.” 399
In March 2020, Microsoft and AnyVision published a joint statement: “After careful consideration, Microsoft and
AnyVision have agreed that it is in the best interest of both enterprises for Microsoft to divest its shareholding in
AnyVision.” 400 Notably, the audit did not find evidence to support the allegations
of a mass surveillance program in the West Bank, and it concluded that AnyVision had not violated Microsoft’s facial
recognition pledge. Despite the audit’s positive outcome, many point out that much
of the information about AnyVision’s work was deemed a matter of Israeli national security and
therefore was not accessible to Holder and his team. However, AnyVision did acknowledge that its technology has
been deployed at border checkpoints between the West Bank and Israel. 401 Even so, Microsoft said that as a result
of the probe it decided to exit the business of investing in facial recognition startups altogether. “For Microsoft, the
audit process reinforced the challenges of being a minority investor in a company that sells sensitive technology,
since such investments do not generally allow for the level of oversight or control that Microsoft exercises over the
use of its own technology,” Microsoft and AnyVision said in a joint statement posted on M12’s website. 402
Microsoft’s divestment from AnyVision is one of the most high-profile instances of Microsoft taking action in response to
protests.
Google and Project Dragonfly
In 2010, Google withdrew operations from China. Sergey Brin, one of the co-founders of Google, explained at the
time that Google withdrew from China because it “objected to the country’s ‘totalitarian’ policies when it came to
censorship, political speech and internet communications.” 403 In August 2018, however, employees discovered that
Google planned to return to China under Project Dragonfly. 404 In response, over one thousand Google employees
399
Solon, O. (2019, October 26). Why did Microsoft fund an Israeli firm that surveils West Bank citizens? NBC
News. Retrieved from https://www.nbcnews.com/news/all/why-did-microsoft-fund-israeli-firm-surveils-west-
bank-palestinians-n1072116
400
Joint Statement by Microsoft & AnyVision – AnyVision Audit. (2020, March 27). Retrieved from
https://m12.vc/news/joint-statement-by-microsoft-AnyVision-AnyVision-audit/
401
Microsoft Divests from AnyVision After Audit into Alleged Mass Surveillance Program. (2020, March 31).
Retrieved from https://findbiometrics.com/microsoft-divests-AnyVision-following-audit-into-alleged-mass-
surveillance-program-033104/
402
Id.
403
Ben Worthen, Soviet-Born Brin Has Shaped Google’s Stand on China, WALL ST. J. (Mar. 12, 2010, 12:01AM),
https://www.wsj.com/articles/SB10001424052748703447104575118092158730502[https://perma.cc/V3TK-
JERY] (explaining that Sergey Brin, one of the co-founders of Google, who was greatly influenced by his time
living in a “totalitarian” regime in the Soviet Union, played a role in Google pulling out of China); Steve Lohr,
Interview: Sergey Brin on Google’s China Move, N.Y. TIMES: BITS (Mar. 22, 2010, 5:37
PM),https://bits.blogs.nytimes.com/2010/03/22/interview-sergey-brin-on-googles-china-
gambit.[https://perma.cc/3ZBV-GSJS].
404
Kate Conger & Daisuke Wakabayashi, Google Employees Protest Secret Work on Censored Search Engine for
China, N.Y. TIMES (Aug. 16, 2018), https://www.nytimes.com/2018/08/16/technology/google-employees-
protest-search-censored-china.html [https://perma.cc/7AGX-UW6Z]; see generally Shannon Liao, China Is
Making the Internet Less Free, and US Tech Companies Are Helping, VERGE (Nov. 2, 2018, 9:00 AM),
https://www.theverge.com/2018/11/2/18053142/china-internet-privacy-censorship-apple-microsoft-google-
democracy-report [https://perma.cc/ZBR6-9D6B]; Julie Makinen, Chinese Censorship Costing U.S. Tech Firms
Billions in Revenue, L.A. TIMES (Sept. 22, 2015, 2:00 AM), https://www.latimes.com/business/la-fichina-tech-
20150922-story.html [https://perma.cc/A3RR-734P].
signed a letter “protesting the company’s efforts to build a censored version of its search engine in China.” 405 More
specifically, the letter cited the need “for more transparency and consideration of the human rights issues involved,
as internet monitoring and collaboration with the Chinese government is used to stifle dissident voices and even
put activists’ personal information at risk.” 406 The letter continued, “currently we do not have the information
required to make ethically-informed decisions about our work, our projects, and our employment . . . Google
employees need to know what we’re building.” 407 This letter also outlined several steps Google could take to
address employee concerns: allowing employees to take part in ethical reviews of the company’s products,
giving employees the ability to appoint external representatives for the purpose of transparency, and publishing an
ethical assessment of controversial projects. 408
Some Google employees resigned. 409 Jack Poulson, who was previously an assistant professor of mathematics at
Stanford and worked at Google in its research and machine intelligence department, was one of them. 410 He
wrote in his resignation letter, “‘I view our intent to capitulate to censorship and surveillance demands in exchange
for access to the Chinese market as a forfeiture of our values and governmental negotiating position across the
globe . . . .’” 411
In November 2018, over three hundred employees posted an online letter with Amnesty International calling for
Google to stop its work on Project Dragonfly. 412
Google is known as a company that “prizes internal transparency but considers leaking information to be not
‘Googley.’” 413 The letter reads in part:
Many of us accepted employment at Google with the company’s values in mind, including its previous position on
Chinese censorship and surveillance, and an understanding that Google was a company willing to place its values
above its profits. After a year of disappointments including Project Maven, Dragonfly, and Google’s support for
abusers, we no longer believe this is the case. This is why we’re taking a stand.
We join with Amnesty International in demanding that Google cancel Dragonfly. We also demand that leadership
commit to transparency, clear communication, and real accountability. Google is too powerful not to be held
accountable. We deserve to know what we’re building and we deserve a say in these significant decisions. 414
405
Shannon Liao, Google Employees Are Protesting the Company’s Secrecy Over Censored Search Engine in
China, VERGE (Aug. 16, 2018, 3:50 PM EDT), https://www.theverge.com/2018/8/16/17702464/google-
search-censorship-china-protestproject-dragonfly; [https://perma.cc/CH7N-ZVHQ]; Julia Carrie Wong, Google
Employees Sign Letter Against Censored Search Engine for China, GUARDIAN (Nov. 27, 2018, 9:00 AM),
https://www.theguardian.com/technology/2018/nov/27/google-employees-lettercensored-search-engine-
china-project-dragonfly [https://perma.cc/ZAY7-YTA6] (“Project Dragonfly . . . would reportedly allow the
Chinese government to blacklist certain search terms and control air quality data . . . . More than 1,400
Google employees signed an internal petition criticizing the lack of transparency around the project, and at
least one employee resigned in protest.”).
406
Liao, supra
407
Id.
408
Conger & Wakabayashi, Google Employees Protest, supra
409
Ryan Gallagher, Senior Google Scientist Resigns over “Forfeiture of Our Values” in China, INTERCEPT (Sept.
13, 2018, 9:15 AM), https://theintercept.com/2018/09/13/google -china-search-engine-employee-resigns/
[https://perma.cc/WEC9-PLVD].
410
Id.
411
Id.
412
Google Employees Against Dragonfly, We Are Google Employees. Google Must Drop Dragonfly, MEDIUM
(Nov. 27, 2018), https://medium.com/@googlersagainstdragonfly/we-are-google-employees-google-must-
drop-dragonfly-4c8a30c5e5eb [https://perma.cc/P8BH-4XWX]
413
Wong, supra
414
Google Employees Against Dragonfly, supra note
Google suspended its work on Project Dragonfly. 415
Google’s Firing of its AI Ethics Researcher, Timnit Gebru
Less successful were the efforts of Timnit Gebru, the co-lead of Google’s ethical AI team, to publish her concerns
about bias in algorithms. In December 2020, she announced via Twitter that the company had fired her. 416 The
press reported that Gebru was fired over a paper that highlighted the risks of large language models, which are
key to Google’s business. The paper was titled “On the Dangers of Stochastic Parrots: Can Language Models Be
Too Big?” 417 and argued that the software, like a real parrot, does not know the implications of the bad language it repeats.
The paper cited previous studies about ethical questions raised by large language models, including the
energy consumed by the tens or even thousands of powerful processors required to train such software,
and the challenges of documenting potential biases in the vast data sets used to create them. It mentioned BERT, Google’s
own system, and OpenAI’s GPT-3. 418 Gebru, a widely respected leader in AI ethics research, is known
for coauthoring a groundbreaking paper that showed facial recognition to be less accurate at identifying women
and people of color, which means its use can end up discriminating against them. 419 She also cofounded the Black
in AI affinity group, and champions diversity in the tech industry. The team she helped build at Google is one of
the most diverse in AI and includes many leading experts in their own right. Peers in the field envied it for producing
critical work that often challenged mainstream AI practices.
A series of tweets, leaked emails, and media articles showed that Gebru’s exit was the culmination of a conflict
over the paper. Jeff Dean, the head of Google AI, told colleagues in an internal email (which
he put online) that the paper “didn’t meet our bar for publication” and that Gebru had said she would resign unless
Google met a number of conditions, which it was unwilling to meet. 420 He cited the Google AI Principles and listed
recent work that Google and others had done on mitigating bias in language models, which Gebru’s paper had
not taken into account. 421
415
Jen Copestake, Google China: Has Search Firm Put Project Dragonfly on Hold, BBC (Dec. 18, 2018),
https://www.bbc.com/news/technology-46604085 [https://perma.cc/U7AU-UKWZ]. In contrast to Google,
Facebook employees have stayed largely silent in the wake of criticism that has been leveled against the
company regarding its censorship. Will Oremus, Where’s the Facebook Walkout?, SLATE (Nov. 28, 2018,
11:52 PM), https://slate.com/technology/2018/11/facebook-workers-should-speak-up-about-their-company-
right-now.html [https://perma.cc/2NRV-2VBF].
416
Twitter, https://twitter.com/timnitgebru/status/1334352694664957952?lang=en
417
Emily M. Bender, Timnit Gebru et al., On the Dangers of Stochastic Parrots: Can Language Models Be Too
Big? (March 2021), https://dl.acm.org/doi/10.1145/3442188.3445922
418
Singh, M. (2020). Google workers demand reinstatement and apology for fired Black AI ethics researcher.
Retrieved from https://www.theguardian.com/technology/2020/dec/16/google-timnit-gebru-fired-letter-
reinstated-diversity. See also Tom Simonite, What Really Happened When Google Ousted Timnit Gebru,
WIRED (June 8, 2021), https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/ and
Karen Hao, We read the paper that forced Timnit Gebru out of Google. Here’s what it says. MIT Technology
Review (December 4, 2020) https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-
research-paper-forced-out-timnit-gebru/ The company's star ethics researcher highlighted the risks of large
language models, which are key to Google's business.
419
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender
classification. Proceedings of Machine Learning Research, 81, 77–91.
420
https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQqz7ScqxhSIxeYGrWjK0/preview?pru=AAABdlPFwUU*hTbTTr0XLPQ2OrUZ4AEsUg
421
Id. (“We have a strong track record of publishing work that challenges the status quo -- for example, we’ve
had more than 200 publications focused on responsible AI development in the last year alone. Just a few
examples of research we’re engaged in that tackles challenging issues:
• Measuring and reducing gendered correlations in pre-trained NLP models
• Evading Deepfake-Image Detectors with White- and Black-Box Attacks
• Extending the Machine Learning Abstraction Boundary: A Complex Systems Approach to Incorporate
Societal Context
• CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims
• What Does AI Mean for Smallholder Farmers? A Proposal for Farmer-Centered AI Research [forthcoming]
• SoK: Hate, Harassment, and the Changing Landscape of Online Abuse
• Accelerating eye movement research via accurate and affordable smartphone eye tracking
• The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
• Assessing the impact of coordinated COVID-19 exit strategies across Europe
• Practical Compositional Fairness: Understanding Fairness in Multi-Component Ranking Systems”)
Online, many other leaders in the field of AI ethics argued that the company pushed her out because of the
inconvenient truths that she was uncovering about a core line of its research—and perhaps its bottom line. More
than 1,400 Google staff members and 1,900 other supporters signed a letter of protest. 422 Google’s chief
executive, Sundar Pichai, apologized in the aftermath. 423
Google’s actions raise difficult and important questions about the company’s diversity initiatives. Yet there are also
serious accountability concerns: Who should be overseeing the deployment of artificial intelligence systems with
major societal implications? Because those systems are typically built with proprietary data and are often accessible
only to the employees of large technology companies, AI ethicists at these companies represent a key—sometimes
the only—check on whether they are being responsibly deployed. If a researcher like Gebru, an undisputed leader
in the field of AI ethics, cannot carry out that work within a company like Google, one of the leading U.S. developers
of AI, then who can?424
Employee Protest, Sexual Misconduct Claims, and Making Mandatory Arbitration Optional
#MeToo is often characterized as a bottom-up moral reckoning with the pervasiveness of sexual harassment and
sexual assault in modern society. 425 The #MeToo Movement pushed gender issues to the forefront of the
corporate ethics conversation. The elimination of mandatory arbitration 426 for sexual misconduct claims became a
lightning rod for action in the wake of the #MeToo Movement. 427 It became public that executives accused
of sexual misconduct at Google had received generous exit packages. 428 One was Andy Rubin, the father of Android and a
422
https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-
6dadc300d382
423
Bobby Allyn, Google CEO Apologizes, Vows To Restore Trust After Black Scientist's Ouster, NPR (Dec. 9,
2020), https://www.npr.org/2020/12/09/944686875/google-ceo-apologizes-vows-to-restore-trust-after-black-
scientists-ouster (“‘This loss has had a ripple effect through some of our least represented communities, who
saw themselves and some of their experiences reflected in Dr. Gebru's. It was also keenly felt because Dr.
Gebru is an expert in an important area of AI Ethics that we must continue to make progress on — progress
that depends on our ability to ask ourselves challenging questions,’ Pichai wrote.”).
424
Alex Engler, If not AI ethicists like Timnit Gebru, who will hold Big Tech accountable?, Brookings, (Dec. 17,
2020) https://www.brookings.edu/blog/techtank/2020/12/17/if-not-ai-ethicists-like-timnit-gebru-who-will-
hold-big-tech-accountable/
425
Wexler, Lesley M., #MeToo and Law Talk (November 2, 2019). University of Chicago Legal Forum, Vol. 29, No.
1, 2019, University of Illinois College of Law Legal Studies Research Paper No. 21-05, Available at SSRN:
https://ssrn.com/abstract=3723924
426
Mandatory arbitration agreements mean employers can shield themselves from court cases for both
individual and class action claims. For more on mandatory arbitration, see Fan, Jennifer S., Employees as
Regulators: The New Private Ordering in High Technology Companies (December 1, 2019). Utah Law Review,
Vol. 2019, No. 5, Pp. 973-1076, 2019, Available at SSRN: https://ssrn.com/abstract=3520230
427
See Kate Gibson, Tech Signals End of Forced Arbitration for Sexual Misconduct Claims, CBS
NEWS:MONEYWATCH (Nov. 16, 2018, 7:29 AM), https://www.cbsnews.com/news/tech-signals-end-of-forced-
arbitration-for-sexual-misconduct-claims/ [https://perma.cc/4PPF-389U].
428
Daisuke Wakabayashi & Katie Benner, How Google Protected Andy Rubin, the ‘Father of Android,’ N.Y. TIMES
(Oct. 25, 2018), https://www.nytimes.com/2018/10/25/technology/google-sexual-harassment-andy-
rubin.html [https://perma.cc/2R7R-TA2W] (“The internet giant paid Mr. Rubin $90 million and praised him,
while keeping silent about a misconduct claim.”). Google’s parent company is Alphabet Inc. See Larry Page, G
Is for Google, ALPHABET, https://abc.xyz/ [https://perma.cc/2SYB-KWJ8] (last visited Jan. 17, 2019)
(description of Alphabet’s business model related to Google by Google co-founder and Alphabet Chief
Executive Officer (“CEO”) Larry Page).
senior executive, who had received a $90 million termination package even though the sexual harassment charges
against him had proven credible. 429 Many blamed mandatory arbitration provisions 430 in employment contracts and
systemic problems with reporting mechanisms for sexual misconduct for creating an environment where lack of
transparency was the norm. In response to the company’s handling of sexual misconduct cases, over 20,000 Google
employees across the globe walked out of their offices on November 1, 2018. 431 The organizers behind the walkout,
known as the Google Walkout For Real Change (“Google Walkout”), had a list of demands. 432
Ultimately, the company met the following demands: making arbitration optional for individual sexual
harassment and sexual assault claims; overhauling the reporting process for harassment and assault; having
consequences if employees did not complete sexual harassment training (i.e., it would affect employees’
performance reviews); ensuring that Google’s contractors were also subject to the company’s rights and
responsibilities regarding sexual misconduct; and having increased transparency about reported incidents of sexual
harassment and assault at the company. 433
429
See Wakabayashi & Benner, supra. Google’s payout to Rubin was part of the total $135 million that Google
agreed to pay to Rubin and former executive Amit Singhal after they left the company amid sexual
harassment charges. Rob Copeland, Google Agreed to Pay $135 Million to Two Executives Accused of Sexual
Harassment, WALL ST. J. (Mar.
11, 2019, 8:52 PM), https://www.wsj.com/articles/google-agreed-to-pay-135-million-totwo-executives-accused-
of-sexual-harassment-11552334653 [https://perma.cc/WW9WKSY4].
430
Mandatory arbitration provisions in employment agreements require employees to pursue their legal claims,
such as those based on Title VII of the Civil Rights Act, the American Disabilities Act, the Family and Medical
Leave Act, and the Fair Labor Standards Act, through the arbitration procedure set forth in the agreement,
instead of through the courts; it involves employment laws set forth in statutes. ALEXANDER J.S. COLVIN,
ECON. POL’Y INST., THE GROWING USE OF MANDATORY ARBITRATION 2–3 (Apr. 6, 2018),
https://www.epi.org/files/pdf/144131.pdf [https://perma.cc/S77C-67CW] [hereinafter 2018 COLVIN STUDY].
This differs from the type of labor arbitration systems in disputes between labor unions and management,
which is a bilateral system run by unions and management and involves the enforcement of a contract
privately negotiated between a union and employer. Id. In a recent 5-4 ruling by the U.S. Supreme Court,
Epic Sys. Corp. v. Lewis, 138 S. Ct. 1612 (2018), the Court ruled that employers can lawfully require workers
to waive class and collective action waivers and settle employment disputes through individual arbitration;
this effectively eliminates the right of an employee to file a class action. According to a study by the
Economic Policy Institute, approximately 60 million workers in the United States are subject to mandatory
arbitration provisions with their employers. See also Jena McGregor, ‘A Nail in the Coffin’: What the Supreme
Court’s Decision this Week Means for Workers, WASH. POST (May 24, 2018),
https://www.washingtonpost.com/news/on-leadership/wp/2018/05/24/a-nail-in-the-coffin-what-the-supreme-
courts-decision-this-week-means-for-workers/ [https://perma.cc/XW9D-BJ7C].
431
Kate Conger & Daisuke Wakabayashi, Google Overhauls Sexual Misconduct Policy After Employee Walkout,
N.Y. TIMES (Nov. 8, 2018), https://www.nytimes.com/2018/11/08/technology/google-arbitration-sexual-
harassment.html [https://perma.cc/5PUT-EA5M]; see also Daisuke Wakabayashi et al., Google Walkout:
Employees Stage Protest over Handling of Sexual Harassment, N.Y. TIMES (Nov. 1, 2018),
https://www.nytimes.com/2018/11/01/technology/google-walkout-sexual-harassment.html
[https://perma.cc/HES8-S69M].
432
The organizers used Google’s own collaborative tools to help organize the walkout. “Their demands reflect
the comments and suggestions of more than 1,000 people who participated in internal conversations about
the walkout. They include points of view of those who have long been marginalized in tech . . . .” Farhad Manjoo,
Why the Google Walkout Was a Watershed Moment in Tech, N.Y. TIMES (Nov. 7, 2018),
https://www.nytimes.com/2018/11/07/technology/google-walkout-watershed-tech.html.
[https://perma.cc/2G89-8TGC].
433
BLOOMBERG, Google Ends Forced Arbitration After Employee Walkout, FORTUNE (Nov. 8, 2018, 12:49 PM),
http://fortune.com/2018/11/08/google-sexual-harassment-policywalkout/ [https://perma.cc/5Y4G-5P9Q].
The group did not get an employee representative on the board. Instead, the chief diversity officer provides
board recommendations to the Leadership Development and Compensation Committee. Id.
In December 2017, Microsoft ended its practice of mandatory arbitration for
sexual harassment claims. 435 Uber and Lyft, both unicorns (private companies valued at $1 billion or more), 436 did
the same in May 2018. 437
Ultimately, it was not the boards of high technology companies that took the initiative to address sexual harassment,
but rather it was the employees who prodded the companies to action. 438 In the case of Uber, Susan Fowler, a
435
Nick Wingfield & Jessica Silver-Greenberg, Microsoft Moves to End Secrecy in Sexual Harassment Claims, N.Y.
TIMES (Dec. 19, 2017), https://www.nytimes.com/2017/12/19/technology/microsoft-sexual-harassment-
arbitration.html [https://perma.cc/87U5-GFG8]. Microsoft also supported a proposed federal law that would
eliminate such agreements. Id.; see Restoring Justice for Workers Act, H.R. 7109, 115th Cong. (2018).
Professor Alexander J.S. Colvin, who specializes in industrial and labor relations at Cornell University,
analyzed 3,945 employment cases decided by arbitrators from one of the largest arbitration firms in the
United States and found that in cases where companies had only one case before an arbitrator, plaintiffs
prevailed in 31% of arbitration cases; the rate of plaintiff success was significantly lower when companies
had multiple cases before the same arbitrator. Alexander J. S. Colvin, An Empirical Study of Employment
Arbitration: Case Outcomes and Processes, 8 J. EMPIRICAL LEG. STUDIES 1, 1, 19 (2011); see also
Alexander J. S. Colvin & Mark D. Gough, Individual Employment Rights Arbitration in the United States, 68
INDUS. &LAB. REL. REV. 1019 (2015) (finding arbitration win rates for employers are positively correlated
with employer size, repeated use of the same arbitrator, female arbitrators, and arbitrators with more
professional arbitration experience); Alexander J. S. Colvin, Mandatory Arbitration and Inequality of Justice in
Employment, 35 BERKELEY J. EMP. & LAB. L. 71 (2014) (examining the operation of mandatory arbitration
agreements with respect to employees’ access to legal recourse).
436
Jennifer S. Fan, Regulating Unicorns: Disclosure and the New Private Economy, 57 B.C. L. REV. 583, 584
(2016).
437
Uber eliminated mandatory arbitration agreements for employees, riders, and drivers who make harassment
or sexual assault claims against it. Furthermore, Uber committed to a safety transparency report for rides,
deliveries, as well as incidents before pickup and after drop-off; it is collaborating with eighty women’s
groups to develop the incident reporting system that will generate data for the report. Daisuke Wakabayashi,
Uber Eliminates Forced Arbitration for Sexual Misconduct Claims, N.Y. TIMES (May 15, 2018),
https://www.nytimes.com/2018/05/15/technology/uber-sex-misconduct.html. [https://perma.cc/WJ9X-SPJP].
A few hours after Uber’s announcement that it would no longer require mandatory arbitration agreements,
Uber’s rival, Lyft, also announced that it would waive mandatory arbitration agreements for sexual
misconduct claims against Lyft. Like Uber, Lyft waived the confidentiality requirements for those who settled
such claims with it. Sara Ashley O’Brien, Lyft Joins Uber to End Forced Arbitration for Sexual Assault Victims,
CNN BUS. (May 15, 2018, 3:03 PM), https://money.cnn.com/2018/05/15/technology/lyft-forced-
arbitration/index.html [https://perma.cc/XSY3-2S84].
438
See, e.g., Johana Bhuiyan, With Just Her Words, Susan Fowler Brought Uber to Its Knees, RECODE (Dec. 6,
2017, 5:16 PM), https://www.recode.net/2017/12/6/16680602/susan-fowler-uber-engineer-recode-100-
diversity-sexual-harassment [https://perma.cc/89KUNUU5] (Uber employee first shed light on rampant
culture of sexual harassment at Uber, eventually leading to a report by former U.S. Attorney General Eric
Holder and ouster of then-CEO Travis Kalanick). Microsoft’s announcement that it was ending its policy of
having harassment victims’ claims heard in arbitration came five days after Bloomberg reported on Microsoft’s
mishandling of an intern’s rape case. Susan Antilla, Google and Facebook Ended Mandatory Arbitration for
Sexual Harassment Claims. Will Workers Outside the Tech Industry Benefit?, THE INTERCEPT (Nov. 21, 2018,
9:59 AM), https://theintercept.com/2018/11/21/google-sexual-harassment-arbitration/
[https://perma.cc/Z7RD-MLZR]; Karen Weise et al., Microsoft Intern’s Rape Claim Highlights Struggle to
Combat Sex Discrimination, BLOOMBERG (Dec. 14, 2017, 9:58 AM),
https://www.bloomberg.com/news/articles/2017-12-14/microsoft-intern-s-rape-claim-highlights-struggle-to-
combat-sex-discrimination [https://perma.cc/2CJT-JR9E].
former employee of the company, brought attention to the culture of rampant sexual misconduct at the company
when she wrote a blog post that went viral. 439 In the blog post, she dispassionately but effectively discussed the
difficult work environment she faced. This post, coupled with other issues at Uber and the momentum of the #MeToo
Movement, ultimately led Uber to discontinue its customary legal practice of mandatory arbitration provisions
in the context of sexual harassment allegations.
At Google, sexual misconduct allegations against prominent leaders of the company culminated in the
Google Walkout, which was described as “an unprecedented event in the tech industry, where workers historically
refrain from protesting against their employers—let alone in such a visceral and public display.”440
In fact, it is not only in the arena of sexual harassment that employees are forcing their companies to act. As high
technology companies work in areas that increasingly have moral and ethical implications, such as the use of
technology for military drones, employees who have become concerned with either the direction or current
practices of their companies are taking action.
Employee-initiated activism shows a broad spectrum of activity, ranging from letter writing to shareholder
proposals to walkouts to resignations. 441 It may even serve as a playbook for other high technology companies.
Professor Paul Saffo of Stanford University noted, “[t]his is a watershed moment . . . It’s not going to calm down.
If anything, it’s going to get more intense.” 442 Although a causal link cannot be proven between the walkout and
Google’s decision to eliminate its mandatory arbitration provisions for sexual misconduct, there is a correlation
between the growing market power of highly skilled technology employees and the rate at which corporate policies
align with such employees’ values.
These actions by Google reverberated throughout the technology industry. 443 Facebook followed suit the day after,
eliminating its mandatory arbitration provisions. 444 Square, Airbnb, and eBay soon added their names to the list of
companies that took similar action. 445
439
Susan Fowler, Reflecting on One Very, Very Strange Year at Uber, SUSAN FOWLER BLOG (Feb. 19, 2017),
https://www.susanjfowler.com/blog/2017/2/19/reflecting-on-onevery-strange-year-at-uber
[https://perma.cc/X44B-D5LZ].
440
Richard Nieva, Google Workers Found Voice in Protest this Year. There’ll Likely Be More of That, CNET (Dec.
23, 2018, 5:00 AM), https://www.cnet.com/news/googleworkers-found-voice-in-protest-this-year-there-will-
likely-be-more/ [https://perma.cc/MT8H-BSDE].
441
Fan, Jennifer S., Employees as Regulators: The New Private Ordering in High Technology Companies
(December 1, 2019). Utah Law Review, Vol. 2019, No. 5, Pp. 973-1076, 2019, Available at SSRN:
https://ssrn.com/abstract=3520230
442
Id.
443
Kate Clark, Airbnb Ends Forced Arbitration Days After Google, Facebook Did the Same, TECHCRUNCH (Nov.
12, 2018, 2:49 PM), https://techcrunch.com/2018/11/12/airbnbends-forced-arbitration-days-after-google-
facebook-did-the-same/ [https://perma.cc/3LW3-254A]. Note that in the United States, the majority of low-
wage workers are women; however, the changes that tech workers advocated for did not reach low- to
middle-income workers. Celine McNicholas, Ending Individual Mandatory Arbitration Alone Fails Most
Workers: For Real Worker Power, End the Ban on Class and Collective Action Lawsuits, ECON. POL’Y INST.
WORKING ECON. BLOG (May 16, 2018, 2:11 PM), https://www.epi.org/blog/ending-individual-mandatory-
arbitration-alone-fails-most-workers-for-real-workerpower-end-the-ban-on-class-and-collective-action-
lawsuits/ [https://perma.cc/2QH5-ZGWL].
444
Douglas MacMillan, Facebook to End Forced Arbitration for Sexual-Harassment Claims, WALL ST. J. (Nov. 9,
2018, 4:35 PM), https://www.wsj.com/articles/facebook-toend-forced-arbitration-for-sexual-harassment-
claims-1541799129 [https://perma.cc/HA8BNP6Z].
445
Davey Alba & Caroline O’Donovan, Square, Airbnb, and eBay Just Said They Would End Forced Arbitration for
Sexual Harassment Claims, BUZZFEED NEWS (Nov. 15, 2018, 5:51 PM),
https://www.buzzfeednews.com/article/daveyalba/tech-companies-endforced-arbitration-airbnb-ebay
[https://perma.cc/6AYY-XMAF]. Square is a payment processing company. See SQUARE,
https://squareup.com/ [https://perma.cc/QL7M-8TKD] (last visited Jan. 17, 2019). Airbnb is a platform
company that allows people to rent out their properties or spare rooms. See AIRBNB,
https://www.airbnb.com [https://perma.cc/H2VY6YE4] (last visited Jan. 17, 2019). eBay is an online
marketplace. See EBAY, https://www.ebay.com/ [https://perma.cc/59EL-35MK] (last visited Jan. 17, 2019).
Airbnb also stated that the company would “not require our employees to use arbitration in cases
involving discrimination in the workplace . . . .” Gibson, supra. Notably, other than Square, all of the
companies mentioned in this section are consumer-facing (rather than business-to-business). This may be
entirely a coincidence, or perhaps the reputational concerns raised by these employee movements are
amplified where the company is more consumer-facing, increasing the negotiating power of the employees.
Not all companies responded similarly. Slack stated that it was “‘undertaking a careful review’ of its policies”
but “did not commit to stop requir[ing] arbitration for sexual harassment claims”; Netflix and Tesla declined
to comment; and Snap, Spotify, and Palantir did not respond to reporters’ inquiries regarding mandatory
arbitration agreements. Alba & O’Donovan, supra. Slack is a cloud-based team collaboration company. See
SLACK, https://slack.com/ [https://perma.cc/QXW6-GZJH]. Netflix is a technology-driven media services
company. See NETFLIX, https://www.netflix.com/ [https://perma.cc/3D6H-7QSP]. Tesla is an automotive,
renewable energy, and power storage company. See TESLA, https://www.tesla.com/
[https://perma.cc/ZX2H-THBD]. Snap is a social media platform and camera company. See SNAP,
https://www.snap.com/en-US/ [https://perma.cc/2TMC-MLRN]. Spotify is a music and podcast streaming
platform. See SPOTIFY, https://www.spotify.com/us/ [https://perma.cc/4P7J744G]. Palantir is a software
and data analytics company. See PALANTIR, https://www.palantir.com/ [https://perma.cc/F6YU-P2M5].
Apple issued a statement that it had ended its arbitration requirement earlier in 2018; Pinterest, Reddit,
Twitter, Salesforce, Amazon, Intel, IBM, and Oath (parent company of Yahoo, Tumblr, AOL, and HuffPost)
said they had never required arbitration for harassment claims. Alba & O’Donovan, supra. Apple is a
consumer electronic device and technology company. See APPLE, https://www.apple.com/
[https://perma.cc/W47S-K6WL]. Pinterest is a social media and image-based web surfing company. See
PINTEREST, https://help.pinterest.com/en/guide/all-about-pinterest [https://perma.cc/CM4A-3VSN] (last
visited Dec. 3, 2019). Reddit is a user-driven news and discussion website. See REDDIT,
https://www.reddit.com/ [https://perma.cc/HUR9-N8K2]. Twitter is a user-driven news and social
networking service. See TWITTER, https://twitter.com/ [https://perma.cc/UJ5P-8MBM]. Salesforce is a
cloud-based software and enterprise customer relation management company. See SALESFORCE,
https://www.salesforce.com/ [https://perma.cc/E566-3NVC]. Amazon is an e-commerce, cloud computing,
and artificial intelligence company. See AMAZON, https://www.amazon.com/ [https://perma.cc/7E92-LBWV].
Intel is a semiconductor and precision computing device company. See INTEL,
https://www.intel.com/content/www/us/en/homepage.html [https://perma.cc/X6QC-V9BZ]. IBM is an
information technology company. See IBM, https://www.ibm.com/us-en/?lnk=m [https://perma.cc/X2XS-
4UEH]. Oath (renamed “Verizon Media” in January 2019) is a subsidiary of Verizon Communications and an
umbrella company for various digital news and social media platforms. See OATH, https://www.oath.com/
[https://perma.cc/E925-LHLS].
Employees have also been instrumental in extending the battle against mandatory arbitration provisions to
discrimination claims. 446 Under pressure from employees, Google announced in February 2019 that it was ending
all mandatory arbitration for cases of harassment as well as discrimination, effective March 21, 2019. 447
446
See Michelle Cheng, Google Workers Launch Social Media Campaign to Pressure Employers to Drop Forced
Arbitration, INC. (Feb. 3, 2019), https://www.inc.com/michellecheng/google-employees-social-media-
campaign-protest-forced-arbitration.html [https://perma.cc/6K9X-TUJK] (“The group called on Google and
other tech companies—including Netflix, Uber, and Adecco—to change their policies in three ways: make
arbitration optional for all types of disputes, not just for employees but also for contractors and vendors; end
all class-action waivers that prohibit employees from filing lawsuits together; and eliminate gag rules on
arbitration policies.”).
447
See David McCabe, Under Pressure, Google to End Mandatory Arbitration for Employees, AXIOS (Feb. 21,
2019), https://www.axios.com/google-ends-forced-arbitration1550776687-85b148b6-1469-4c1c-b76e-
de774b248e40.html [https://perma.cc/68W6-M48Q]; Sara Ashley O’Brien, Google Eliminating Controversial
Forced Arbitration Practice, CNN (Feb. 21, 2019, 8:10 PM), https://www.cnn.com/2019/02/21/tech/googlemandatory-
arbitration/index.html [https://perma.cc/7HJG-3TY3]. Eighty Google employees signed and published a public
letter calling for Google to end all mandatory agreements. See End Forced Arbitration, 2019 Must Be the Year
to End Forced Arbitration, MEDIUM (Dec. 10, 2018), https://medium.com/@endforcedarbitration/2019-must-
be-the-year-to-end-forced-arbitration-f4f6833abef7 [https://perma.cc/7EKU-QGZY].
Google joined Airbnb and Microsoft as one of the few high technology companies that have eliminated forced
arbitration for discrimination cases as well as those involving sexual misconduct. 448
Google’s alleged gender disparities in pay came to the attention of the U.S. Department of Labor. In a routine audit
of Google to check if the company complied with nondiscrimination and affirmative action statutes, Google turned
over a “snapshot” of employment data for approximately 21,000 workers at its Mountain View, California
headquarters. The U.S. Department of Labor found “systemic compensation disparities against women pretty much
across the entire workforce.” 449 In 2017, Google claimed that the salary records the U.S. Department of Labor had
requested in connection with the government’s discrimination case were “too financially burdensome and
logistically challenging to compile and hand over.” 450
Google stated that it had spent $270,000 to correct “minor pay discrepancies.” 451 However, eleven percent of
Google employees were left out of the analysis. 452 The company was also continuously dogged by claims of unequal
pay related to gender. Four former Google employees, who had various roles in the company, filed a lawsuit alleging
gender-based pay disparities. 453
Government Contracts on Immigration and AI Ethics
On March 6, 2018, Salesforce.com, Inc. (“Salesforce”) announced that its cloud computing and analytics platform
was selected by the U.S. Customs and Border Protection (“CBP”), “the largest federal law enforcement agency of

448
See Melanie Ehrenkranz, After Google’s Historic Walkout, One of Tech’s Big Problems Is Still Being Ignored,
GIZMODO (Nov. 21, 2018, 1:12 PM), https://gizmodo.com/after-googles-historic-walkout-one-of-techs-big-
proble-1830475605 [https://perma.cc/SP6K-77XT].
449
Nitasha Tiku, Google Deliberately Confuses Its Employees, Fed Says, WIRED (July 25, 2017, 3:21 PM),
https://www.wired.com/story/google-department-of-labor-gender-paylawsuit/ [https://perma.cc/E5W4-
X333].
450
Sam Levin, Accused of Underpaying Women, Google Says it’s Too Expensive to Get Wage Data, GUARDIAN
(May 26, 2017, 5:49 PM), https://www.theguardian.com/technology/2017/may/26/google-gender-
discrimination-case-salary-records [https://perma.cc/VGW5-RUEN]. “As a federal contractor, Google is
required to comply with equal opportunity laws and allow investigators to review records.” Id.; see also
Complaint for Denial of Access to Records, at 2–4, Office of Fed. Contract Compliance Programs v. Google,
OFCCP No. R00197955 (Jan. 4, 2017).
451
Eva Short, Google Claims to Have Closed Its Gender Pay Gap, but There’s a Twist, SILICON REPUBLIC (Mar.
16, 2018), https://www.siliconrepublic.com/careers/googlegender-pay-gap [https://perma.cc/4VBD-JBCA].
452
The individuals in this eleven percent may be some of the most highly compensated individuals in the company, making them statistically relevant according to some. Id.
453
First Amended Class Action Complaint, Ellis v. Google, No. CGC 17561299, 2018 WL 1858814 (Cal. Super. Ct.
Jan. 3, 2018); see also Sara Ashley O’Brien, Google Hit with Revised Gender Pay Lawsuit, CNN (Jan. 3, 2018,
7:57 PM), https://money.cnn.com/2018/01/03/technology/google-gender-pay-lawsuit-revised/index.html
[https://perma.cc/H6ET-M9QD].


Following this selection, 650 Salesforce employees signed a letter criticizing the contract.455 Despite the vocal dissent of some employees, Marc Benioff, Chief Executive Officer of Salesforce, argued that while he was personally opposed to the policy of separating children from their families at the border, Salesforce products were not directly involved in such familial separations.456 Protests followed.457

Possibly in response to the poor reception it received in the wake of its partnership with CBP, Salesforce created
the first-ever Office of Ethical and Humane Use of Technology to help address ethical issues that originate from
new technological developments.458

At Microsoft, a similar scenario played out. On June 19, 2018, over 100 employees signed an open letter addressed to CEO Satya Nadella, which was posted on Microsoft’s internal message board.459 The employees were protesting the company’s $19.4 million contract with ICE because the agency had been separating migrant parents from their children at the U.S.-Mexico border.460

454
U.S. Customs and Border Protection Agency Selects Salesforce as Digital Modernization Platform,
SALESFORCE (Mar. 6, 2018), https://investor.salesforce.com/pressreleases/press-release-details/2018/US-
Customs-and-Border-Protection-Agency-SelectsSalesforce-as-Digital-Modernization-Platform/default.aspx
[https://perma.cc/J8W7-BPVZ].
455
Patrick Chu, Salesforce Ohana Asks Marc Benioff to Cancel Contract with the Border Patrol, SAN FRAN. BUS.
TIMES (June 26, 2018, 10:35 AM), https://www.bizjournals.com/sanfrancisco/news/2018/06/26/salesforce-
ohana-asks-benioff-to-nix-fed-contract.html [https://perma.cc/7WPE-NVKT] (“Given the inhumane separation
of children from their parents currently taking place at the border, we believe that our core value of Equality
is at stake and that Salesforce should reexamine our contractual relationship with CBP and speak out against
its practices[.]”). Some students at Stanford University also signed a petition requesting that Salesforce drop its contract with CBP; the students threatened not to interview for jobs at Salesforce if it did not terminate the contract. Sean Captain, Stanford Students Are Vowing Not to Work at Salesforce over Its Border
Patrol Deal, FAST COMPANY (Nov. 14, 2018), https://www.fastcompany.com/90267905/stanford-students-
are-vowingnot-to-work-at-salesforce-over-its-border-patrol-deal [https://perma.cc/AGP5-PVDY]. A Texas
nonprofit, Refugee and Immigrant Center for Education and Legal Services, turned down a $250,000
donation from Salesforce in light of its CBP contract. Laura Sydell, Immigrant Rights Group Turns Down
$250,000 from Tech Firm over Ties to Border Patrol, NPR (July 19, 2018, 12:00
PM),https://www.npr.org/2018/07/19/630358800/immigrantrights-group-turns-down-250-000-from-tech-
firm-over-ties-to-border-pat [https://perma.cc/42XZ-7Y4C].
456
Tom McKay, Salesforce CEO Says It Won’t Sever Ties with Customs and Border Protection, GIZMODO (June
28, 2018, 1:00 AM), https://gizmodo.com/salesforce-ceo-saysthey-wont-sever-ties-with-customs-a-
1827195457 [https://perma.cc/6E58-RFJR].
457
Katie Canales, Activists Marched Outside of the Salesforce Headquarters in San Francisco to Protest the
Company’s Contract with U.S. Customs and Border Protection, BUS. INSIDER (July 9, 2018, 6:02 PM),
https://www.businessinsider.com/salesforce-protestcontract-customs-border-protection-san-francisco-2018-7
[https://perma.cc/PP76-Z8BR] (reporting that tech workers and community activists protested Salesforce’s
contract with CBP).
458
Minda Zetlin, Salesforce Employees Objected to Its Immigration Work. CEO Marc Benioff’s Response Was
Brilliant, INC. (Nov. 26, 2018), https://www.inc.com/mindazetlin/salesforce-ethical-humane-office-marc-
benioff-kara-swisher-employee-activism.html [https://perma.cc/7G45-JET9].
459
Sheera Frenkel, Microsoft Employees Protest Company’s Work with ICE, SEATTLE TIMES (June 19, 2018, 5:21
PM), https://www.seattletimes.com/business/microsoftemployees-protest-companys-work-with-ice/
[https://perma.cc/9KJR-P7JV]. Eventually, over 300 employees signed the letter. Colin Lecher, The Employee
Letter Denouncing Microsoft’s ICE Contract Now Has over 300 Signatures, VERGE (June 21, 2018, 1:18 PM),
https://www.theverge.com/2018/6/21/17488328/microsoft-ice-employees-signaturesprotest
[https://perma.cc/6QRK-2UBT].


The letter stated:

“We believe that Microsoft must take an ethical stand, and put children and families above profits.”461 Employees questioned how working with ICE could comport with the company’s ethical stances.462 Microsoft responded, “Microsoft is dismayed by the forcible separation of children from their families at the border . . . We urge the administration to change its policy and Congress to pass legislation ensuring children are no longer separated from their families.”463 In an internal memo to employees, Mr. Nadella stated, “Microsoft is not working with the U.S. government on any projects related to separating children from their families at the border. Our current cloud engagement with U.S. ICE is supporting legacy mail, calendar, messaging and document management workloads.”464

Microsoft’s relationship with ICE is ongoing. In the cases of both Salesforce and Microsoft, employee-initiated
activism in the form of written advocacy did not have the desired effect. In these particular instances, there was
an ethical component to employees’ concerns. However, the management of each company ultimately decided to
keep the contracts.

The Role of Public Company Reporting Obligations

Some companies already acknowledge the impact of social dynamics in their “Risk Factors” or “Our Business”
sections. For example, Alphabet disclosed the following in its Form 10-K for the year ended December 31, 2018:
“We are subject to increasing regulatory scrutiny as well as changes in public policies governing a wide range of
topics that may negatively affect our business. Changes in social, political, and regulatory conditions or in laws and
policies governing a wide range of topics may cause us to change our business practices. Further, our expansion
into a variety of new fields also raises a number of new regulatory issues. These factors could negatively affect our
business and results of operations in material ways.”465

The “Risk Factors” sections in quarterly (Form 10-Q)466 and annual (Form 10-K)467 reports may provide one avenue to more specifically address the potential impact of employee-initiated private ordering.468 Under Item 105 of Regulation S-K,469 high technology public companies could describe the impact employee-initiated private ordering has on their respective companies.

460
Sheera Frenkel, Microsoft Employees Protest Company’s Work with ICE, SEATTLE TIMES (June 19, 2018, 5:21
PM), https://www.seattletimes.com/business/microsoftemployees-protest-companys-work-with-ice/
[https://perma.cc/9KJR-P7JV].
461
Id.
462
See id. (stating that some employees called for Microsoft to not only cancel its contract with ICE but also
refuse to work with those “who violate international human rights law”).
463
Dina Bass & Mark Bergen, Microsoft Opposes ICE Policy on Migrant Children, SEATTLE TIMES (June 18, 2018,
3:44 PM), https://www.seattletimes.com/business/microsoft-opposes-ice-policy-on-migrant-children/
[https://perma.cc/MK48-NBCN].
464
Tom Warren, Microsoft CEO Plays Down ICE Contract in Internal Memo to Employees, VERGE (June 20, 2018,
4:49 AM), https://www.theverge.com/2018/6/20/17482500/microsoft-ceo-satya-nadella-ice-contract-memo
[https://perma.cc/4Y2M-BPBV]. Mr. Nadella went on to state: “Microsoft has a long history of taking a principled approach to how we live up to our mission of empowering every person and every organization on the planet to achieve more with technology platforms and tools, while also standing up for our enduring values and ethics. . . . Any engagement with any government has been and will be guided by our ethics and principles. We will continue to have this dialogue both within our company and with our stakeholders outside.”
465
Alphabet Form 10-K, supra, at 7–8.
466
See U.S. SEC. & EXCH. COMM’N, FORM 10-Q: GENERAL INSTRUCTIONS (2018),
https://www.sec.gov/about/forms/form10-q.pdf [https://perma.cc/D6T2-LZCB].
467
See U.S. SEC. & EXCH. COMM’N, FORM 10-K: GENERAL INSTRUCTIONS (2018),
https://www.sec.gov/about/forms/form10-k.pdf [https://perma.cc/J6UK-LMFV].
468
See, e.g., Steven L. Schwarcz, Private Ordering, 97 NW. U. L. REV. 319, 319 (2002) (defining commercial
private ordering as the “sharing of regulatory authority with private actors”).


In a July 2017 petition for rulemaking, the Human Capital Management Coalition,470 which then comprised 25 institutional investors with more than $2.8 trillion in assets under management, requested that the U.S. Securities and Exchange Commission (“SEC”) adopt rules requiring “issuers to disclose information about their human capital management policies, practices and performance.”471

The petition led to a recommendation from the SEC’s Investor Advisory Committee in March 2019, which stated:

As the U[.]S[.] transitions from being an economy based almost entirely on industrial production to one that is
becoming increasingly based on technology and services, it becomes more and more relevant for our corporate
disclosure system to evolve to include disclosure regarding intangible assets, such as intellectual property and
human capital. Human capital is increasingly conceptualized as an investable asset. Modernizing the [SEC’s]
framework for corporate reporting generally should reflect these facts, subject to the standard of materiality. 472

The Investor Advisory Committee contrasts the financial market’s view of human capital to the SEC’s: the former
sees it as a source of value and the latter, as a cost. 473 Furthermore, the “available information [about human
capital] is not consistent, verified, or comparable across companies. Differences in [human capital management]
make existing disclosure requirements, such as the 10-K requirement to disclose the number of employees, difficult
for investors to interpret or use.”474

SEC Chairman Jay Clayton outlined a set of principles to guide disclosure requirements and disclosure guidance:
“(1) materiality; (2) comparability; (3) flexibility; (4) efficiency; and (5) responsibility.”475 Clayton stated his “belie[f]
that our disclosure requirements and guidance must evolve over time to reflect changes in markets and industry
while being true to these principles, which in well-designed rules can be mutually reinforcing.” 476 In particular,
Clayton pointed to current human capital disclosure requirements under Items 101 and 102 of Regulation S-K: they “date back to a time when companies relied significantly on plant, property and equipment to drive value. Today, human capital and intellectual property often represent an essential resource and driver of performance for many companies.”477

469
Item 503(c) of Regulation S-K reads in part: (c) Risk factors. Where appropriate, provide under the caption
“Risk Factors” a discussion of the most significant factors that make the offering speculative or risky. This
discussion must be concise and organized logically. Do not present risks that could apply to any issuer or any
offering. Explain how the risk affects the issuer or the securities being offered. Set forth each risk factor
under a subcaption that adequately describes the risk. 17 C.F.R. § 229.503(c) (2019).
470
“The Human Capital Management Coalition (HCMC or Coalition) is a cooperative effort among a diverse group
of asset owners to further elevate human capital management as a critical component in company
performance. The Coalition engages companies and other market participants with the aim of understanding and
improving how human capital management contributes to the creation of long-term shareholder value. The
HCMC is co-chaired by the UAW Retiree Medical Benefits Trust and the California State Teachers' Retirement
System. The Coalition includes 35 institutional investors representing over $6.6 trillion in assets.”
https://www.hcmcoalition.org/about
471
MEREDITH MILLER, HUMAN CAPITAL MANAGEMENT COALITION, RULEMAKING PETITION TO REQUIRE
ISSUERS TO DISCLOSE INFORMATION ABOUT THEIR HUMAN CAPITAL MANAGEMENT POLICIES, PRACTICES
AND PERFORMANCE 1 (July 6, 2017), https://www.sec.gov/rules/petitions/2017/petn4-711.pdf
[https://perma.cc/5Y9R-YR28].
472
See HUMAN CAPITAL MANAGEMENT DISCLOSURE, RECOMMENDATION OF THE INVESTOR ADVISORY
COMMITTEE 1 (Mar. 28, 2019), https://www.sec.gov/spotlight/investor-advisory-committee-2012/human-capital-disclosure-recommendation.pdf [https://perma.cc/7456-FSV5].
473
Id. at 2.
474
Id.
475
Id.
476
Jay Clayton, Remarks to the SEC Investor Advisory Committee, U.S. SEC. & EXCH. COMM’N (Mar. 28, 2019),
https://www.sec.gov/news/public-statement/clayton-remarksinvestor-advisory-committee-032819
[https://perma.cc/WM6G-YJ9N].


In addition, Clayton stated, “[t]he strength of our economy and many of our public companies is due, in significant
and increasing part, to human capital, and for some of those companies human capital is a mission-critical asset.”
One way to address such disclosure, as Clayton suggested, is to require a breakdown of a company’s workforce,
including how this breakdown implicates the company’s cost and value. Information related to key performance
indicators, such as turnover, internal hire and promotion rates, diversity data, and the like could be added to the
disclosures in the business section of SEC filings.

Jennifer Fan recommends that this section also include a summary of the material elements of important company policies and a more robust statement on the competitive conditions in a company’s area of business.478 Intel Corporation (“Intel”), a semiconductor company, provides good examples of what types of disclosures to make and how to organize such information.479

Corporate Social Responsibility Reports

Although not required by law, some companies publish yearly corporate social responsibility reports. For example,
Intel releases such reports annually.480 Its most recent report covers the period from 2020 to 2021 and includes
information on environmental sustainability, supply chain responsibility (to ensure responsible labor systems are in
place), diversity and inclusion, and social impact (diverse workforce and suppliers). Microsoft does something
similar. Microsoft’s corporate social responsibility report covers the amount of money and time spent on educating
people on coding and other skills, the amount of money donated and the number of nonprofits served, and the
company’s environmental impact. 481 Both Intel and Microsoft also provide information based on the Global
Reporting Initiative (“GRI”) Sustainability Reporting Standards. GRI is an independent international organization
that has pioneered sustainability reporting. 482 According to GRI, “[r]eporting with the GRI Standards supports
companies, public and private, large and small, [to] protect the environment and improve society, while at the
same time thriving economically by improving governance and stakeholder relations, enhancing reputations and
building trust.”483

When discussing human capital, Professor Fan suggests disclosing the potential impact of employee-initiated private ordering on company policies and business and legal practices. This type of disclosure may prove important for future studies, as it may illustrate the breadth and depth of employee-initiated private ordering as a general matter. In the case of high technology companies, the government has generally not imposed laws that restrict newer technologies such as artificial intelligence and augmented reality. As a result, companies have navigated an environment with few laws and adopted their own rules with little oversight, which at times has had far-reaching implications.

477
Jay Clayton, Remarks for Telephone Call with SEC Investor Advisory Committee Members, U.S. SEC. & EXCH.
COMM’N (Feb. 6, 2019), https://www.sec.gov/news/publicstatement/clayton-remarks-investor-advisory-
committee-call-020619 [https://perma.cc/HR2T-9BXK].
478
Jennifer Fan supra
479
Intel Corp., Annual Report (Form 10-K) 8–18 (Feb. 1, 2019),
https://www.sec.gov/Archives/edgar/data/50863/000005086319000007/a12292018q4-
10kdocument.htm#s243ba567089a4889a02993ecdceef5c8 [https://perma.cc/UP9H-W233].
480
See, e.g., INTEL, 2020-21 CORPORATE RESPONSIBILITY REPORT (2020), http://csrreportbuilder.intel.com/pdfbuilder/pdfs/CSR-2020-21-Full-Report.pdf
481
See MICROSOFT, CORPORATE SOCIAL RESPONSIBILITY REPORT (2020), https://www.microsoft.com/en-us/corporate-responsibility/report
482
About GRI, GLOBAL REPORTING INITIATIVE (2017), https://www.globalreporting.org/information/about-
gri/Pages/default.aspx [https://perma.cc/86EB-CGZG] (“GRI helps businesses and governments worldwide
understand and communicate their impact on critical sustainability issues such as climate change, human
rights, governance and social wellbeing.”).
483
Id.


For example, due to major missteps in how content was regulated on Facebook, there is credible evidence that Russia influenced the outcome of the 2016 U.S. presidential election.484 Cambridge Analytica, a political consulting firm that used data to determine voter traits, mined private information from the Facebook profiles of more than 50 million users without their permission during the 2016 U.S. presidential election.485 As a result of the scandal, politicians have called for more regulation. One example is Senator Elizabeth Warren’s plan to break up big tech by unwinding mergers that allegedly have stifled competition and by prohibiting companies from both operating a platform utility and offering a service on it (e.g., Amazon could own its merchandise distribution platform or its online store, but not both).486

The rise of employee activism coincided with growing public awareness of the role high technology companies play in a variety of issues. These issues range from the rise of artificial intelligence to privacy breaches to the dissemination of fake news, all of which have potentially immense societal implications.487 In March 2019, as mentioned above, Senator Elizabeth Warren proposed regulating high technology companies like Alphabet, Amazon, and Facebook, arguing that their concentrated power has adverse societal implications.488 Instead of being lauded for their innovations, high technology companies now find those innovations and their potential impacts on society scrutinized to a greater degree.489 Employees of high technology companies have begun to use a number of different methods to raise awareness and call for change, both in response to long-standing problems that have been brought to light and to potentially problematic uses of technology under development.

484
See Exposing Russia’s Effort to Sow Discord: The Internet Research Agency and Advertisements, U.S. HOUSE OF REPRESENTATIVES, PERMANENT SELECT COMMITTEE ON INTELLIGENCE, https://intelligence.house.gov/social-media-content/ [https://perma.cc/MTQ5-SN4A] (last visited Sep. 12, 2019).
485
Matthew Rosenberg et al., How Trump Consultants Exploited the Facebook Data of Millions, N.Y. TIMES (Mar.
17, 2018), https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html
[https://perma.cc/NV5D-MTHV].
486
See Michael Hiltzik, Column: Sen. Warren’s Plan to Break up the Big Tech Companies is Good, but Too
Narrow, L.A. TIMES (Mar. 21, 2019, 6:00 AM), https://www.latimes.com/business/hiltzik/la-fihiltzik-warren-
tech-breakup-20190321-story.html [https://perma.cc/6AK6-6WDG].
487
Can Employees Change the Ethics of Tech Firms?, KNOWLEDGE@WHARTON (Nov. 13, 2018),
http://knowledge.wharton.upenn.edu/article/can-tech-employees-change-theethics-of-their-firms/
[https://perma.cc/ZR8N-WN7V] (“‘Skilled developers and engineers have always placed value on aspects of
work beyond monetary compensation, like the skills they can learn, the technologies they use, or the work
environment itself,’ said Prasanna Tambe, Wharton professor of operations, information and decisions.
‘Increasingly—and especially given the political environment—a key part of this consideration for workers has
become the moral and ethical implications of the choices made by their employers, ranging from the
treatment of employees or customers to the ethical implications of the projects on which they work. This is
especially true given the central role of “big tech” in new fears about information, rights, and privacy and the
growing feeling that a lack of oversight in this sector has been harmful.’”) (quoting Prasanna Tambe, Wharton
professor of operations, information and decisions).
488
Troy Wolverton, Elizabeth Warren Pulled a Ninja Move to Turn Tech Angst into a Crackdown with Real Teeth,
and Tech Is Going to Suffer Even If She’s Not President, BUS. INSIDER (Mar. 8, 2019, 6:25 PM),
https://www.businessinsider.com/elizabeth-warren-callto-break-up-amazon-google-is-a-real-threat-2019-3
[https://perma.cc/33X8-8A94].
489
See Kim Hart, David McCabe & Mike Allen, Google CEO: BigTech Scrutiny Is “Here to Stay,” AXIOS (Dec. 12,
2018), https://www.axios.com/google-sundar-pichaiinterview-big-tech-scrutiny-40d655a7-25f2-4414-b8fb-
ac4f65ab62e4.html [https://perma.cc/2UNQ-AEHE] (discussing how technology issues, such as privacy and
artificial intelligence, are driving the “scrutiny and skepticism” affecting technology companies).

At their most effective, these activities result in companies implementing legal changes. But when employee-
initiated private ordering fails, and companies do not acquiesce to employee demands, these companies still need
to explain themselves in the court of public opinion.

High technology employees have sought changes in their workplaces. Their efforts did not always succeed, but in some cases they led to the modification of legal norms and the adoption of AI ethics codes. By taking action, employees can force their companies to be more transparent, and they may ultimately have the ability to effect legal changes and implement different legal and/or ethical norms.

These case studies highlight the potential role of employees as a check on companies which, in trying to innovate quickly, may be operating in legal and/or ethical gray areas.490 Employee-initiated action within high technology companies thus clearly influences how such companies’ conduct is normatively assessed.

Some recent rules adopted by private and public high technology companies in light of employee pressure, without
governmental action, include eliminating mandatory arbitration agreements for sexual misconduct, retreating from
military-related work, and entering (or choosing not to enter) the lucrative Chinese market due to censorship of
content by the Chinese government. Five areas of particular significance and impact are: artificial intelligence,
augmented reality, censorship, gender issues (related to mandatory arbitration provisions and disparity in salaries
between men and women), and immigration. With the exception of gender issues, these areas generally involve
proposed or existing government-related contracts.

Conclusions

If our goal is a more comprehensive AI policy, increasing multidisciplinarity and diversity in the development of, and discourse on, AI is critical. Drawing on multidisciplinary as well as community expertise goes a long way toward supporting responsible AI development that addresses the broader implications of AI.

Industry self-governance is unlikely to fully protect the public interest when it comes to powerful, general-purpose technologies. It is therefore encouraging to see significant efforts in AI ethics being made by those in government, such as the U.S. Department of Defense (DOD) and its Joint Artificial Intelligence Center (JAIC), as well as by civil society.

Despite the differences we see, and shall continue to see, between nations’ approaches to AI, there are also numerous synergies, and there are many opportunities for governments and organizations to coordinate internationally. This is likely to become increasingly important as many of the challenges and opportunities of AI extend well beyond national borders. AI regulation can be hard for national governments to undertake on their own. There are certainly issues of national competitiveness, but failing to partner internationally on AI development will not serve anyone's interests.

Recent years have seen a significant increase in government attention to AI, as at least 27 national governments have articulated plans or initiatives for encouraging and managing the development of AI technologies.491 Nations thus far have adopted highly divergent approaches in their AI policies, and there is significant variation in how they are preparing for security threats and opportunities.492

International coordination and cooperation on AI begin with a common understanding of what is at stake and what outcomes are desired for the future. That shared language now exists in the Organization for Economic Co-operation and Development (OECD) AI Principles,493 which are being leveraged to support partnerships, multilateral agreements, and the global deployment of AI systems.

490
Elizabeth Pollman & Jordan M. Barry, Regulatory Entrepreneurship, 90 S. CAL. L. REV. 383, 390 (2017)
(discussing high technology startups intentionally operating in areas of legal ambiguity and changing existing
law through their business).
491
Jessica Cussins Newman, Toward AI Security: Global Aspirations for a More Resilient Future, Center for Long-Term Cybersecurity (Feb. 2019), https://cltc.berkeley.edu/wp-content/uploads/2019/02/Toward_AI_Security.pdf
492
Id.
493
https://www.oecd.org/going-digital/ai/principles/

In May 2019, 42 countries adopted the OECD AI Principles,494 a legal recommendation that includes five principles and five recommendations related to the use of AI. To ensure the successful implementation of the Principles, the OECD launched the AI Policy Observatory495 in February 2020. The Observatory publishes practical guidance about how to implement the AI Principles and supports a live database of AI policies and initiatives globally. It also compiles metrics and measurements of global AI development and uses its convening power to bring together the private sector, governments, academia, and civil society.

There is much rhetoric about a “race for AI supremacy” between nations.496 Given the fierce competition among some nations, it was encouraging that dozens of them, including, notably, the U.S., China, and Russia, agreed to a common set of guiding principles for AI in June 2019, when the G20 gave its unanimous support to the OECD AI Principles.497

Intergovernmental initiatives play a valuable role in supporting the development of responsible AI, and the OECD AI recommendation is an encouraging example of such international collaboration. The OECD Principles on Artificial Intelligence promote artificial intelligence that is innovative and trustworthy and that respects human rights and democratic values. They were adopted in May 2019 by OECD member countries when they approved the OECD Council Recommendation on Artificial Intelligence, making the OECD AI Principles the first such principles signed on to by governments.

The OECD AI Principles set standards for AI that are practical and flexible enough to stand the test of time in a
rapidly evolving field. They complement existing OECD standards in areas such as privacy, digital security risk
management, and responsible business conduct.

In June 2019, the G20 adopted human-centered AI Principles that draw from the OECD AI Principles.499 The Guidance on Regulation of Artificial Intelligence Applications, issued by the White House Office of Science and Technology Policy (OSTP)498 and the Office of Management and Budget (OMB),500 was signed on November 17, 2020.501 Over 40 countries, including the U.S. as well as some non-OECD members, have signed on to the OECD AI Principles, the first intergovernmental AI standard to date.

Thus, international coordination on AI is not only critical but also possible. AI will impact everyone, so everyone should have a say. At these relatively early stages of AI governance, it is especially valuable and important that we make the effort to hear from all people, including those who struggle to be heard.

494
https://www.oecd.org/going-digital/ai/principles/
495
https://www.oecd.ai/
496
Stephen Cave and Seán Ó hÉigeartaigh, “An AI Race for Strategic Advantage: Rhetoric and Risks,” AI Ethics and Society 2018, Volume 1, 2018,
https://www.researchgate.net/publication/330280774_An_AI_Race_for_Strategic_Advantage_Rhetoric_and_
Risks/citation/download.
497
https://oecd-innovation-blog.com/2020/07/24/g20-artificial-intelligence-ai-principles-oecd-
report/#:~:text=In%202019%2C%20G20%20Leaders%20welcomed,%2C%20transparency%2C%20robust
ness%20and%20accountability.
498
The White House Office of Science and Technology Policy (OSTP) advises the President and others within the
Executive Office of the President on the scientific, engineering, and technological aspects of the economy,
national security, homeland security, health, foreign relations, and the environment.
https://www.whitehouse.gov/ostp/
499
https://www.mofa.go.jp/files/000486596.pdf
500
The Office of Management and Budget oversees the implementation of the President’s vision across the
Executive Branch. OMB carries out its mission through five main functions across executive departments and
agencies. https://www.whitehouse.gov/omb/
501
https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf
