Algorithm v. Algorithm
ABSTRACT
Critics raise alarm bells about governmental use of digital
algorithms, charging that they are too complex, inscrutable, and prone
to bias. A realistic assessment of digital algorithms, though, must
acknowledge that government is already driven by algorithms of
arguably greater complexity and potential for abuse: the algorithms
implicit in human decision-making. The human brain operates
algorithmically through complex neural networks. And when humans
make collective decisions, they operate via algorithms too—those
reflected in legislative, judicial, and administrative processes. Yet these
human algorithms undeniably fail and are far from transparent. On an
individual level, human decision-making suffers from memory
limitations, fatigue, cognitive biases, and racial prejudices, among other
problems. On an organizational level, humans succumb to groupthink
and free riding, along with other collective dysfunctionalities. As a
result, human decisions will in some cases prove far more problematic
than their digital counterparts. Digital algorithms, such as machine
learning, can improve governmental performance by facilitating
outcomes that are more accurate, timely, and consistent. Still, when
deciding whether to deploy digital algorithms to perform tasks
currently completed by humans, public officials should proceed with care.
TABLE OF CONTENTS
Introduction .......................................................................................... 1282
I. Limitations of Human Algorithms ................................................ 1288
A. Physical Limitations ........................................................... 1289
B. Biases ................................................................................... 1293
C. Group Challenges .............................................................. 1299
II. The Promise of Digital Algorithms .............................................. 1304
A. Digital Algorithms and Their Virtues.............................. 1305
B. Digital Algorithms Versus Human Algorithms.............. 1309
C. Human Errors with Digital Algorithms .......................... 1314
III. Deciding to Deploy Digital Algorithms ...................................... 1318
A. Selecting a Multicriteria Decision Framework ............... 1318
B. Key Criteria in Choosing Digital Algorithms ................. 1322
C. Putting Digital Algorithms in Place ................................. 1333
Conclusion ............................................................................................. 1339
INTRODUCTION
Computerized algorithms increasingly automate tasks that
previously had been performed by humans.1 They now routinely assist
Lehr, Regulating by Robot]; David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars
Should Learn About Machine Learning, 51 U.C. DAVIS L. REV. 653, 669–702 (2017). Machine-
learning algorithms come in many forms and are referred to by a variety of terms. See Cary Coglianese
& David Lehr, Transparency and Algorithmic Governance, 71 ADMIN. L. REV. 1, 2 n.2 (2019)
[hereinafter Coglianese & Lehr, Transparency] (“By ‘artificial intelligence’ and ‘machine learning,’
we refer . . . to a broad approach to predictive analytics captured under various umbrella terms,
including ‘big data analytics,’ ‘deep learning,’ ‘reinforcement learning,’ ‘smart machines,’ ‘neural
networks,’ ‘natural language processing,’ and ‘learning algorithms.’”). The particular type of machine-
learning algorithm deployed for any specific use will no doubt affect its performance in that setting.
For our purposes here, we focus generically and broadly on the class of digital algorithms that today
can drive automated forecasting and decision-making tools with the potential to substitute for or
complement traditional human decision-making within government. For further elaboration of what
we mean by machine-learning algorithms and AI, see infra Part II.A.
2. See Claire Cain Miller, Can an Algorithm Hire Better Than a Human?, N.Y. TIMES (June 25,
2015), https://www.nytimes.com/2015/06/26/upshot/can-an-algorithm-hire-better-than-a-human
.html [https://perma.cc/39MF-8QP7].
3. See Scott Zoldi, How To Build Credit Risk Models Using AI and Machine Learning, FICO
BLOG (Apr. 6, 2017), http://www.fico.com/en/blogs/analytics-optimization/how-to-build-credit-risk-
models-using-ai-and-machine-learning [https://perma.cc/N3UW-XKRB].
4. See Jigar Patel, Sahil Shah, Priyank Thakkar & K. Kotecha, Predicting Stock and Stock
Price Index Movement Using Trend Deterministic Data Preparation and Machine Learning
Techniques, 42 EXPERT SYS. WITH APPLICATIONS 259, 259 (2015).
5. See Using Machine Learning on Computer Engine To Make Product Recommendations,
GOOGLE CLOUD PLATFORM (Feb. 14, 2017), https://cloud.google.com/solutions/recommenda
tions-using-machine-learning-on-compute-engine [https://perma.cc/C4D3-9PCE].
6. See, e.g., PAUL CERRATO & JOHN HALAMKA, THE DIGITAL RECONSTRUCTION OF
HEALTHCARE: TRANSITIONING FROM BRICK AND MORTAR TO VIRTUAL CARE 82–84 (2021);
Alexis C. Madrigal, The Trick That Makes Google’s Self-Driving Cars Work, ATLANTIC (May 15,
2014), http://www.theatlantic.com/technology/archive/2014/05/all-the-world-a-track-the-trick-that-makes-
googles-self-driving-cars-work/370871 [https://perma.cc/9CWC-HTL6]; Nikhil Dandekar, What
Are Some Uses of Machine Learning in Search Engines?, MEDIUM (Apr. 7, 2016),
https://medium.com/@nikhilbd/what-are-some-uses-of-machine-learning-in-search-engines-5770
f534d46b [https://perma.cc/7DJD-TDPX]; Steffen Herget, Machine Learning and AI: How
Smartphones Get Even Smarter, NEXTPIT (Jan. 24, 2018), https://www.androidpit.com/machine-
learning-and-ai-on-smartphones [https://perma.cc/XJ3J-CV6L]. For a survey of the state of the
art in AI and its varied applications, see generally MICHAEL L. LITTMAN ET AL., GATHERING
STRENGTH, GATHERING STORMS: THE ONE HUNDRED YEAR STUDY ON ARTIFICIAL
INTELLIGENCE (AI100) 2021 STUDY PANEL REPORT (2021).
7. See, e.g., Ronen Bergman & Farnaz Fassihi, The Scientist and the A.I.-Assisted, Remote-
Control Killing Machine, N.Y. TIMES (Sept. 18, 2021), https://www.nytimes.com/2021/09/18/
world/middleeast/iran-nuclear-fakhrizadeh-assassination-israel.html [https://perma.cc/5SLN-D5XJ];
Patrick Tucker, Spies Like AI: The Future of Artificial Intelligence for the US Intelligence
Community, DEF. ONE (Jan. 27, 2020), https://www.defenseone.com/technology/2020/01/spies-ai-
future-artificial-intelligence-us-intelligence-community/162673 [https://perma.cc/CFQ4-GNEX];
PAUL SCHARRE, ARMY OF NONE: AUTONOMOUS WEAPONS AND THE FUTURE OF WAR 5–6
(2018); Andrew Tarantola, The Pentagon Is Hunting ISIS Using Big Data and Machine Learning,
ENGADGET (May 15, 2017), https://www.engadget.com/2017/05/15/the-pentagon-is-hunting-isis-
using-big-data-and-machine-learning [https://perma.cc/H5UC-VQV9].
8. For discussion of algorithmic tools in the criminal law context, see generally, for
example, Richard Berk, Lawrence Sherman, Geoffrey Barnes, Ellen Kurtz & Lindsay Ahlman,
Forecasting Murder Within a Population of Probationers and Parolees: A High Stakes Application
of Statistical Learning, 172 J. ROYAL STAT. SOC’Y 191 (2009); Sandra G. Mayson, Bias In, Bias
Out, 128 YALE L.J. 2218 (2019); Cary Coglianese & Lavi M. Ben Dor, AI in Adjudication and
Administration, 86 BROOK. L. REV. 791 (2021); RICHARD A. BERK, ARUN KUMAR
KUCHIBHOTLA & ERIC TCHETGEN TCHETGEN, IMPROVING FAIRNESS IN CRIMINAL JUSTICE
ALGORITHMIC RISK ASSESSMENTS USING OPTIMAL TRANSPORT AND CONFORMAL
PREDICTION SETS (2021), https://arxiv.org/pdf/2111.09211.pdf [https://perma.cc/JN8C-WCHS].
9. See Coglianese & Ben Dor, supra note 8, at 814–27. See generally DAVID FREEMAN
ENGSTROM, DANIEL E. HO, CATHERINE M. SHARKEY & MARIANO-FLORENTINO CUÉLLAR,
GOVERNMENT BY ALGORITHM: ARTIFICIAL INTELLIGENCE IN FEDERAL ADMINISTRATIVE
AGENCIES 22–29 (2020), https://www-cdn.law.stanford.edu/wp-content/uploads/2020/02/ACUS-
AI-Report.pdf [https://perma.cc/TWE9-JLA5] (examining the deployment of AI by federal
agencies).
10. See, e.g., Danielle Keats Citron, Technological Due Process, 85 WASH. U. L. REV. 1249,
1313 (2008); danah boyd & Kate Crawford, Critical Questions for Big Data: Provocations for a
Cultural, Technological, and Scholarly Phenomenon, 15 INFO. COMMC’N & SOC. 662, 673–75
(2012); FRANK PASQUALE, THE BLACK BOX SOCIETY: THE SECRET ALGORITHMS THAT
CONTROL MONEY AND INFORMATION 3 (2015); CATHY O’NEIL, WEAPONS OF MATH
DESTRUCTION: HOW BIG DATA INCREASES INEQUALITY AND THREATENS DEMOCRACY 12–13
(2016); Ryan Calo & Danielle K. Citron, The Automated Administrative State: A Crisis of Legitimacy,
70 EMORY L.J. 797, 799–804 (2021).
11. See, e.g., Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 CALIF.
L. REV. 671, 680–87 (2016); VIRGINIA EUBANKS, AUTOMATING INEQUALITY: HOW HIGH-TECH
TOOLS PROFILE, POLICE, AND PUNISH THE POOR 10–13 (2017); Kate Crawford, Think Again:
Big Data, FOREIGN POL’Y (May 9, 2013), http://www.foreignpolicy.com/articles/2013/05/09/
think_again_big_data [https://perma.cc/D9M6-3BBA].
12. See, e.g., Jenna Burrell, How the Machine ‘Thinks’: Understanding Opacity in Machine
Learning Algorithms, 3 BIG DATA & SOC’Y 1, 1–2 (2016); 2018 Program, ACM FACCT CONF.
(2018), https://fatconference.org/2018/program.html [https://perma.cc/4AG2-UMWX] (discussing
work on algorithmic explanation).
13. See, e.g., Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for
Automated Predictions, 89 WASH. L. REV. 1, 8 (2014); Margaret Hu, Algorithmic Jim Crow, 86
FORDHAM L. REV. 633, 643–44 (2017); Dorothy E. Roberts, Digitizing the Carceral State, 132
HARV. L. REV. 1695, 1695, 1697 (2019) (reviewing EUBANKS, supra note 11).
14. Karen Yeung, Algorithmic Regulation: A Critical Interrogation, 12 REGUL. &
GOVERNANCE 505, 517 (2018); see also Samuel Gibbs, Elon Musk: Artificial Intelligence Is Our
Biggest Existential Threat, GUARDIAN (Oct. 27, 2014), https://www.theguardian.com/technology/
2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat [https://perma.cc/8C6P-
YZYZ] (“Elon Musk has . . . declar[ed artificial intelligence] the most serious threat to the
survival of the human race.”); Rory Cellan-Jones, Stephen Hawking Warns Artificial Intelligence
Could End Mankind, BBC NEWS (Dec. 2, 2014), http://www.bbc.com/news/technology-30290540
[https://perma.cc/S9Y5-ZK7W] (“The development of full artificial intelligence could spell the
end of the human race.”).
15. See, e.g., Berkeley J. Dietvorst, Joseph P. Simmons & Cade Massey, Algorithm Aversion:
People Erroneously Avoid Algorithms After Seeing Them Err, 144 J. EXPERIMENTAL PSYCH. 114,
114 (2015); Benjamin Chen, Alexander Stremitzer & Kevin Tobia, Having Your Day in Robot
Court 4 (UCLA Pub. L., Rsch. Paper 21-20, May 7, 2021), https://papers.ssrn.com/sol3/papers.cfm
simply trust machines less than they trust humans, even when the
machines are shown to be more accurate and fairer. Humans may be
generally less forgiving when machines make mistakes than when
humans do.16 Perhaps unsurprisingly, some commentators now even
consider whether governments must honor a “right to a human
decision.”17
Still, critics of digital algorithms do express concerns that merit
consideration.18 It is no mere speculation that automated digital systems can suffer from biases, generate controversies, or precipitate other problems arising from the way humans design and use them.19 Yet too often
critics dismiss machine learning categorically, as if the mere existence
of any imperfections means that artificial intelligence (“AI”) should
never be used. Such critics can make it seem as if machine-learning
algorithms produce problems that are entirely new or distinctively
complex, inscrutable, or susceptible to bias. In reality, the complaints leveled against digital algorithms are neither truly distinctive nor entirely new. Human decision-making is prone to many
of the same kinds of problems.20
Any meaningful assessment of AI in the public sector must
therefore start with an acknowledgment that government as it exists
today is already grounded in a set of imperfect algorithms. These existing
algorithms are inherent in human decision-making. The human brain
has its own internal wiring that might be said to operate like a complex
algorithmic system in certain respects. Neural networks—one category
of machine-learning algorithms—even draw their name from the
physical structures underlying human cognition. In addition to the
21. As the U.S. Supreme Court has acknowledged, “[i]t is an unalterable fact that our
judicial system, like the human beings who administer it, is fallible.” Herrera v. Collins, 506 U.S.
390, 415 (1993). A particularly salient example of fallibility in public administration can be found
in the human misjudgments in response to the COVID-19 crisis. See generally MICHAEL LEWIS,
THE PREMONITION: A PANDEMIC STORY 85, 160–85, 295 (2021) (chronicling misperceptions and
missteps that impeded successful public health responses). It is possible to identify a vast array of
other failures in human-driven government in recent years. See, e.g., PAUL C. LIGHT, A CASCADE
OF FAILURES: WHY GOVERNMENT FAILS, AND HOW TO STOP IT 3–7 (2014), https://www.brook
ings.edu/wp-content/uploads/2016/06/Light_Cascade-of-Failures_Why-Govt-Fails.pdf [https://
perma.cc/HS79-NDRP]. The law itself—a product of human decision-making—is said to be
riddled with incoherencies in its substance and implementation. See, e.g., LEO KATZ, WHY THE
LAW IS SO PERVERSE (2011); Cass R. Sunstein, Daniel Kahneman, David Schkade & Ilana Ritov,
Predictably Incoherent Judgments, 54 STAN. L. REV. 1153, 1154 (2002); MAX H. BAZERMAN &
ANN E. TENBRUNSEL, BLIND SPOTS: WHY WE FAIL TO DO WHAT’S RIGHT AND WHAT TO DO
ABOUT IT 96–111 (2011). For further discussion of the limitations of human decision-making, see
infra Part I.
22. See infra Part II.B.
automated decision systems can fall prey to problems too; they are,
after all, designed and operated by humans subject to the limitations
described in Part I.
That machine-learning algorithms could fail means that their
design and use, especially by governments, should be carried out with
due care and attentive oversight. The aim should be to develop and
deploy machine-learning algorithms that can improve on the status
quo—that is, do a better job than humans of avoiding errors, biases,
and other problems. Achieving that aim calls for smart human
decision-making about when and how to rely on digital algorithms.
Part III thus presents general considerations to help guide public
officials seeking to make sound choices about when and how to use
digital algorithms. In addition to focusing officials’ attention on the
extent to which a shift to digital algorithms will improve upon the status
quo, we emphasize in Part III the need to consider whether a new use
of digital algorithms would likely satisfy key preconditions for
successful deployment of machine learning and whether a system
driven by digital algorithms would actually deliver better outcomes.
We also emphasize the need to ensure adequate planning, careful
procurement of private contractor services, and appropriate
opportunities for public participation in the design, development, and
ongoing oversight of machine-learning systems.
23. For recent syntheses of such research, see generally RICHARD H. THALER,
MISBEHAVING: THE MAKING OF BEHAVIORAL ECONOMICS (2015) and DANIEL KAHNEMAN,
THINKING, FAST AND SLOW (2011). For a synthesis of research on cognitive biases in legal
decision-making, see Alicia Lai, Brain Bait: Effects of Cognitive Biases on Scientific Evidence in
Legal Decision-Making 8–12 (2018) (A.B. thesis, Princeton University) (on file with the Princeton
University Library).
24. DAVID EPSTEIN, RANGE: WHY GENERALISTS TRIUMPH IN A SPECIALIZED WORLD 22
(2019). Kasparov made this declaration after being defeated by the IBM supercomputer Deep
Blue. See id.
A. Physical Limitations
Physical limitations constitute biological ceilings on human performance. As children mature into adults, their brain circuitry is strengthened with use—but it can also be weakened by neglect, injury, illness, or advanced age. Human decision-making, in short, is naturally bounded by biological constraints. We highlight here several physical qualities that can hamper human decision-making.
Memory. Neuroscientists have estimated that human memory has
the capacity to store as much as 10^8432 bits of information—making the
human brain a high-capacity storage device.27 Nevertheless, practical
decision-making often depends less on long-term aggregated memory
and more on short-term working memory. Typical human working
memory is limited to roughly four items at a time.28 Decision-makers who
25. We adopt these categorizations simply for ease of presentation, not because they are
airtight or comprehensive. Nothing of consequence hinges on the categories into which we have
grouped these human limitations.
26. We build on others who have recognized that the limitations of human decision-making
can impede sound administrative policymaking. See, e.g., Susan E. Dudley & Zhoudan Xie,
Nudging the Nudger: Toward a Choice Architecture for Regulators, 16 REGUL. & GOVERNANCE
261, 261 (2022).
27. Yingxu Wang, Dong Liu & Ying Wang, Discovering the Capacity of Human Memory, 4
BRAIN & MIND 189, 193–96 (2003). Others have obtained estimates around a billion bits—much
lower than 10^8432—but still substantial. Thomas K. Landauer, How Much Do People Remember?
Some Estimates of the Quantity of Learned Information in Long-Term Memory, 10 COGNITIVE
SCI. 477, 491 (1986).
28. See Nelson Cowan, The Magical Number 4 in Short-Term Memory: A Reconsideration
of Mental Storage Capacity, 24 BEHAV. BRAIN SCI. 87, 114 (2001). Cowan’s work synthesizes a
vast literature that usually takes as its starting point George A. Miller, The Magical Number
Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information, 63 PSYCH.
REV. 81 (1956). As extensive commentary published in conjunction with Cowan’s article itself
indicates, the relevant literature on memory is vast and the issues are complex. We are, by
necessity, simplifying issues and distilling relevant research here and throughout our presentation
of the various limitations on human decision-making discussed in Part I. Although the precise
characterization of memory capacity may vary across studies, it is clear that “[t]here are real
biological limits to how much information we can process at any given time.” LEIDY KLOTZ,
SUBTRACT: THE UNTAPPED SCIENCE OF LESS 226 (2021).
29. Cf. ATUL GAWANDE, THE CHECKLIST MANIFESTO 13 (2009) (“[T]he volume and
complexity of what we know has exceeded our individual ability to deliver its benefits correctly,
safely, or reliably.”).
30. See PAUL CERRATO & JOHN HALAMKA, REINVENTING CLINICAL DECISION SUPPORT 1, 6 (2019).
31. Surgical Safety Checklist, WORLD HEALTH ORG. (2020), https://apps.who.int/iris/
bitstream/handle/10665/44186/9789241598590_eng_Checklist.pdf [https://perma.cc/HZ7C-6TTA].
32. Eric Nagourney, Checklist Reduces Deaths in Surgery, N.Y. TIMES (Jan. 14, 2009),
https://www.nytimes.com/2009/01/20/health/20surgery.html?_r=1&ref=health [https://perma.cc/
59QW-YQBK] (reporting that deaths declined by more than 40 percent and complications by one-third).
33. Paula Alhola & Päivi Polo-Kantola, Sleep Deprivation: Impact on Cognitive
Performance, 3 NEUROPSYCHIATRIC DISEASE & TREATMENT 553, 553, 556 (2007).
34. See, e.g., NAT’L SAFETY COUNCIL, CALCULATING THE COST OF POOR SLEEP:
METHODOLOGY 2 (2017) (“Collectively, costs attributable to sleep deficiency in the U.S.
exceeded $410 billion in 2015, equivalent to 2.28% of gross domestic product.”); Katrin Uehli,
Amar J. Mehta, David Miedinger, Kerstin Hug, Christian Schindler, Edith Holsboer-Trachsler,
Jörg D. Leuppi & Nino Künzli, Sleep Problems and Work Injuries: A Systematic Review and Meta-
Analysis, 18 SLEEP MED. REV. 61, 61 (2014).
35. See, e.g., Sarah Kessler & Lauren Hirsch, Wall Street’s Sleepless Nights, N.Y. TIMES
(Mar. 27, 2021), https://www.nytimes.com/2021/03/27/business/dealbook/banker-burnout.html
average of three to four days every week.36 Fatigue and related stresses
impair workplace decisions and performance. The National Institute
for Occupational Safety and Health notes that “[h]igh levels of fatigue
can affect any worker in any occupation or industry with serious
consequences for worker safety and health.”37
Fatigue has been documented to impair human behavior and
decision-making in other contexts as well. According to research by the
National Transportation Safety Board, 40 percent of highway accidents
involve fatigue.38 Fatigue among orthopedic surgical residents
increases risks of medical error by 22 percent.39
In the legal system, the treatment that individuals receive also
appears to be affected by fatigue-related vagaries of human judgment.
One study tracked judicial rulings on parole decisions across three
decision sessions, each punctuated by food breaks.40 At the start of
each session, the well-rested judges issued favorable decisions approximately 65 percent of the time on average, a rate that dropped to nearly zero as the judges fatigued.41 After each food break, the rate reset to roughly 65 percent and the cycle continued.42
Aging. As people age, the brain shrinks in volume, and memory
and information processing speeds decline.43 Many older individuals
succumb to neurodegenerative disorders, such as Alzheimer’s disease
44. See Yujun Hou, Xiuli Dan, Mansi Babbar, Yong Wei, Steen G. Hasselbalch, Deborah
L. Croteau & Vilhelm A. Bohr, Ageing as a Risk Factor for Neurodegenerative Disease, 15
NATURE REVS. 565, 565 (2019); ALZHEIMER’S ASS’N, 2021 ALZHEIMER’S DISEASE FACTS AND
FIGURES: SPECIAL REPORT: RACE, ETHNICITY AND ALZHEIMER’S IN AMERICA 19 (2021),
https://www.alz.org/media/documents/alzheimers-facts-and-figures.pdf [https://perma.cc/V879-HT67].
45. Joseph Goldstein, Life Tenure for Federal Judges Raises Issues of Senility, Dementia,
PROPUBLICA (Jan. 18, 2011), https://www.propublica.org/article/life-tenure-for-federal-judges-
raises-issues-of-senility-dementia [https://perma.cc/73UW-U7S5].
46. Francis X. Shen, Aging Judges, 81 OHIO ST. L.J. 235, 238–39 (2020).
47. Such disorders include drug addiction, alcoholism, intermittent explosive disorder,
oppositional defiant disorder, and pyromania. T.W. Robbins & J.W. Dalley, Impulsivity, Risky
Choice, and Impulse Control Disorders: Animal Models, in DECISION NEUROSCIENCE: AN
INTEGRATIVE APPROACH 81, 81 (Jean-Claude Dreher & Léon Tremblay eds., 2017).
48. Table 2. 12-month Prevalence of DSM-IV/WMH-CIDI Disorders by Sex and Cohort,
HARV. MED. SCH., https://www.hcp.med.harvard.edu/ncs/ftpdir/NCS-R_12-month_Prevalence_
Estimates.pdf [https://perma.cc/EL3K-GY6T].
49. A study commissioned by the American Bar Association indicates that more than one
third of all attorneys in the United States appear to experience problematic drinking. Addiction
Recovery Poses Special Challenges for Legal Professionals, BUTLER CTR. FOR RSCH. (Mar. 16,
2017), https://www.hazeldenbettyford.org/education/bcr/addiction-research/substance-abuse-leg
al-professionals-ru-317 [https://perma.cc/8Y4Q-LZD8].
50. See Daniele Zavagno, Olga Daneyko & Rossana Actis-Grosso, Mishaps, Errors, and
Cognitive Experiences: On the Conceptualization of Perceptual Illusions, 9 FRONTIERS HUM.
NEUROSCIENCE 1, 2 (2015).
B. Biases
Perhaps in part because of their physical limitations, humans
regularly rely on a series of cognitive shortcuts. These shortcuts may
reflect traits that have given humans evolutionary advantages. But they
can lead to systematic errors in information processing and failures of
administrative government.52 In this Section, we detail just a few of the
widely documented biases that predictably contribute to errors in
human judgment. It may be possible to counteract some of these
tendencies through what is known as debiasing—but not always and
not necessarily completely.53
Availability Heuristic. The availability heuristic, or availability bias, refers to the human tendency to treat the examples that come most easily to mind as the most important information or the most frequent occurrences.54
When a hazard is particularly salient or frequently observed, the
51. For instance, misperceptions can contribute to misidentification of military targets. Eric
Schmitt & Anjali Singhvi, Why American Airstrikes Go Wrong, N.Y. TIMES (Apr. 14, 2017),
https://www.nytimes.com/interactive/2017/04/14/world/middleeast/why-american-airstrikes-
go-wrong.html [https://perma.cc/5LZY-BY4H]. They can also undergird conflict and miscommunication
in interactions between law enforcement and members of the public. MALCOLM GLADWELL, TALKING
TO STRANGERS: WHAT WE SHOULD KNOW ABOUT THE PEOPLE WE DON’T KNOW 342–46 (2019).
52. See, e.g., Jeffrey J. Rachlinski & Cynthia R. Farina, Cognitive Psychology and Optimal
Government Design, 87 CORNELL L. REV. 549, 553–54 (2002); Jan Schnellenbach & Christian
Schubert, Behavioral Public Choice: A Survey 1 (Inst. for Econ. Rsch., Univ. of Freiburg, Working
Paper No. 14/03, 2014); George Dvorsky, The 12 Cognitive Biases that Prevent You from Being
Rational, GIZMODO (Jan. 9, 2013), http://io9.com/5974468/the-most-common-cognitive-biases-
that-prevent-you-from-being-rational [https://perma.cc/E5YG-75DY]. This is not to say, of
course, that these biases always lead to problems. Gerd Gigerenzer, Heuristics, in HEURISTICS
AND THE LAW 17, 40–41 (Gerd Gigerenzer & Christoph Engel eds., 2006).
53. Christine Jolls & Cass R. Sunstein, Debiasing Through Law, J. LEGAL STUD. 199, 200–
02 (2006). A legal requirement that corporate boards include outside members is one example of
a debiasing strategy, as it tries to counteract confirmation bias.
54. Amos Tversky & Daniel Kahneman, Judgment Under Uncertainty: Heuristics and Biases,
185 SCI. 1124, 1127 (1974) [hereinafter Tversky & Kahneman, Judgment Under Uncertainty].
55. Charles G. Lord, Lee Ross & Mark R. Lepper, Biased Assimilation and Attitude
Polarization: The Effects of Prior Theories on Subsequently Considered Evidence, 37 J.
PERSONALITY & SOC. PSYCH. 2098, 2098 (1979).
56. Id.
57. Id. Some research even suggests that as humans acquire domain expertise, they can lose
flexibility with regard to problem solving, adaptation, and creative idea generation. Erik Dane,
Reconsidering the Trade-off Between Expertise and Flexibility: A Cognitive Entrenchment
Perspective, 35 ACAD. MGMT. REV. 579, 579 (2010).
58. Martin Baekgaard, Julian Christensen, Casper Mondrup Dahlmann, Asbjørn Mathiasen
& Niels Bjørn Grund Petersen, The Role of Evidence in Politics: Motivated Reasoning and
Persuasion Among Politicians, 49 BRIT. J. POL. SCI. 1117, 1124 (2019).
59. Id. at 1125.
60. Id. at 1127.
61. Tversky & Kahneman, Judgment Under Uncertainty, supra note 54, at 1128.
62. See id.
63. KENNETH A. KRIZ, ANCHORING AND ADJUSTMENT BIASES AND LOCAL GOVERNMENT
REFERENDA LANGUAGE 9, 14 (2014), https://www.ntanet.org/wp-content/uploads/proceedings/2014/
078-kriz-anchoring-adjustment-biases-local-government.pdf [https://perma.cc/E6FR-WDCT].
64. Cade Massey & George Wu, Detecting Regime Shifts: The Causes of Under- and
Overreaction, 51 MGMT. SCI. 932, 933 (2005).
65. Id. at 945; Mirko Kremer, Brent Moritz & Enno Siemsen, Demand Forecasting
Behavior: System Neglect and Change Detection, 57 MGMT. SCI. 1827, 1838 (2011).
in the first place.66 Other research shows that people are more likely to
recall the positive attributes of what they possess, focusing on reasons
to keep what they already have, while they are more likely to recall the
negative attributes of what they do not possess, focusing on the reasons
not to buy into change.67
A related behavioral tendency, known as loss aversion, also reinforces this bias toward the status quo. Humans dislike losses more than they like
corresponding gains.68 People also tend to disregard potential gains and
focus on the losses associated with an activity. Overall, they face
challenges in assessing risks, with difficulties in processing and
assigning meaning to probabilities, large numbers, and exponential
growth.69 Subtle changes in the framing of information can affect
people’s evaluation of risks—even ones that are quantitatively identical.
For example, if a health policy is framed in terms of number of lives
saved, people are more conservative and risk-averse; if the same policy
is framed in terms of number of lives lost, people are much more willing
to take risks to try to reduce that number.70
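To make this asymmetry concrete, consider a brief illustrative sketch (in Python) of the prospect-theory value function. The curvature and loss-aversion parameters used below (0.88 and 2.25) are commonly cited estimates from the behavioral-economics literature, and the function itself is ours, offered only as an illustration rather than as a representation of any particular study cited above.

```python
# Illustrative prospect-theory value function. The parameter values
# (curvature 0.88, loss-aversion coefficient 2.25) are commonly cited
# estimates from the behavioral-economics literature, used here solely
# to show the asymmetry between equivalent gains and losses.
def subjective_value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

print(round(subjective_value(100), 1))   # roughly 57.5: felt value of gaining $100
print(round(subjective_value(-100), 1))  # roughly -129.5: felt value of losing $100
# The loss is felt more than twice as intensely as the corresponding gain.
```

On these illustrative numbers, a loss looms more than twice as large as an equivalent gain—which helps explain why the loss framing of an otherwise identical policy can change people's choices.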
Practically speaking, these tendencies help explain why
“preventing losses . . . looms larger in government’s objective
function.”71 Governments are less likely to behave aggressively when
doing so would produce gains than when the same behavior might
66. Daniel Kahneman, Jack L. Knetsch & Richard H. Thaler, Anomalies: The Endowment
Effect, Loss Aversion, and Status Quo Bias, 5 J. ECON. PERSPS. 193, 196 (1991).
67. Michael A. Strahilevitz & George Loewenstein, The Effect of Ownership History on the
Valuation of Objects, 25 J. CONSUMER RSCH. 276, 285 (1998).
68. Daniel Kahneman & Amos Tversky, An Analysis of Decision Under Risk, 47
ECONOMETRICA 263, 266 (1979).
69. For a useful collection of essays on this general problem, see generally NUMBERS AND
NERVES: INFORMATION, EMOTION, AND MEANING IN A WORLD OF DATA (Scott Slovic & Paul
Slovic eds., 2015). People also tend to engage in hyperbolic discounting, preferring immediate
rewards to future ones of equal present value. See, e.g., J.D. Trout, The Psychology of
Discounting: A Policy of Balancing Biases, 21 PUB. AFF. Q. 201, 204 (2007); Jess Benhabib,
Alberto Bisin & Andrew Schotter, Present-Bias, Quasi-Hyperbolic Discounting, and Fixed Costs,
69 GAMES & ECON. BEHAV. 205, 222 (2010).
70. Alexander J. Rothman & Peter Salovey, Shaping Perceptions to Motivate Healthy
Behavior: The Role of Message Framing, 121 PSYCH. BULL. 3, 4–5 (1997).
71. Caroline Freund & Çağlar Özden, Trade Policy and Loss Aversion, 98 AM. ECON. REV.
1675, 1675 (2008); see also Robert Jervis, Political Implications of Loss Aversion, 13 POL. PSYCH.
187, 187 (1992) [hereinafter Jervis, Political Implications] (“People are loss-averse in the sense
that losses loom larger than the corresponding gains.”); Jean Galbraith, Treaty Options: Towards
a Behavioral Understanding of Treaty Design, 53 VA. J. INT’L L. 309, 350, 355 (2013) (“Individuals
tend to weigh losses more than gains in decision-making, and so may weigh the risks of switching
from a default option more heavily than the possible gains.”).
Racial and Gender Biases. As with the various cognitive biases noted
above, race and gender biases can affect human judgment—even without
conscious animus. Implicit biases are a “distorting lens that’s a product of
both the architecture of our brain and the disparities in our society.”81
Perceptions about race can be shaped by subtle cues that appear in
people’s surroundings. One study exposed adult subjects to a series of
flashes of light containing letters that were too rapid to be consciously
perceived.82 One group was exposed to flashes with words related to
crime, such as “arrest” and “shoot,” while the other group was exposed to
jumbled letters.83 But after these flashes, subjects were shown two human
faces simultaneously—one Black, one white. The subjects exposed to the
crime-related words spent more time staring at the Black face.84
In the context of the legal system, studies show evidence of racial bias
in the conduct of prosecutors in determining convictions85 and federal
sentences.86 Racial disparities have been identified as well in the decisions
of defense attorneys,87 police officers,88 judges,89 and juries.90 Similarly,
Restraint of Glasmann, 286 P.3d 673, 701–03 (Wash. 2012) (en banc) and State v. Robinson, No.
47398-1-I, 2002 WL 258038, at *3 (Wash. Ct. App. Feb. 25, 2002).
81. JENNIFER L. EBERHARDT, BIASED: UNCOVERING THE HIDDEN PREJUDICE THAT
SHAPES WHAT WE SEE, THINK, AND DO 6 (2019); see also O. Pascalis, L. S. Scott, D. J. Kelly, R. W.
Shannon, E. Nicholson, M. Coleman & C. A. Nelson, Plasticity of Face Processing in Infancy, 102 PROC.
NAT. ACAD. SCI. 5297, 5300 (2005) (“[E]xperience with faces early in life may influence and shape the
development of a face prototype. The development of this prototype leads to biases in discriminating own-
race and own-species faces compared with other-race and other-species faces.”).
82. EBERHARDT, supra note 81, at 58–60.
83. Id.
84. Id.
85. Carly W. Sloan, Racial Bias by Prosecutors: Evidence from Random Assignment 30
(2019) (unpublished manuscript) (on file with author).
86. M. Marit Rehavi & Sonja B. Starr, Racial Disparity in Federal Criminal Sentences, 122
J. POL. ECON. 1320, 1320 (2014).
87. See, e.g., David S. Abrams & Albert H. Yoon, The Luck of the Draw: Using Random Case
Assignment To Investigate Attorney Ability, 74 U. CHI. L. REV. 1145, 1145 (2007); see also Jeff Adachi,
Public Defenders Can Be Biased, Too, and It Hurts Their Non-White Clients, WASH. POST (June 7, 2016),
https://www.washingtonpost.com/posteverything/wp/2016/06/07/public-defenders-can-be-biased-too-and-
it-hurts-their-non-white-clients [https://perma.cc/QH3P-ZW8B] (“A public defender may try harder for a
client that he or she perceives as more educated or likely to be successful because of their race.”).
88. Kate Antonovics & Brian G. Knight, A New Look at Racial Profiling: Evidence from the
Boston Police Department, 91 REV. ECON. & STAT. 163, 163 (2009).
89. See Briggs Depew, Ozkan Eren & Naci Mocan, Judges, Juveniles, and In-Group Bias, 60 J.L. &
ECON. 209, 209 (2017) (finding evidence of negative in-group bias by judges sentencing juvenile offenders).
90. See Shamena Anwar, Patrick Bayer & Randi Hjalmarsson, The Impact of Jury Race in
Criminal Trials, 127 Q.J. ECON. 1017, 1017 (2012) (finding that all-white juries convict Black
defendants 16 percent more frequently than they convict white defendants).
C. Group Challenges
To these various problems and limitations of individual decision-
making can be added a series of distinctive pathologies associated with
group decision-making—the kind of decision-making that prevails
throughout much of government.95 The group setting does not
necessarily eliminate the problematic physical and cognitive features
that can make individual decision-making go awry. On the contrary, when individuals share a problematic tendency, research indicates that “groups exaggerate this tendency.”96 Moreover, the group setting adds social dynamics that
can create additional problems. It was far from accidental that Otto
91. See, e.g., RICHARD ROTHSTEIN, THE COLOR OF LAW: A FORGOTTEN HISTORY OF
HOW OUR GOVERNMENT SEGREGATED AMERICA 39 (2017) (explaining how, in the mid-
twentieth century especially, “federal, state, and local governments purposely created segregation
in every metropolitan area of the nation”); JESSICA TROUNSTINE, SEGREGATION BY DESIGN:
LOCAL POLITICS AND INEQUALITY IN AMERICAN CITIES 3 (2018) (noting how segregation
emerged from “local governments systematically institutionaliz[ing] discriminatory approaches
to the maintenance of housing values and production of public goods”).
92. The racial makeup of the heads of many administrative agencies has also failed to reflect
society’s racial makeup. Chris Brummer, What Do the Data Reveal About (the Absence of Black)
Financial Regulators? 8–9 (Brookings Econ. Stud., Working Paper, 2020), https://www.brook
ings.edu/research/what-do-the-data-reveal-about-the-absence-of-black-financial-regulators
[https://perma.cc/LZ4Q-4TUS].
93. U.S. GEN. ACCT. OFF., GAO/HRD-92-56, SOCIAL SECURITY: RACIAL DIFFERENCE IN
DISABILITY DECISIONS WARRANTS FURTHER INVESTIGATION 4 (1992); Erin M. Godtland,
Michele Grgich, Carol Dawn Petersen, Douglas M. Sloane & Ann T. Walker, Racial Disparities
in Federal Disability Benefits, 25 CONTEMP. ECON. POL’Y 27, 27 (2007).
94. See JILL A. FISHER, ADVERSE EVENTS: RACE, INEQUALITY, AND THE TESTING OF
NEW PHARMACEUTICALS 4 (2020).
95. See, e.g., David P. Redlawsk & Richard R. Lau, Behavioral Decision-Making, in
OXFORD HANDBOOK OF POLITICAL PSYCHOLOGY 1, 1–4 (Leonie Huddy, David O. Sears & Jack
S. Levy eds., 2d ed. 2013) (discussing the behavioral tendencies of voters’ decision-making).
96. Verlin B. Hinsz, R. Scott Tindale & David A. Vollrath, The Emerging Conceptualization
of Groups as Information Processors, 121 PSYCH. BULL. 43, 49 (1997).
97. See Paul C. Nutt, Surprising but True: Half the Decisions in Organizations Fail, 13 ACAD.
MGMT. EXEC. 75, 75 (1999).
98. See IRVING L. JANIS, VICTIMS OF GROUPTHINK 3 (1972). The term, coined in the spirit of George Orwell’s 1984, was first used to describe “rationalized conformity” in government
organizations. William H. Whyte, Jr., Groupthink, FORTUNE (1952), https://fortune.com/2012/
07/22/groupthink-fortune-1952 [https://perma.cc/JCN7-Z63E].
99. Whyte, Jr., supra note 98.
100. Id.
101. Id.
102. See IRVING L. JANIS, CRUCIAL DECISIONS: LEADERSHIP IN POLICYMAKING AND
CRISIS MANAGEMENT 47, 57–58 (1989); EM GRIFFIN, A FIRST LOOK AT COMMUNICATION THEORY
219–28 (1991). For related discussion, see REPORT BY THE PRESIDENTIAL COMMISSION ON THE
SPACE SHUTTLE CHALLENGER ACCIDENT 83–119 (1986), https://science.ksc.nasa.gov/shuttle/
missions/51-l/docs/rogers-commission/Rogers_Commission_Report_Vol1.pdf [https://perma.cc/
P29Y-SQ2D] (chronicling flawed group decision-making that led to the catastrophic launch of
the Challenger space shuttle) and RICHARD E. NEUSTADT & ERNEST R. MAY, THINKING IN
TIME: THE USES OF HISTORY FOR DECISION-MAKERS 32–33 (1986) (analyzing examples of
failed group decision-making across multiple federal administrations). We recognize, of course,
that groupthink may not always be the sole driver of organizational failure. Cf. DIANE VAUGHAN,
THE CHALLENGER LAUNCH DECISION: RISKY TECHNOLOGY, CULTURE, AND DEVIANCE AT
NASA 404 (2d ed. 2016) (arguing that “many of the elements of” failure in the Challenger tragedy
“have explanations that go beyond the assembled group to cultural and structural sources”). Even
Janis recognized that, in situations suffering from groupthink, “other causal factors” may well be
at play. JANIS, supra, at 275.
103. Richard Coker, Coronavirus Can Only Be Beaten If Groups Such as Sage Are
Transparent and Accountable, GUARDIAN (Apr. 27, 2020), https://www.theguardian.com/comm
entisfree/2020/apr/27/coronavirus-sage-scientific-groupthink [https://perma.cc/NNR4-K3DB]; see
also Howard Kunreuther & Paul Slovic, Learning from the COVID‐19 Pandemic to Address
Climate Change, MGMT. & BUS. REV. (Winter 2021), https://mbrjournal.com/2021/01/26/
learning-from-the-covid-19-pandemic-to-address-climate-change [https://perma.cc/52FV-EG3G]
(noting how a “tend[ency] to follow the herd, allowing [their] choices to be influenced by other
people’s behavior, especially when we feel uncertain,” influenced key decision-makers’ responses
to the COVID-19 pandemic).
104. See Tevi Troy, All the President’s Yes-Men, WALL ST. J. (Aug. 22, 2021), https://www.
wsj.com/articles/president-decision-making-biden-kennedy-johnson-taliban-afghanistan-
bay-of-pigs-vietnam-saigon-blinkin-sullivan-11629641380 [https://perma.cc/4Y7G-HTEU].
105. For a discussion of the lowest common denominator effect, see Cary Coglianese, Is
Consensus an Appropriate Basis for Regulatory Policy?, in ENVIRONMENTAL CONTRACTS:
COMPARATIVE APPROACHES TO REGULATORY INNOVATION IN THE UNITED STATES AND
EUROPE 93, 93–113 (Eric Orts & Kurt Deketelaere eds., 2001).
106. For ways that groups can fail by trying to make everyone happy, see Cary Coglianese, Is
Satisfaction Success? Evaluating Public Participation in Regulatory Policymaking, in THE
PROMISE AND PERFORMANCE OF ENVIRONMENTAL CONFLICT RESOLUTION 69, 69–70
(Rosemary O’Leary & Lisa Bingham eds., 2003).
107. Michael D. Cohen, James G. March & Johan P. Olsen, A Garbage Can Model of
Organizational Choice, 17 ADMIN. SCI. Q. 1, 1 (1972).
decisions. Ultimately, “the nature of the choice, the time [the group] takes,
and the problems it solves all depend on a relatively complicated
intermeshing of elements” within the organization.108
Preference Cycling. Aggregating preferences within groups can
also be relatively erratic. According to Arrow’s impossibility theorem,
when individual preferences are arrayed across more than a single
dimension, there may be no clear and stable way to aggregate
individual preferences without violating mathematical principles of
transitivity.109 In other words, although a majority of a group may favor option A over option B, and also favor option B over option C, the group may nonetheless choose C when faced with a choice between just A and C. Outcomes “cycle” because
the choice that satisfies a majority of group members’ preferences can
shift depending on the potentially arbitrary way that alternatives are
pitted against each other (A versus B, B versus C, or A versus C).110
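To see how such cycling arises, consider a minimal sketch (in Python) using three hypothetical voters; the orderings are invented solely to illustrate the intransitivity that Arrow's theorem contemplates.

```python
from itertools import combinations

# Hypothetical preference orderings for three voters over options A, B, and C,
# chosen to produce a Condorcet cycle (intransitive majority preferences).
ballots = [
    ["A", "B", "C"],  # Voter 1: A over B over C
    ["B", "C", "A"],  # Voter 2: B over C over A
    ["C", "A", "B"],  # Voter 3: C over A over B
]

def pairwise_winner(x, y):
    """Return whichever of x or y a majority of ballots ranks higher."""
    votes_for_x = sum(b.index(x) < b.index(y) for b in ballots)
    return x if votes_for_x > len(ballots) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y)}")

# Output:
#   A vs B: majority prefers A
#   A vs C: majority prefers C
#   B vs C: majority prefers B
# A beats B and B beats C, yet C beats A, so the collective "preference"
# cycles and the outcome turns on which pairing is put to a vote.
```

Which option prevails thus depends on the order in which alternatives are paired against each other—the agenda-dependence just described.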
Free Riding or Social Loafing. Members of a group can be less motivated when performing tasks alongside other group members. When individuals’ contributions are not individually identifiable, accountability and responsibility decline—a phenomenon known as social loafing. In one experiment, researchers found that participants asked to perform the physically exerting tasks of clapping and shouting put forth sizably less effort when performing in groups than when performing alone.111 This effect also has been documented in industrial production, bystander intervention, and participation in church activities.112 When individuals must work cooperatively to achieve collective action, they have an incentive to free ride on the efforts of others—which ultimately leads to an undersupply of needed collective goods.113
* * *
tedious, voluminous tasks and to parse through data to extract patterns. E.g., Coglianese & Ben
Dor, supra note 8, at 823–27; ENGSTROM ET AL., supra note 9, at 9–11; KEVIN C. DESOUZA,
ARTIFICIAL INTELLIGENCE IN THE PUBLIC SECTOR: A MATURITY MODEL 7–8 (2021),
https://www.businessofgovernment.org/sites/default/files/Artificial%20Intelligence%20in%20th
e%20Public%20Sector_0.pdf [https://perma.cc/2YJ4-7VLS].
126. See generally MICHAEL KEARNS & AARON ROTH, THE ETHICAL ALGORITHM: THE
SCIENCE OF SOCIALLY AWARE ALGORITHM DESIGN (2019) (discussing ways that digital science
can incorporate adherence to ethical principles into machine-learning technologies).
127. Coglianese & Lehr, Regulating by Robot, supra note 1, at 1156–57; Lehr & Ohm, supra
note 1, at 655.
128. Our discussion of machine learning here is, by necessity, both brief and basic, and
machine-learning algorithms can fall into additional categories, such as semi-supervised and
reinforcement learning algorithms.
129. Coglianese & Lehr, Regulating by Robot, supra note 1, at 1158 n.37.
130. Lehr & Ohm, supra note 1, at 676.
131. Typically, machine-learning analysis does not support causal claims. But sometimes it
can be incorporated into, and assist with, broader analysis of causal connections. For related
discussion, see Sendhil Mullainathan & Jann Spiess, Machine Learning: An Applied Econometric
Approach, 31 J. ECON. PERSPS. 87, 96 (2017).
132. Aditya Mishra, Metrics To Evaluate Your Machine Learning Algorithm, TOWARDS
DATA SCI. (Feb. 24, 2018), https://towardsdatascience.com/metrics-to-evaluate-your-machine-
learning-algorithm-f10ba6e38234 [https://perma.cc/LB64-8J4L].
133. Ricvan Dana Nindrea, Teguh Aryandono, Lutfan Lazuardi & Iwan Dwiprahasto,
Diagnostic Accuracy of Different Machine Learning Algorithms for Breast Cancer Risk
Calculation: A Meta-Analysis, 19 ASIAN PAC. J. CANCER PREVENTION 1747, 1747 (2018),
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6165638/pdf/APJCP-19-1747.pdf [https://perma.cc/
Q3KU-Z7JZ] (finding that an algorithm known as the Support Vector Machine was superior in its
forecasting ability).
134. Amanda Frost, Overvaluing Uniformity, 94 VA. L. REV. 1567, 1568–69 (2008).
135. We are assuming here digital algorithms that do not have stochasticity—or
randomness—deliberately programmed into them. See generally James C. Spall, Stochastic
Optimization, in HANDBOOK OF COMPUTATIONAL STATISTICS: CONCEPTS AND METHODS 173
(James E. Gentle, Wolfgang Karl Härdle & Yuichi Mori eds., 2d ed. 2012).
136. Yash Raj Shrestha, Shiko M. Ben-Menahem & Georg von Krogh, Organizational
Decision-Making Structures in the Age of Artificial Intelligence, 61 CAL. MGMT. REV. 66, 68, 70
(2019).
137. FOOD & DRUG ADMIN., FDA COMMISSIONER’S FELLOWSHIP PROGRAM (2011),
https://www.fda.gov/media/83569/download [https://perma.cc/8K9A-K337].
138. STAFFS OF THE CFTC & SEC, FINDINGS REGARDING THE MARKET EVENTS OF MAY 6, 2010,
at 2–3 (2010), https://www.sec.gov/news/studies/2010/marketevents-report.pdf [https://perma.cc/MB6Z-
R3CD].
139. See Cary Coglianese, Optimizing Regulation for an Optimizing Economy, 4 J.L. & PUB.
AFFS. 1, 1–2 (2018) [hereinafter Coglianese, Optimizing].
140. Håkon Hapnes Strand, How Do Machine Learning Algorithms Handle Such Large Amounts
Of Data?, FORBES (Apr. 10, 2018), https://www.forbes.com/sites/quora/2018/04/10/how-do-machine-
learning-algorithms-handle-such-large-amounts-of-data/ [https://perma.cc/FVU7-VMDA].
141. DAVID DEBARR & MAURY HARWOOD, IRS, RELATIONAL MINING FOR COMPLIANCE
RISK 177–78 (2004), https://www.irs.gov/pub/irs-soi/04debarr.pdf [https://perma.cc/UA6X-QJRZ].
142. Jory Heckman, How GSA Turned an Automation Project into an Acquisition Time-Saver, FED.
NEWS NETWORK (Mar. 29, 2018), https://federalnewsnetwork.com/technology-main/2018/03/how-gsa-
turned-an-automation-project-into-a-acquisition-time-saver [https://perma.cc/H9LW-L5N2].
143. U.S. PAT. & TRADEMARK OFF., FY 2019 UNITED STATES PATENT AND TRADEMARK
OFFICE PERFORMANCE AND ACCOUNTABILITY REPORT 20 (2020), https://www.uspto.gov/sites/
default/files/documents/USPTOFY19PAR.pdf [https://perma.cc/2UMX-86ZG]; Lea Helmers,
Franziska Horn, Franziska Biegler, Tim Oppermann & Klaus-Robert Müller, Automating the
Search for a Patent’s Prior Art with a Full Text Similarity Search, PLOS ONE 1, 1 (Mar. 4, 2019).
144. ENGSTROM ET AL., supra note 9, at 59–60; David A. Bray, An Update on the Volume of
Open Internet Comments Submitted to the FCC, FED. COMMC’NS COMM’N (Sept. 17, 2014),
https://www.fcc.gov/news-events/blog/2014/09/17/update-volume-open-internet-comments-subm
itted-fcc [https://perma.cc/ZH58-UEQC].
145. For an overview of the relative advantages of digital algorithms, see generally AJAY
AGRAWAL, JOSHUA GANS & AVI GOLDFARB, PREDICTION MACHINES: THE SIMPLE
ECONOMICS OF ARTIFICIAL INTELLIGENCE (2018).
146. E.g., Soham Banerjee, Pradeep Kumar Singh & Jaya Bajpai, A Comparative Study on
Decision-Making Capability Between Human and Artificial Intelligence, in 652 NATURE INSPIRED
COMPUTING 203, 209 (Bijaya Ketan Panigrahi, M.N. Hoda, Vinod Sharma & Shivendra Goel eds.,
2018).
147. See MAX TEGMARK, LIFE 3.0: BEING HUMAN IN THE AGE OF ARTIFICIAL
INTELLIGENCE 105–06 (2017) (discussing the information processing advantages that digital
algorithms hold over human judges).
148. Admittedly, this consistency also leads to a concern about digital algorithms: if they are
wrongly designed, they can put in place flaws or biases that will then apply across all cases, as
opposed to just some, as with an inconsistently distributed system dependent on human decision-
makers. Consistency, in other words, is of little virtue if it only leads to ineffectual or problematic
results delivered consistently. Yet if there exist some humans who can make accurate and unbiased
decisions in a given context, that itself provides reason to think that humans can design digital
systems to yield results that are both high quality and consistent. The key is ensuring that the human
decision-makers who design digital algorithmic systems are smart and make high quality decisions
about the design and operation of digital algorithms. In much the same way, a system that uses a
consistent approach may also be easier to modify and fix when errors or biases arise.
149. But new research seems continually to draw into question such claims about the inherent
superiority of humans at given tasks. Development of “neuromorphic” hardware that mimics the
human brain is starting to run brain-like software. Sara Reardon, Artificial Neurons Compute
Faster Than the Human Brain, NATURE (Jan. 26, 2018), https://www.nature.com/articles/d41586-
018-01290-0 [https://perma.cc/5H9N-56FF].
150. P’SHIP FOR PUB. SERV. & IBM CTR. FOR BUS. GOV’T, THE FUTURE HAS BEGUN:
USING ARTIFICIAL INTELLIGENCE TO TRANSFORM GOVERNMENT 8 (2018), https://ourpub
licservice.org/wp-content/uploads/2018/01/0c1b8914d59b94dc0a5115b739376c90-1515436519.pdf
[https://perma.cc/6M24-EJHR] [hereinafter THE FUTURE HAS BEGUN].
151. Automated Coding of Injury and Illness Data, U.S. BUREAU OF LAB. STAT. (Sept. 21,
2020), https://www.bls.gov/iif/autocoding.htm [https://perma.cc/CJX6-ZRRN]; see also THE
FUTURE HAS BEGUN, supra note 150 (discussing BLS reliance on AI to assist with coding data).
152. See ENGSTROM ET AL., supra note 9, at 854 (“Managed well, algorithmic governance
tools can modernize public administration, promoting more efficient, accurate, and equitable
forms of state action.”).
153. DANIEL KAHNEMAN, OLIVIER SIBONY & CASS R. SUNSTEIN, NOISE: A FLAW IN
HUMAN JUDGMENT 336 (2021) (“A great deal of evidence suggests that algorithms can
outperform human beings on whatever combination of criteria we select.”).
154. See, e.g., David Silver et al., Mastering the Game of Go with Deep Neural Networks and
Tree Search, 529 NATURE 484, 488 (2016) (reporting that the AlphaGo computer program beat a
human champion in five straight games).
155. Philipp Tschandl et al., Comparison of the Accuracy of Human Readers Versus Machine-
Learning Algorithms for Pigmented Skin Lesion Classification: An Open, Web-Based, International,
Diagnostic Study, 20 LANCET ONCOLOGY 938, 943 (2019). But see Taku Harada et al., A Perspective
from a Case Conference on Comparing the Diagnostic Process: Human Diagnostic Thinking vs.
Artificial Intelligence (AI) Decision Support Tools, INT’L J. ENV’T RSCH. & PUB. HEALTH (2020),
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7504543 [https://perma.cc/2ZDD-RJX4].
156. Susan Wharton Gates, Vanessa Gail Perry & Peter M. Zorn, Automated Underwriting
in Mortgage Lending: Good News for the Underserved?, 13 HOUS. POL’Y DEBATE 369, 370 (2002).
157. Hamsa Bastani, Kimon Drakopoulos, Vishal Gupta, Jon Vlachogiannis, Christos
Hadjicristodoulou, Pagona Lagiou, Gkikas Magiorkinis, Dimitrios Paraskevis & Sotirios
Tsiodras, Efficient and Targeted COVID-19 Border Testing Via Reinforcement Learning, 599
NATURE 108, 108 (2021).
158. Id.
159. Miyuki Hino, Elinor Benami & Nina Brooks, Machine Learning for Environmental
Monitoring, 1 NATURE SUSTAINABILITY 583, 583–84 (2018).
160. Id.
161. Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig & Sendhil
Mullainathan, Human Decisions and Machine Predictions, 133 Q.J. ECON. 237, 241 (2017).
162. Id.
163. Richard A. Berk, Susan B. Sorenson & Geoffrey Barnes, Forecasting Domestic Violence:
A Machine Learning Approach To Help Inform Arraignment Decisions, 13 J. EMPIRICAL L.
STUD. 94, 105 (2016).
164. E.g., P’SHIP FOR PUB. SERV. & IBM CTR. FOR BUS. GOV’T, MORE THAN MEETS AI:
ASSESSING THE IMPACT OF ARTIFICIAL INTELLIGENCE ON THE WORK OF GOVERNMENT 3 (Feb.
27, 2019), https://ourpublicservice.org/wp-content/uploads/2019/02/More-Than-Meets-AI.pdf
[https://perma.cc/3PW3-8EVZ]; Emma Martinho-Truswell, How AI Could Help the Public
Sector, HARV. BUS. REV. (Jan. 26, 2018), https://hbr.org/2018/01/how-ai-could-help-the-public-
sector [https://perma.cc/XU7N-PJS7].
165. See supra notes 10–12 and accompanying text.
166. For a discussion of some of these developments, see Coglianese & Lehr, Transparency,
supra note 1, at 50–55.
167. See Sendhil Mullainathan, Biased Algorithms Are Easier To Fix than Biased People,
N.Y. TIMES (Dec. 6, 2019), https://www.nytimes.com/2019/12/06/business/algorithm-bias-fix.html
[https://perma.cc/F2L8-Z69D] (“Humans are inscrutable in a way that algorithms are not. Our
explanations for our behavior are shifting and constructed after the fact.”); John Zerilli, Alistair
Knott, James Maclaurin & Colin Gavaghan, Transparency in Algorithmic and Human Decision-
Making: Is There a Double Standard?, 32 PHIL. & TECH. 661, 663 (2019) (“[M]uch human
decision-making is fraught with transparency problems . . . .”). Michael Lewis has tellingly
compared the use, in response to pandemics, of computer-based disease models to human
judgment by experts, observing that the latter have implicitly “used models” too. LEWIS, supra
note 21, at 85. He has aptly noted that the experts relied on models or
abstractions to inform their judgments. Those abstractions just happened to be inside
their heads. Experts took the models in their minds as the essence of reality, but the
biggest difference between their models and the ones inside the computer was that their
models were less explicit and harder to check. Experts made all sorts of assumptions
about the world, just as computer models did, but those assumptions were invisible.
Id.
168. Jay Hegdé & Evgeniy Bart, Making Expert Decisions Easier To Fathom: On the
Explainability of Visual Object Recognition Expertise, FRONTIERS NEUROSCIENCE (Oct. 12,
2018), https://www.frontiersin.org/articles/10.3389/fnins.2018.00670/full [https://perma.cc/K55G-
WHEZ].
169. See WILLIAM TWINING, KARL LLEWELLYN AND THE REALIST MOVEMENT 229–31
(1973).
170. Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan & Cass R. Sunstein, Algorithms as
Discrimination Detectors, 117 PROC. NAT’L ACAD. SCI. 30096, 30097 (2020).
171. Id.; see also id. at 30100 (“It is tempting to think that human decision making is
transparent and that algorithms are opaque . . . [, but] the opposite is true—or could be true.”).
One reason why digital algorithms can fare better in avoiding bias
is that they demand the centralized compilation of large volumes of
data. As a result, the use of digital algorithms necessarily brings
with it the information needed to detect unwanted
biases.172 By comparison, governmental processes that depend on a
distributed series of one-off decisions by different humans may never
even produce the kind of aggregate data that would make unwanted
disparate treatment visible. It is typically only with big data of the kind
that fuels machine-learning algorithms that researchers can even ferret
out the discrimination that humans perpetrate.
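To make the point concrete, the following minimal sketch (written in Python, with invented data and hypothetical column names) illustrates the kind of disparity check that becomes trivial once an algorithm's decisions are logged centrally, but that is rarely even possible when decisions are scattered across many individual human decision-makers:

```python
import pandas as pd

# Hypothetical log of an algorithm's decisions; the column names and values
# are invented purely for illustration.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   0,   0,   1,   0],
})

# Approval rate for each group, computed from the centralized decision log.
rates = decisions.groupby("group")["approved"].mean()

# A simple disparate-impact ratio: lowest group rate divided by highest.
# (The familiar "four-fifths" rule of thumb flags ratios below 0.8 for review.)
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {impact_ratio:.2f}")
```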
Another reason digital algorithms fare better than human
algorithms when it comes to bias is that, once bias is detected (whether
in humans or machines), the digital algorithms can be easier to debias.
Debiasing humans, after all, can be quite difficult.173 By contrast, with
digital algorithms it will always be possible in principle to make
mathematical adjustments that reduce unwanted biases. These
adjustments can even be made while avoiding unlawful forms of
“reverse discrimination.”
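The nature of such a mathematical adjustment can be illustrated with a minimal sketch (in Python, using invented scores and a post-processing technique chosen purely for illustration; whether any particular adjustment is lawful in a given setting is a separate question). The point is simply that the adjustment takes the form of explicit, reviewable code rather than an effort to change hearts and minds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented risk scores produced by a model for two groups of applicants.
scores = {"A": rng.uniform(size=1000), "B": 0.8 * rng.uniform(size=1000)}

# Desired selection rate to be applied uniformly across groups.
target_rate = 0.30

# Choose, for each group, the score cutoff that yields the target rate.
# This is the "mathematical adjustment": an explicit, auditable line of code.
thresholds = {g: np.quantile(s, 1 - target_rate) for g, s in scores.items()}

for g, s in scores.items():
    selection_rate = (s >= thresholds[g]).mean()
    print(f"Group {g}: cutoff = {thresholds[g]:.3f}, selection rate = {selection_rate:.2%}")
```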
Overall, digital algorithms can outperform human algorithms,
exhibiting positive qualities such as accuracy, consistency, speed, and
productivity. And even with respect to negative concerns, such as
opacity and bias, digital algorithms may again fare much better than
humans, even if they are not altogether perfect or error-free.
172. Id. at 30098 (“[Digital] algorithms . . . have the potential to become a force for social
justice by serving as powerful detectors of human discrimination.”).
173. See, e.g., Mullainathan, supra note 167 (“Changing people’s hearts and minds is no
simple matter.”); Edward H. Chang, Katherine L. Milkman, Dena M. Gromet, Robert W. Rebele,
Cade Massey, Angela L. Duckworth & Adam M. Grant, The Mixed Effects of Online Diversity
Training, 116 PROC. NAT’L ACAD. SCI. 7778, 7781 (2019) (finding modest effects at best from
diversity training, but with no effects on the individuals that “policymakers typically hope to
influence most with such interventions”). The difficulty in eliminating bias from humans should
be evident from, if nothing else, the persistence of racist and misogynistic beliefs and outcomes in
society.
174. Laurel Wamsley, Stanford Apologizes After Vaccine Allocation Leaves Out Nearly All
Medical Residents, NPR (Dec. 18, 2020, 8:04 PM), https://www.npr.org/sections/coronavirus-live-
updates/2020/12/18/948176807/stanford-apologizes-after-vaccine-allocation-leaves-out-nearly-all
-medical-resid [https://perma.cc/7UV4-2AWP].
175. Id.
176. Michele Gilman, AI Algorithms Intended To Root Out Welfare Fraud Often End Up
Punishing the Poor Instead, CONVERSATION (Feb. 14, 2020, 8:45 AM), https://theconversation
.com/ai-algorithms-intended-to-root-out-welfare-fraud-often-end-up-punishing-the-poor-instead-1316
25 [https://perma.cc/LRS5-KCY4].
177. Allie Gross, Update: UIA Lawsuit Shows How the State Criminalizes the Unemployed,
DET. METRO TIMES, https://www.metrotimes.com/news-hits/archives/2015/10/05/uia-lawsuit-
shows-how-the-state-criminalizes-the-unemployed [https://perma.cc/T77R-77TR] (last updated
Oct. 5, 2015, 12:06 PM); Jonathan Oosting, Michigan Refunds $21M in False Jobless Fraud Claims,
DET. NEWS (Aug. 11, 2017, 2:00 PM), https://www.detroitnews.com/story/news/politics/2017/
08/11/michigan-unemployment-fraud/104501978/ [https://perma.cc/T3B5-7BZK].
178. Sarah Cwiek, State Review: 93% of State Unemployment Fraud Findings Were Wrong,
MICH. RADIO (Dec. 16, 2016, 6:03 PM), https://www.michiganradio.org/politics-government/2016-
12-16/state-review-93-of-state-unemployment-fraud-findings-were-wrong [https://perma.cc/VT38-
9U6Z]. Controversy also emerged in recent years over an automated fraud detection system in
Australia. Luke Henriques-Gomes, Robodebt Class Action: Coalition Agrees To Pay $1.2bn To
Settle Lawsuit, GUARDIAN (Nov. 16, 2020, 4:42 AM), https://www.theguardian.com/australia-
news/2020/nov/16/robodebt-class-action-coalition-agrees-to-pay-12bn-to-settle-lawsuit [https://perma.
cc/33V5-JJZK]. And in the Netherlands in 2020, a court ruled that a digital system used to detect
fraud in social benefits claims violated the European Convention on Human Rights. Rb. Den
Haag 2 mei 2020, ECLI:NL:RBDHA:2020:865 (NJCM/Netherlands) (Neth.), ¶ 6.7.
179. Rachel Metz, Facial Recognition Tech Has Been Widely Used Across the US Government
for Years, a New Report Shows, CNN BUS., https://www.cnn.com/2021/06/30/tech/government-
facial-recognition-use-gao-report/index.html [https://perma.cc/DFQ8-SQ5M] (last updated June
30, 2021, 1:15 PM).
180. NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software, NAT’L
INST. OF STANDARDS & TECH. (Dec. 19, 2019), https://www.nist.gov/news-events/news/2019/
12/nist-study-evaluates-effects-race-age-sex-face-recognition-software [https://perma.cc/LR2U-
WBL2]; see also Brian Fung, Facial Recognition Systems Show Rampant Racial Bias, Government
Study Finds, CNN BUS., https://www.cnn.com/2019/12/19/tech/facial-recognition-study-racial-
bias/index.html [https://perma.cc/6L8R-9Z9C] (last updated Dec. 19, 2019, 6:37 PM). In noting
these important concerns about bias with facial recognition algorithms, we do not overlook the
limitations and biases involved in relying on human recognition and recall. See generally SEAN M.
LANE & KATE A. HOUSTON, UNDERSTANDING EYEWITNESS MEMORY: THEORY AND
APPLICATIONS (2021).
181. OFQUAL, AWARDING GCSE, AS, A LEVEL, ADVANCED EXTENSION AWARDS AND
EXTENDED PROJECT QUALIFICATIONS IN SUMMER 2020: INTERIM REPORT 11–12 (2020) (U.K.),
https://www.gov.uk/government/publications/awarding-gcse-as-a-levels-in-summer-2020-int
erim-report [https://perma.cc/T9LN-VM5M].
182. See Richard Adams, Sally Weale & Caelainn Barr, A-Level Results: Almost 40% of
Teacher Assessments in England Downgraded, GUARDIAN (Aug. 13, 2020, 6:39 AM),
https://www.theguardian.com/education/2020/aug/13/almost-40-of-english-students-have-a-level-
results-downgraded [https://perma.cc/D6Q6-5FBZ].
183. See Adam Satariano, British Grading Debacle Shows Pitfalls of Automating
Government, N.Y. TIMES (Aug. 20, 2020), https://www.nytimes.com/2020/08/20/world/europe/uk-
england-grading-algorithm.html [https://perma.cc/K2X8-XLUV].
184. In 2017, the city of Boston sought to reconfigure its school bus schedules using a digital
algorithm aimed at improving the “sleep health of high school kids, getting elementary school
kids home before dark, supporting kids with special needs, lowering costs, and increasing equity
overall.” Joi Ito, What the Boston School Bus Schedule Can Teach Us About AI, WIRED (Nov. 5,
2018, 8:00 AM), https://www.wired.com/story/joi-ito-ai-and-bus-routes [https://perma.cc/H83T-
FYDH]. But its initial plan was met with resistance by many angry parents who preferred the
status quo—suggesting that better communication and engagement may have helped. E.g., id.;
Ellen P. Goodman, Smart Algorithmic Change Requires a Collaborative Political Process, REG.
REV. (Feb. 12, 2019), https://www.theregreview.org/2019/02/12/goodman-smart-algorithmic-
change-requires-collaborative-political-process [https://perma.cc/V36K-QY8M]. Although the
city dropped its most ambitious plan to change bus schedules, it nevertheless used digital
algorithms to optimize school bus routes, which reduced vehicle emissions and fuel costs
considerably. Sean Fleming, This US City Put an Algorithm in Charge of Its School Bus Routes
and Saved $5 Million, WORLD ECON. F. (Aug. 22, 2019), https://www.weforum.org/agenda/2019/
08/this-us-city-put-an-algorithm-in-charge-of-its-school-bus-routes-and-saved-5-million [https://
perma.cc/PL6W-L98E].
185. Louise Amoore, Why ‘Ditch the Algorithm’ Is the Future of Political Protest, GUARDIAN
(Aug. 19, 2020, 6:47 AM), https://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-
algorithm-generation-students-a-levels-politics [https://perma.cc/LNX5-2GZR].
Just as humans can fail when performing a task that calls for purely human judgment, due to the limitations noted in
Part I, they can also fail when making human judgments about the
design and use of digital algorithms. The key is for humans to engage
in smart decision-making about when and how to deploy digital
algorithms.
186. The overall need for care in choosing to digitize a governmental process is basically the
same as that needed when making any decision to redesign a process. See Cary Coglianese, Process
Choice, 5 REGUL. & GOVERNANCE 250, 255–57 (2011) (noting that, just as substantive choices
about regulations need analysis, so too do choices about process). See generally CARY
COGLIANESE, ORG. FOR ECON. COOP. & DEV., MEASURING REGULATORY PERFORMANCE:
EVALUATING THE IMPACT OF REGULATION AND REGULATORY POLICY (2012) [hereinafter
COGLIANESE, MEASURING REGULATORY PERFORMANCE], https://www.oecd.org/gov/regulatory-
policy/1_coglianese%20web.pdf [https://perma.cc/7VC7-4B9E] (showing how regulatory procedures
and processes can be evaluated empirically).
187. In still other cases, systems in which humans work in collaboration with digital
tools may well prove optimal. For presentation purposes, this Article has been framed
around a binary choice between human algorithms and digital algorithms; however, the best
option in some cases might involve a combination of the two. Cf. Tim Wu, Will Artificial
Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems, 119 COLUM. L. REV. 2001,
2026–28 (2019). The decision framework and factors presented throughout Part III could in
principle be applied just as well to any option involving a hybrid system of human–machine
collaboration.
188. That is, digital algorithms “can be far less imperfect than noisy and often-biased human
judgment.” KAHNEMAN, SIBONY & SUNSTEIN, supra note 153, at 337.
189. Mathews v. Eldridge, 424 U.S. 319 (1976).
190. See id. at 333–35.
193. Even with respect to other issues, agencies do not always have enough information to
monetize all benefits and costs. See, e.g., Michigan v. EPA, 576 U.S. 743, 759 (2015) (stating that
an agency is not required to “conduct a formal cost-benefit analysis in which each advantage and
disadvantage is assigned a monetary value”); Amy Sinden, Formality and Informality in Cost-
Benefit Analysis, 2015 UTAH L. REV. 93, 101.
194. See generally ARTHUR M. OKUN, EQUALITY AND EFFICIENCY: THE BIG TRADEOFF
(1975) (addressing the tension between equality and efficiency).
195. Sometimes this is referred to as multigoal analysis. DAVID L. WEIMER & AIDAN R.
VINING, POLICY ANALYSIS: CONCEPTS AND PRACTICE 355 (6th ed. 2017). For a brief
introduction to methods of analyzing outcomes using criteria that cannot be converted into a
common metric, see id. at 352–58. A branch within the field of operations research provides a
suite of sophisticated mathematical tools that can be used in conducting multicriteria decision
analysis. For perspectives on this analytic approach, see generally RALPH L. KEENEY & HOWARD
RAIFFA, DECISIONS WITH MULTIPLE OBJECTIVES: PREFERENCES AND VALUE TRADEOFFS
(1993) and MURAT KÖKSALAN, JYRKI WALLENIUS & STANLEY ZIONTS, MULTIPLE CRITERIA
DECISION MAKING: FROM EARLY HISTORY TO THE 21ST CENTURY (2011).
196. See, e.g., WEIMER & VINING, supra note 195, at 352–53 (discussing qualitative benefit-
cost analysis); Sinden, supra note 193, at 107–29 (discussing differences between hard and soft, or
formal and informal, benefit-cost analysis).
197. In drawing upon such a qualitative scalar rating, it is important for decision-makers to
use caution. Rather than relying simply on a summing up of the ratings, a decision-maker needs
to consider the evidence fully and engage in sustained reasoning about each option. Not every
criterion will deserve equal weight, as a simple summation of ratings would imply.
Furthermore, the uniform distances between points on a rating scale are unlikely to reflect fully
the true differences between the strengths and weaknesses of the options.
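To illustrate with invented ratings and weights, a minimal sketch (in Python, with all numbers hypothetical) shows how a naive summation can make two options look equivalent even when a reasoned weighting of the criteria would clearly favor one of them:

```python
# Invented 1-to-5 ratings of two options against three criteria.
ratings = {
    "machine_learning": {"accuracy": 5, "cost": 2, "privacy": 3},
    "status_quo":       {"accuracy": 3, "cost": 4, "privacy": 3},
}

# Naive approach: treat every criterion equally and simply sum the ratings.
naive = {option: sum(r.values()) for option, r in ratings.items()}

# Weighted approach: weights reflect a considered judgment of relative importance.
weights = {"accuracy": 0.6, "cost": 0.1, "privacy": 0.3}
weighted = {
    option: sum(weights[c] * score for c, score in r.items())
    for option, r in ratings.items()
}

print("Unweighted sums:", naive)     # both options total 10
print("Weighted scores:", weighted)  # the options now diverge
```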
198. With respect to choosing whether to use machine learning, a multicriteria framework
can be used at different stages of the development process when different information is available.
That is, it can be used at the outset in deciding whether an agency should even invest in the
development of a machine-learning-based system, as well as later, once such a system has been
developed, in deciding whether to deploy it. It can also provide a basis for subsequent
evaluation of the system in operation and for decisions about future modifications of the
system.
199. The latter use is a hypothetical discussed at length in Coglianese & Lehr, Transparency,
supra note 1, at 10, 17, 52–53.
200. For discussion on which this section draws, see generally CARY COGLIANESE, A
FRAMEWORK FOR GOVERNMENTAL USE OF MACHINE LEARNING 66–72 (2020), https://www.
acus.gov/sites/default/files/documents/Coglianese%20ACUS%20Final%20Report.pdf [https://perma.
cc/CW3H-WUFP] and Cary Coglianese & Alicia Lai, Assessing Automated Administration, in
OXFORD HANDBOOK FOR AI GOVERNANCE (Justin Bullock et al. eds., forthcoming 2022). For
a related discussion of issues for government agencies to consider when seeking to use AI tools
successfully, see Desouza, supra note 125, at 11–18.
201. Coglianese, Optimizing, supra note 139, at 10; see also Shelly Hagan, More Robots Mean
120 Million Workers Need To Be Retrained, BLOOMBERG (Sept. 6, 2019, 12:00 AM),
https://www.bloomberg.com/news/articles/2019-09-06/robots-displacing-jobs-means-120-million-
workers-need-retraining [https://perma.cc/ALN6-7XMC] (noting that AI advancements will
require upskilling workers amid an existing talent shortage). Furthermore, the process of public
sector hiring can be slow. Eric Katz, The Federal Government Has Gotten Slower at Hiring New
Employees for Five Consecutive Years, GOV’T EXEC. (Mar. 1, 2018), https://www.govexec.com/
management/2018/03/federal-government-has-gotten-slower-hiring-new-employees-five-consecutive-
years/146348 [https://perma.cc/AAD6-RQ54].
202. There are some positive indications. Under the Foundations for Evidence-Based
Policymaking Act, signed into law in 2019, agencies must appoint “Chief Data Officers” and
“Evaluation Officers” to understand and promote data, setting the stage for AI. Foundations for
Evidence-Based Policymaking Act of 2018, Pub. L. No. 115-435, §§ 313, 3520(c), 132 Stat. 5529,
5531, 5541–42 (2019).
203. Cf. Ian Sample, Google Boss Warns of ‘Forgotten Century’ with Email and Photos at
Risk, GUARDIAN (Feb. 13, 2015, 4:16 AM), https://www.theguardian.com/technology/2015/feb/
13/google-boss-warns-forgotten-century-email-photos-vint-cerf [https://perma.cc/6GZN-YK45]
(describing the risks posed by obsolescence of digital storage technologies).
204. See, e.g., OFF. OF THE INSPECTOR GEN., U.S. OFF. OF PERS. MGMT., SEMIANNUAL
REPORT TO CONGRESS 8 (2019), https://www.opm.gov/news/reports-publications/semi-annual-
209. For helpful discussion of various options, see Mayson, supra note 8, at 2233–35.
210. In human decision-making systems, the existence of such tradeoffs may be obscured and
their resolution effectuated through what Cass Sunstein has called “incompletely theorized
agreements.” Cass R. Sunstein, Incompletely Theorized Agreements, 108 HARV. L. REV. 1733,
1735 (1995). But machine-learning algorithms demand more than incomplete agreements about,
for example, what counts as “reasonable.” They need the value choices reflected in the algorithm’s
objective to be stated with mathematical precision.
211. By presidential order, executive agencies are instructed that, when issuing regulations,
they “shall, to the extent feasible, specify performance objectives, rather than specifying the
behavior or manner of compliance that regulated entities must adopt.” Exec. Order No. 12,866,
§ l(b)(8), 58 Fed. Reg. 51,735, 51,736 (Oct. 4, 1993).
212. Cary Coglianese, The Limits of Performance-Based Regulation, 50 U. MICH. J.L.
REFORM 525, 562 (2017).
217. 8 C.F.R. § 1208.13(b) (2021); see also 8 U.S.C. § 1101(a)(42) (specifying asylum
qualification based on “a well-founded fear of persecution on account of race, religion,
nationality, membership in a particular social group, or political opinion”).
218. Blanco De Belbruno v. Ashcroft, 362 F.3d 272, 284 (4th Cir. 2004).
219. INS v. Cardoza-Fonseca, 480 U.S. 421, 448 (1987) (“[A] term like ‘well-founded fear’ . . .
can only be given concrete meaning through a process of case-by-case adjudication.”).
220. See Christopher Rigano, Using Artificial Intelligence To Address Criminal Justice Needs,
NAT’L INST. OF JUST. (Oct. 8, 2018), https://nij.ojp.gov/topics/articles/using-artificial-intelligence-
address-criminal-justice-needs [https://perma.cc/PD2L-WTHD].
221. Cf. Gary Marcus & Ernest Davis, A.I. Is Harder Than You Think, N.Y. TIMES (May 18,
2018), https://www.nytimes.com/2018/05/18/opinion/artificial-intelligence-challenges.html
[https://perma.cc/9AGR-UHVV] (“No matter how much data you have and how many patterns
you discern, your data will never match the creativity of human beings or the fluidity of the real
world.”). For an earlier philosophical discussion, see HUBERT L. DREYFUS, WHAT COMPUTERS
STILL CAN’T DO: A CRITIQUE OF ARTIFICIAL REASON (MIT Press rev. ed. 1992) (1972).
222. M.L. CUMMINGS, WOMEN CORP. DIRS., THE SURPRISING BRITTLENESS OF AI 2 (2020),
https://www.womencorporatedirectors.org/WCD/News/JAN-Feb2020/Reality%20Light.pdf
[https://perma.cc/LU5C-R5TH].
223. ROBIN HOGARTH, ON COCONUTS IN FOGGY MINE-FIELDS: AN APPROACH TO STUDYING
FUTURE-CHOICE DECISIONS 6 (2008), https://www.researchgate.net/publication/228499901_On_C
oconuts_in_Foggy_Mine-Fields_An_approach_to_studying_future-choice_decisions [https://perma.
cc/FDS7-WLC5].
the status quo and that it will be worth taking further steps to assess
the possibility of deploying an algorithmic system.
Performance in Improving Outcomes. The next step, after
determining whether the necessary preconditions for a machine-learning
option can be satisfied, is to assess a digital system’s likely performance
in improving outcomes. This is the ultimate test for machine learning:
how it performs compared to the status quo.
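One rough way to operationalize that head-to-head comparison is sketched below (in Python, with entirely hypothetical data and a generic classifier standing in for whatever model an agency might build): the candidate model and the recorded human decisions are each scored against the same known outcomes on held-out historical cases. Any real benchmarking exercise would, of course, be considerably more elaborate.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Entirely hypothetical historical cases: case features, true outcomes, and
# the decision a human adjudicator actually rendered in each case.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
human_decision = (X[:, 0] + rng.normal(scale=1.5, size=2000) > 0).astype(int)

# Hold out a set of cases so that the model and the recorded human decisions
# can both be scored against the same known outcomes.
X_tr, X_te, y_tr, y_te, _, human_te = train_test_split(
    X, y, human_decision, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

print("Model accuracy on held-out cases:", accuracy_score(y_te, model.predict(X_te)))
print("Human accuracy on the same cases:", accuracy_score(y_te, human_te))
```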
As Part I makes clear, the human status quo leaves plenty of room
for improvement. Whether a machine-learning system is realistically
expected to fare better will constitute a centerpiece of any multicriteria
analysis aimed at deciding whether to adopt machine learning. The
precise definition of “better” will need to be informed by each specific
task, whether that task involves forecasting the weather, identifying tax
fraud, or determining eligibility for licenses or benefits. Although the
specific relevant criteria will vary across different uses, it is possible to
identify three general types of impacts that should be considered in
determining whether machine learning improves outcomes:
• Goal Performance. Current systems operated by humans
have goals that they are meant to achieve. The first set of
outcome-oriented criteria for deciding whether to use
machine learning should be guided by those prevailing
goals. The relevant factors can be captured by a series of
straightforward questions: Would machine learning prove
more accurate in achieving an administrative agency’s
goals? Would it operate more quickly? Would it cost less?
Would it yield a greater degree of consistency? These
questions can be asked from the standpoint of the current
statutory purpose or operational goal of a human-driven
system. Decision-makers can also step back and use the
possibility of automation to consider current goals afresh.
They will do well to consider more precisely the underlying
problem that the system is supposed to solve and seek to
measure the degree to which the digital algorithm helps
solve it. The key will be to determine whether—and by how
much—machine learning will help an administrative
agency do its job better.224 As indicated in Part II.B, in
important instances digital algorithms can indeed achieve
improvements in the attainment of basic administrative and
224. For a discussion of regulatory outcomes and their evaluation, see COGLIANESE,
MEASURING REGULATORY PERFORMANCE, supra note 186, at 9–13.
policy goals. This does not mean, of course, that they will
always result in improvements.
• Impacts on Those Directly Affected. How machine
learning might help an agency do its job better is only one
way to consider its impacts. Unless already
fully captured in the agency’s own performance goals, it is
also important to assess the effects of machine learning on
those businesses or individuals who would be directly
affected by a specific machine-learning system, such as the
applicants for government benefits or licenses. How would
a machine-learning system treat them? Would their data be
kept private? Would some directly affected parties gain or
suffer disproportionately to others? Would those directly
affected by a machine-learning system feel that the system
has served them fairly? Recall that algorithmic systems do
not need to be perfect or completely problem free—just
better than the status quo. If the status quo for some tasks
is dependent on human personnel to answer telephones and
thus keeps members of the public waiting on hold for hours
before they speak to a person who can assist them, a
machine-learning chatbot could be much better, relatively
speaking. Indeed, the private firm eBay uses a fully
automated customer dispute resolution system that works
so well that customers who experience disputes are
reportedly more inclined to do business with eBay again
than are those who never experience a dispute in the first
place.225
• Impacts on Broader Public. Unless already factored into
the agency’s own performance goals, administrators
contemplating the introduction of a digital algorithmic
system should include broader societal effects in any
multicriteria analysis. How would machine learning affect
those who might not be directly interacting with or be
affected by the system? Will the errors that remain with
machine learning prove to have broader societal
consequences? Few such spillover effects might exist, for
example, with an automated mail-sorting system. But
they would certainly be present with a digital system that
225. See BENJAMIN H. BARTON & STEPHANOS BIBAS, REBOOTING JUSTICE: MORE
TECHNOLOGY, FEWER LAWYERS, AND THE FUTURE OF LAW 113 (2017); ETHAN KATSH &
ORNA RABINOVICH-EINY, DIGITAL JUSTICE: TECHNOLOGY AND THE INTERNET OF DISPUTES
34–35 (2017).
226. Cf. Adoption of Recommendations, 82 Fed. Reg. 61,728, 61,738 (Dec. 29, 2017)
(explaining the importance of agencies trying to “learn whether outcomes are improved in those
time periods or jurisdictions with the regulatory obligation”).
227. Professors David Engstrom and Daniel Ho call this approach “prospective
benchmarking.” David Freeman Engstrom & Daniel E. Ho, Algorithmic Accountability in the
Administrative State, 37 YALE J. ON REGUL. 800, 849–53 (2020).
228. Decision-makers would do well in this regard to consider the guidance offered by public
administration scholars about the need for ensuring legitimacy and accountability in
governmental uses of AI. See generally Madalina Busuioc, Accountable Artificial Intelligence:
Holding Algorithms to Account, 81 PUB. ADMIN. REV. 825 (2020) (providing recommendations
on how to address AI’s accountability issues); Matthew M. Young, Justin B. Bullock & Jesse D.
Lecy, Artificial Discretion as a Tool of Governance: A Framework for Understanding the Impact
of Artificial Intelligence on Public Administration, 2 PERSPS. ON PUB. MGMT. & GOVERNANCE
301 (2019) (“provid[ing] a framework for defining, characterizing, and evaluating artificial
discretion as a technology that both augments and competes with traditional bureaucratic
discretion”).
229. For a review of the litigation to date over governmental authorities’ use of mathematical
algorithms, see Coglianese & Ben Dor, supra note 8, at 827–36.
230. See Coglianese & Lehr, Transparency, supra note 1, at 30 (“[N]efarious governmental
action can take place entirely independently of any application of machine learning.”).
231. Id.; Coglianese & Lehr, Regulating by Robot, supra note 1, at 1202; Steven M. Appel &
Cary Coglianese, Algorithmic Administrative Justice, in THE OXFORD HANDBOOK OF
ADMINISTRATIVE JUSTICE (Marc Hertogh et al. eds., 2021). Some of this work forms a basis for
the discussion contained in this Part.
232. Coglianese & Lehr, Regulating by Robot, supra note 1, at 1215; Coglianese & Lehr,
Transparency, supra note 1, at 42, 55.
233. Cary Coglianese & Kathryn Hefter, From Negative to Positive Algorithm Rights, 30 WM.
& MARY BILL RTS. J. (forthcoming 2022).
234. See id.; Appel & Coglianese, supra note 231, at 15.
235. Private sector firms increasingly recognize the importance of full, robust vetting of new
forms of AI. Los Alamos National Laboratory, How Artificial Intelligence and Machine Learning
Transform the Human Condition, YOUTUBE, at 31:26 (Aug. 2, 2021), https://www.youtube.com/
watch?v=HyuqxdfC4oE [https://perma.cc/K53U-5Q8R] (address by Andrew Moore, Director of
Google Cloud AI). For guidance on auditing digital algorithms, see Joshua A. Kroll, Joanna
Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson & Harlan Yu,
Accountable Algorithms, 165 U. PA. L. REV. 633, 660–61 (2017); MILES BRUNDAGE ET AL.,
TOWARD TRUSTWORTHY AI DEVELOPMENT: MECHANISMS FOR SUPPORTING VERIFIABLE
CLAIMS 24–25 (2020), https://arxiv.org/pdf/2004.07213.pdf [https://perma.cc/T86W-LEGX];
SUPREME AUDIT INSTS. OF FIN., GER., THE NETH., NOR., & THE UK, AUDITING MACHINE
LEARNING ALGORITHMS: A WHITE PAPER FOR PUBLIC AUDITORS 15–17 (2020),
https://www.auditingalgorithms.net/auditing-ml.pdf [https://perma.cc/8WHB-VWZ2].
236. For a general overview of regulatory principles, proposals, and other initiatives related
to AI in the United States, see Christopher S. Yoo & Alicia Lai, Regulation of Algorithmic Tools
in the United States, 13 J.L. & ECON. REG. 7, 7–9 (2020). In addition to the guidelines noted in the
paragraph, the National Institute of Standards and Technology within the U.S. Department of
Commerce has been charged with developing a voluntary artificial intelligence risk management
framework, an effort it began in 2021. Artificial Intelligence Risk Management
Framework, 86 Fed. Reg. 40,810, 40,810 (July 29, 2021). The head of the White House Office of
Science and Technology Policy has indicated a further desire to develop its own set of principles
for governmental use of AI. Eric Lander & Alondra Nelson, Americans Need a Bill of Rights for
an AI-Powered World, WIRED (Oct. 8, 2021, 8:00 AM), https://www.wired.com/story/opinion-bill-
of-rights-artificial-intelligence [https://perma.cc/4FRF-S2GY].
237. ORG. FOR ECON. COOP. & DEV., RECOMMENDATION OF THE COUNCIL ON ARTIFICIAL
INTELLIGENCE (2019), https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
[https://perma.cc/5PH8-C8YA].
238. Agency Use of Artificial Intelligence, 86 Fed. Reg. 6,616, 6,616 (Jan. 22, 2021).
239. Exec. Order No. 13,960, 85 Fed. Reg. 78,939, 78,939 (Dec. 3, 2020).
240. U.S. GOV’T ACCOUNTABILITY OFF., ARTIFICIAL INTELLIGENCE: AN ACCOUNTABILITY
FRAMEWORK FOR FEDERAL AGENCIES AND OTHER ENTITIES (2021).
241. See Michael Sant’Ambrogio & Glen Staszewski, Democratizing Rule Development, 98
WASH. U. L. REV. 793, 832–33 (2021).
242. Public participation can offer agencies a chief advantage that economist Roger Porter
has attributed to a “multiple advocacy” model of presidential decision-making: namely, the full
presentation of competing viewpoints. ROGER B. PORTER, PRESIDENTIAL DECISION MAKING:
THE ECONOMIC POLICY BOARD 241–47 (1982). Participation can also reinforce the “active open-
mindedness” that is important for successful decision-making in any organizational setting.
PHILIP E. TETLOCK & DAN GARDNER, SUPERFORECASTING: THE ART AND SCIENCE OF
PREDICTION 126–27, 207–08 (2015).
also learn about a fuller range of values and interests that could be
affected by any digital algorithms they design and implement.
Government officials can benefit overall from tapping into the
distributed knowledge held by experts, activists, and others in the
broader public at various stages of project management, from planning
to ongoing use and continued improvement.243
Finally, when agencies contract with third-party vendors for the
development and operation of algorithmic decision-making systems,
they should consider the need to access and disclose sufficient
information about the algorithm, the underlying data, and the validation
results to satisfy subsequent expectations for transparency.244 In
establishing contract terms and conditions with external contractors,
administrators can insert provisions to ensure that contractors will
provide sufficient information to the agency and the public and will
adhere to basic principles of responsible action in the development of
algorithmic tools.245 Furthermore, given that human frailties can affect
all human decisions—including decisions about whether and how to
procure digital services—administrators should remain vigilant and
avoid being unduly persuaded by contractors’ sales pitches.246
In recommending careful and robust planning, public
participation, and procurement practices, we do not mean to suggest
that agency officials must apply these implementation strategies with
equal rigor in every case.247 To the contrary, just as agencies are expected
243. See Cary Coglianese, Heather Kilmartin & Evan Mendelson, Transparency and Public
Participation in the Federal Rulemaking Process, 77 GEO. WASH. L. REV. 924, 932 (2009).
244. See, e.g., Coglianese & Lehr, Transparency, supra note 1, at 21; Cary Coglianese & Erik
Lampmann, Contracting for Algorithmic Accountability, 6 ADMIN. L. REV. ACCORD 175, 186
(2021); David S. Rubenstein, Acquiring Ethical AI, 73 FLA. L. REV. 747, 799–803 (2021).
Consideration should also be paid to privacy protections for any data shared between contractors
and to the use of any privacy-enhancing technology. See generally KAITLIN ASROW & SPIRO
SAMONAS, FED. RSRV. BANK OF S.F., PRIVACY ENHANCING TECHNOLOGIES: CATEGORIES,
USE CASES, AND CONSIDERATIONS (2021), https://www.frbsf.org/economic-research/events/20
21/august/bard-harstad-climate-economics-seminar/files/Privacy-Enhancing-Technologies-Cat
egories-Use-Cases-and-Considerations.pdf [https://perma.cc/JJ7B-8Q5L] (discussing various forms
of privacy-enhancing technologies).
245. Lavi M. Ben Dor & Cary Coglianese, Procurement as AI Governance, 2 IEEE
TRANSACTIONS ON TECH. & SOC’Y 192, 194 (2021).
246. See Omer Dekel & Amos Schurr, Cognitive Biases in Government Procurement – An
Experimental Study, 10 REV. L. & ECON. 169, 170–71 (2014) (describing systemic biases
influencing competitive bidding in governmental contracts).
247. What Porter has to say about structuring White House decision-making applies in any
governmental context, including agency decision-making about the use of digital tools: “Different
circumstances require different organizational responses. An executive should weigh carefully the
249. Again, the notion of risk here is that to the governmental entity rather than to society
or to affected individuals. For all the reasons articulated in Parts I and II, the risks of error and of
adverse consequences to affected individuals or society may well be markedly greater when
machine learning only provides an input into otherwise flawed human judgment.
250. The European Union has proposed making similar distinctions between high-risk and
low-risk uses of AI and then imposing greater regulatory obligations on those organizations that
develop high-risk forms of AI. See generally Proposal for a Regulation of the European Parliament
and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial
Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final (Apr. 21,
2021) (outlining the proposal). What the European Union proposal contemplates by “risk”
approximates what we discuss here as the “stakes” associated with any particular use case. But by
using the term “stakes,” we self-consciously contemplate the possibility that a shift to an AI-based
system in high-stakes circumstances might lower the probability of error and thus reduce the level
of risk (understood as probability multiplied by the consequences) to those individuals or entities
affected by the AI system. The use of AI could perhaps even convert otherwise high-risk
circumstances to ones of low risk for affected individuals or entities. Nevertheless, for the
government agency, the existence of high stakes in the form of substantial potential consequences
to the affected individuals could still present the agency with greater organizational risk of conflict
and controversy as it contemplates a shift even to such an efficacious AI system.
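To illustrate with invented numbers: if an erroneous denial costs an affected applicant $10,000, a human process that errs five percent of the time presents an expected harm of $500 per case, while an automated system carrying the same consequence but a one percent error rate presents an expected harm of only $100 per case. The stakes remain high, but the risk to affected individuals falls.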
251. See Jessica Mulholland, Chatbots Debut in North Carolina, Allow IT Personnel To Focus
on Strategic Tasks, GOV’T TECH. (Oct. 11, 2016), https://www.govtech.com/Chatbots-Debut-in-
North-Carolina-Allow-IT-Personnel-to-Focus-on-Strategic-Tasks.html [https://perma.cc/8FJA-CXB6].
[Figure: potential uses arrayed along two axes, from low stakes to high stakes and from low to high levels of determination.]
252. Although this figure uses discrete cells for ease of illustration, both axes should be conceived
as continua: from low stakes to high stakes, and from low levels of determination to high levels.
253. State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
254. See id. at 753.
CONCLUSION
Administrative agencies face choices about whether and when to
rely on automated decision-making systems. The increasing use of
machine-learning algorithms to drive automation in business,
medicine, transportation, and other facets of society portends a future
of increased use of machine-learning tools by government. Indeed,
government agencies have already been developing and relying upon
digital algorithms to assist with enforcement, benefits administration,
and other important government tasks.
Moving toward governance aided by digital algorithms naturally
gives rise to concerns about how these new digital tools will affect the
effectiveness, fairness, and openness of governmental decision-
making. This Article shows that concerns about machine-learning
systems should be kept in perspective. The status quo that relies on
human algorithms is itself far from perfect. If the responsible use of
machine learning can usher in a government that—at least for certain
uses—achieves better results than the status quo at the same or even
255. Id.
256. See generally Coglianese & Hefter, supra note 233 (discussing both positive and negative
consequences of AI decision-making and contemplating a shift in social acceptance of algorithmic
tools by governmental entities).
lower cost, then both governmental officials and the public would do
well to support such use.
The challenge for agencies will be to decide when and how to use
digital algorithms to reap their advantages. Agency officials should
exercise appropriate caution when making decisions about digital
algorithms—especially because these decisions can be affected by the
same foibles and limitations that can affect any human decision.
Officials should consider whether a potential use of a digital algorithm
will satisfy the general preconditions for the success of such algorithms,
and then they should seek to test whether such algorithms will indeed
deliver improved outcomes. With sound planning and risk
management, government agencies can make the most of what digital
algorithms can deliver by way of improvements over existing human
algorithms.