James Manyika, "Getting AI Right," Dædalus (2022)
This dialogue is from an early scene in the 2014 film Ex Machina, in which
Nathan has invited Caleb to determine whether Nathan has succeeded
in creating artificial intelligence.1 The achievement of powerful artificial
general intelligence has long held a grip on our imagination not only for its excit-
ing as well as worrisome possibilities, but also for its suggestion of a new, unchart-
ed era for humanity. In opening his 2021 BBC Reith Lectures, titled “Living with
Artificial Intelligence,” Stuart Russell states that “the eventual emergence of gen-
eral-purpose artificial intelligence [will be] the biggest event in human history.”2
Over the last decade, a rapid succession of impressive results has brought wid-
er public attention to the possibilities of powerful artificial intelligence. In ma-
chine vision, researchers demonstrated systems that could recognize objects as
well as, if not better than, humans in some situations. Then came the games.
Complex games of strategy have long been associated with superior intelligence,
and so when AI systems beat the best human players at chess, Atari games, Go,
shogi, StarCraft, and Dota, the world took notice. It was not just that AIs beat hu-
mans (although that was astounding when it first happened), but the escalating
progression of how they did it: initially by learning from expert human play, then
from self-play, then by teaching themselves the principles of the games from the
ground up, eventually yielding single systems that could learn, play, and win at
multiple games.3
This volume of Dædalus is the first since 1988 to be devoted to artificial
intelligence. It does not rehash the same debates; much else has happened
since, mostly as a result of the success of the machine learning approach
that, as discussed in the 1988 volume, was then being rediscovered and reimagined. This
issue aims to capture where we are in AI’s development and how its growing uses
impact society. The themes and concerns herein are colored by my own involve-
ment with AI. Besides the television, films, and books that I grew up with, my in-
terest in AI began in earnest in 1989 when, as an undergraduate at the University of
Zimbabwe, I undertook a research project to model and train a neural network.9
I went on to do research on AI and robotics at Oxford. Over the years, I have been
involved with researchers in academia and labs developing AI systems, studying
AI’s impact on the economy, tracking AI’s progress, and working with others in
business, policy, and labor grappling with its opportunities and challenges for
society.10
The authors of the twenty-five essays in this volume range from AI scientists
and technologists at the frontier of many of AI’s developments to social scientists
at the forefront of analyzing AI’s impacts on society. The volume is organized into
ten sections. Half of the sections are focused on AI’s development, the other half
on its intersections with various aspects of society. In addition to the diversity in
their topics, expertise, and vantage points, the authors bring a range of views on
the possibilities, benefits, and concerns for society. I am grateful to the authors for
accepting my invitation to write these essays.
Before proceeding further, it may be useful to say what we mean by artifi-
cial intelligence. The headlines and increasing pervasiveness of AI and its
associated technologies have led to some conflation and confusion about
what exactly counts as AI. This has not been helped by the current trend–among
researchers in science and the humanities, startups, established companies, and
even governments–to associate anything involving not only machine learning,
but data science, algorithms, robots, and automation of all sorts with AI. This
could simply reflect the hype now associated with AI, but it could also be an ac-
knowledgment of the success of the current wave of AI and its related techniques
and their wide-ranging use and usefulness. I think both are true, but it has not al-
ways been like this. In the period now referred to as the AI winter, during which
progress in AI did not live up to expectations, there was a reluctance to associate
most of what we now call AI with AI.
Two types of definitions are typically given for AI. The first are those that sug-
gest that it is the ability to artificially do what intelligent beings, usually human,
can do. For example, artificial intelligence is:
the ability of a digital computer or computer-controlled robot to perform tasks com-
monly associated with intelligent beings.11
The second type defines AI in terms of computational agents that act intelligently.12 This second type of definition also suggests the pursuit of goals, which could be given
to the system, self-generated, or learned.13 That both types of definitions are em-
ployed throughout this volume yields insights of its own.
These definitional distinctions notwithstanding, the term AI, much to the cha-
grin of some in the field, has come to be what cognitive and computer scientist
Marvin Minsky called a “suitcase word.”14 It is packed variously, depending on
who you ask, with approaches for achieving intelligence, including those based on
logic, probability, information and control theory, neural networks, and various
other learning, inference, and planning methods, as well as their instantiations in
software, hardware, and, in the case of embodied intelligence, systems that can
perceive, move, and manipulate objects.
Three questions cut through the discussions in this volume: 1) Where are
we in AI’s development? 2) What opportunities and challenges does AI
pose for society? 3) How much about AI is really about us?
For much of AI's history, the dominant vision was the symbolic approach of making a mind. Not only did this approach benefit from championing by its advocates and plen-
tiful funding, it came with the suggested weight of a long intellectual tradition–
exemplified by Descartes, Boole, Frege, Russell, and Church, among others–that
sought to manipulate symbols and to formalize and axiomatize knowledge and
reasoning. It was only in the late 1980s that interest began to grow again in the sec-
ond vision, largely through the work of David Rumelhart, Geoffrey Hinton, James
McClelland, and others. The history of these two visions and the associated philo-
sophical ideas are discussed in Hubert Dreyfus and Stuart Dreyfus’s 1988 Dædalus
essay “Making a Mind Versus Modeling the Brain: Artificial Intelligence Back at
a Branchpoint.”20 Since then, the approach to intelligence based on learning, the
use of statistical methods, back-propagation, and training (supervised and unsu-
pervised) has come to characterize the current dominant approach.
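To give a concrete, if deliberately toy, sense of what learning by back-propagation means in practice, the sketch below trains a tiny two-layer neural network to reproduce the XOR function from labeled examples. It is a minimal illustration written for this introduction, not code from any of the essays; the architecture, learning rate, and number of training steps are arbitrary choices made for the example.

```python
# A minimal sketch of supervised learning with back-propagation:
# a two-layer network learns XOR from four labeled examples using NumPy.
# All hyperparameters here are illustrative choices, not recommendations.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs and labels for XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2-4-1 network.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(10000):
    # Forward pass: compute predictions.
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    p = sigmoid(h @ W2 + b2)          # predicted probability of label 1

    # Backward pass: gradients of the cross-entropy loss, layer by layer.
    d_z2 = p - y                      # gradient at the output pre-activation
    d_W2 = h.T @ d_z2
    d_b2 = d_z2.sum(axis=0, keepdims=True)
    d_z1 = (d_z2 @ W2.T) * h * (1 - h)
    d_W1 = X.T @ d_z1
    d_b1 = d_z1.sum(axis=0, keepdims=True)

    # Gradient-descent update of every parameter.
    W1 -= learning_rate * d_W1
    b1 -= learning_rate * d_b1
    W2 -= learning_rate * d_W2
    b2 -= learning_rate * d_b2

# After training, the predictions approach the XOR labels [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```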
Kevin Scott, in his essay “I Do Not Think It Means What You Think It Means:
Artificial Intelligence, Cognitive Work & Scale,” reminds us of the work of Ray
Solomonoff and others linking information and probability theory with the idea
of machines that can not only learn, but compress and potentially generalize what
they learn, and the emerging realization of this in the systems now being built and
those to come. The success of the machine learning approach has benefited from
the boom in the availability of data to train the algorithms thanks to the growth in
the use of the Internet and other applications and services. In research, the data
explosion has been the result of new scientific instruments and observation plat-
forms and data-generating breakthroughs, for example, in astronomy and in ge-
nomics. Equally important has been the co-evolution of the software and hard-
ware used, especially chip architectures better suited to the parallel computations
involved in data- and compute-intensive neural networks and other machine
learning approaches, as Dean discusses.
Several authors delve into progress in key subfields of AI.21 In their essay, “Search-
ing for Computer Vision North Stars,” Fei-Fei Li and Ranjay Krishna chart devel-
opments in machine vision and the creation of standard data sets such as ImageNet
that could be used for benchmarking performance. In their respective essays “Hu-
man Language Understanding & Reasoning” and “The Curious Case of Common-
sense Intelligence,” Chris Manning and Yejin Choi discuss different eras and ideas
in natural language processing, including the recent emergence of large language
models that comprise hundreds of billions of parameters and use transformer
architectures and self-supervised learning on vast amounts of data.22 The result-
ing pretrained models are impressive in their capacity to take natural language
prompts for which they have not been trained specifically and generate human-like
outputs, not only in natural language, but also images, software code, and more,
as Mira Murati discusses and illustrates in “Language & Coding Creativity.” Some
have started to refer to these large language models as foundational models in that
once they are trained, they are adaptable to a wide range of tasks and outputs.23 But
despite their unexpected performance, these large language models are still early
in their development and have many shortcomings and limitations that are high-
lighted in this volume and elsewhere, including by some of their developers.24
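To make the idea of prompting such a pretrained model concrete, the short sketch below uses the open source Hugging Face transformers library with the small, publicly available GPT-2 model to generate continuations of a prompt. It is only a stand-in for the far larger models discussed in these essays; the model choice, prompt, and generation settings are assumptions made for illustration.

```python
# Illustrative sketch: prompting a small pretrained language model.
# GPT-2 stands in here for the much larger models discussed above.
# Requires: pip install transformers torch
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuations reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change how societies"
outputs = generator(prompt, max_length=40, num_return_sequences=2)

for i, out in enumerate(outputs):
    print(f"--- continuation {i + 1} ---")
    print(out["generated_text"])
```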
In “The Machines from Our Future,” Daniela Rus discusses the progress in
robotic systems, including advances in the underlying technologies, as well as in
their integrated design that enables them to operate in the physical world. She
highlights the limitations in the “industrial” approaches used thus far and sug-
gests new ways of conceptualizing robots that draw on insights from biological
systems. In robotics, as in AI more generally, there has always been a tension as to
whether to copy or simply draw inspiration from how humans and other biologi-
cal organisms achieve intelligent behavior. Elsewhere, AI researcher Demis Hassa-
bis and colleagues have explored how neuroscience and AI learn from and inspire
each other, although so far more in one direction than the other, as Alexis Baria
and Keith Cross have suggested.25
Despite the success of the current approaches to AI, there are still many short-
comings and limitations, as well as conceptually hard problems in AI.26 It is useful
to distinguish, on the one hand, problematic shortcomings, such as when AI does not
perform as intended or safely, or produces biased or toxic outputs that can lead to
harm, or when it impinges on privacy, or generates false information about the
world, or when it has characteristics such as lack of explainability, all of which
can lead to a loss of public trust. These shortcomings have rightly captured the at-
tention of the wider public and regulatory bodies, as well as researchers, among
whom there is an increased focus on technical and ethics issues in AI.27 In recent
years, there has been a flurry of efforts to develop principles and approaches to re-
sponsible AI, as well as bodies involving industry and academia, such as the Part-
nership on AI, that aim to share best practices.28 Another important shortcoming
has been the significant lack of diversity–especially with respect to gender and
race–in the people researching and developing AI in both industry and academia,
as has been well documented in recent years.29 This is an important gap in its own
right, but also with respect to the characteristics of the resulting AI and, conse-
quently, in its intersections with society more broadly.
On the other hand, there are limitations and hard problems associated with
the things that AI is not yet capable of that, if solved, could lead to more power-
ful, more capable, or more general AI. In their Turing Lecture, deep learning pio-
neers Yoshua Bengio, Yann LeCun, and Geoffrey Hinton took stock of where deep
learning stands and highlighted its current limitations, such as the difficulties
with out-of-distribution generalization.30 In the case of natural language process-
ing, Manning and Choi highlight the hard challenges in reasoning and common-
sense understanding, despite the surprising performance of large language mod-
els. Elsewhere, computational linguists Emily Bender and Alexander Koller have
challenged the notion that large language models do anything resembling under-
standing.
Though the question of whether, how, and when AGI will be achieved
is a matter for debate, most agree that its achievement would have profound im-
plications–beneficial and worrisome–for humanity, as is often depicted in pop-
ular books38 and films such as 2001: A Space Odyssey through Terminator and The
Matrix to Ex Machina and Her. Whether it is imminent or not, there is growing
agreement among many at the frontier of AI research that we should prepare for
the possibility of powerful AGI with respect to safety and control, alignment and
compatibility with humans, its governance and use, and the possibility that mul-
tiple varieties of AGI could emerge, and that we should factor these considerations
into how we approach the development of AGI.
Most of the investment, research and development, and commercial activi-
ty in AI today is of the narrow AI variety and in its numerous forms: what Nigel
Shadbolt terms the speciation of AI. This is hardly surprising given the scope for
useful and commercial applications and the potential for economic gains in mul-
tiple sectors of the economy.39 However, a few organizations have made the de-
velopment of AGI their primary goal. Among the most well-known of these are
DeepMind and OpenAI, each of which has demonstrated results of increasing
generality, though still a long way from AGI.
The direction that AI's development will take in this regard, and the resulting outcomes
for work, will depend on the incentives for researchers, companies, and governments.42
Still, a concern remains that the conclusion that more jobs will be created
than lost draws too much from patterns of the past and does not look far enough
into the future and at what AI will be capable of. The arguments for why AI could
break from past patterns of technology-driven change include: first, that unlike
in the past, technological change is happening faster and labor markets (includ-
ing workers) and societal systems’ ability to adapt are slow and mismatched; and
second, that, until now, automation has mostly mechanized physical and routine
tasks, but that going forward, AI will be taking on more cognitive and nonroutine
tasks, creative tasks, tasks based on tacit knowledge, and, if early examples are
any indication, even socioempathic tasks are not out of the question.43 In other
words, “There are now in the world machines that think, that learn and that cre-
ate. Moreover, their ability to do these things is going to increase rapidly until–in
a visible future–the range of problems they can handle will be coextensive with
the range to which the human mind has been applied.” This was Herbert Simon
and Allen Newell in 1957.44
Acknowledging that this time could be different usually elicits two responses:
The first is that new labor markets will emerge in which people will value things done
by other humans for their own sake, even when machines may be capable of doing
these things as well as or even better than humans. The other response is that AI
will create so much wealth and material abundance, all without the need for hu-
man labor, and the scale of abundance will be sufficient to provide for everyone’s
needs. And when that happens, humanity will face the challenge that Keynes once
framed: “For the first time since his creation man will be faced with his real, his
permanent problem–how to use his freedom from pressing economic cares, how
to occupy the leisure, which science and compound interest will have won for him,
to live wisely and agreeably and well.”45 However, most researchers believe that
we are not close to a future in which the majority of humanity will face Keynes’s
challenge, and that until then, there are other AI- and automation-related effects
that must be addressed in the labor markets now and in the near future, such as in-
equality and other wage effects, education, skilling, and how humans work along-
side increasingly capable machines–issues that Laura Tyson and John Zysman,
Michael Spence, and Erik Brynjolfsson discuss in this volume.
Jobs are not the only aspect of the economy impacted by AI. Russell provides a
directional estimate of the potentially huge economic bounty from artificial gen-
eral intelligence, once fully realized: a global GDP of $750 trillion, or ten times
today’s global GDP. But even before we get to fully realized general-purpose AI,
the commercial opportunities for companies and, for countries, the potential pro-
ductivity gains and economic growth as well as economic competitiveness from
narrow AI and its related technologies are more than sufficient to ensure intense
competition among companies and countries.
As the use of AI has grown to encompass not only consumer applications and
services, but also those in health care, financial services, public services, and com-
merce generally, it has in many instances improved effectiveness and decision
quality and enabled much-needed cost and performance optimization. At the same
time, in some cases, the use of algorithms has led to issues of bias and fairness, of-
ten the result of bias in the training data and the societal systems through which
such data are collected.52 Sonia Katyal uses examples from facial recognition, po-
licing, and sentencing to argue in “Democracy & Distrust in an Era of Artificial
Intelligence” that, when there is an absence of representation and participation,
AI-powered systems carry the same risks and potential for distrust as political sys-
tems. In “Distrust of Artificial Intelligence: Sources & Responses from Comput-
er Science & Law,” Cynthia Dwork and Martha Minow highlight the absence of
ground truth and what happens when utility for users and commercial interests
are at odds with considerations of privacy and the risks of societal harms.53 In light
of these concerns, as well as the beneficial possibilities of AI, Mariano-Florentino
Cuéllar, a former California Supreme Court Justice, and Aziz Huq frame how we
might achieve the title of their essay: artificially intelligent regulation.
It is easy to see how governments and organizations, in their desire to observe,
analyze, and optimize everything, would be tempted to use AI to create increas-
ingly powerful “seeing rooms.” In “Socializing Data,” Diane Coyle discusses the
history and perils of seeing rooms, even when well intentioned, and the problems
that arise when markets are the primary mechanism for how AI uses social data.
For governments, the opportunity to use AI to improve the delivery and effective-
ness of public services is also hard to ignore. In her essay “Rethinking AI for Good
Governance,” Helen Margetts asks what a public sector AI would look like. She
draws on public sector examples from different countries to highlight key chal-
lenges, notably those related to issues like resource allocation, that are more “nor-
matively loaded” in the public sector than they are for firms. She concludes by
exploring how and in which areas governments can make the most ambitious and
societally beneficial use of AI.
What is left for us to do and to be when more and more things are better done by machines? How much of being human needs the mystery of not
knowing how it works, or relies on our inability to mimic it or replicate it artifi-
cially? What happens when this changes? To what extent do our human-ability-
bounded conceptions of X (where X could be intelligence, creativity, empathy, re-
lations, and so on) limit the possibility of other forms of X that may complement
or serve humanity better? To what extent must we reexamine our socioeconomic
systems and institutions, our social infrastructure, what lies at the heart of our so-
cial policies and our notions of justice, representation, and inclusion, and face up to
what they really are (and have been) and what they will need to be in the age of AI?
Their shortcomings notwithstanding, the emergence of large language models
and their ability to generate human-like outputs provides a “laboratory” of sorts,
as Tobias Rees calls it, to explore questions about us in an era of increasingly ca-
pable machines. We may have finally arrived at what Dennett suggests at the end
of his 1988 essay, that “AI has not yet solved any of our ancient riddles . . . but it
has provided us with new ways of disciplining and extending philosophical imag-
ination that we have only just begun to exploit.”54 Murati explores how humans
could relate to and work alongside machines when machines can generate out-
puts approaching human-like creativity. She illustrates this with examples gen-
erated by GPT-3, OpenAI’s large language model. The possibilities she describes
echo what Scott suggests: that we humans may have to rethink our relation to
work and other creative activities.
Blaise Agüera y Arcas explores the titular question of his essay “Do Large Lan-
guage Models Understand Us?” through a series of provocations interspersed
with outputs from LaMDA, Google’s large language model. He asks whether we
are gatekeeping or constantly moving the goalposts when it comes to notions
such as intelligence or understanding, even consciousness, in order to retain these
for ourselves. Pamela McCorduck, in her still-relevant history of the field, Ma-
chines Who Think, first published in 1979, put it thus: “It’s part of the history of the
field of artificial intelligence that every time somebody figured out how to make
a computer do something–play good checkers, solve simple but relatively infor-
mal problems–there was a chorus of critics to say, ‘that’s not thinking.’”55 As to
what machines are actually doing or not actually doing when they appear to be
thinking, one could ask whether whatever they are doing is different from what
humans do in any way other than how it is being done. In “Non-Human Words:
On GPT-3 as a Philosophical Laboratory,” while engaging in current debates about
the nature of these models, Rees also discusses how conceptions of the human
have been intertwined with language in different historical eras and considers the
possibility of a new era in which language is separated from humans.
In “Signs Taken for Wonders: AI, Art & the Matter of Race,” Michele Elam
illustrates how, throughout history, socially transformative technologies have
played a formalizing and codifying role in our conceptions of what constitutes
humanity and who the "us" is. She argues that, in how they are developed, used,
and monetized, and by whom, technologies like AI have the effect of universaliz-
ing particular conceptions of what it is to be human and to progress, often at the
exclusion of other ways of being human and of progressing and knowing, espe-
cially those associated with Black, Latinx, and Indigenous communities and with
feminist, queer, disability, and decolonial perspectives, further highlighting the
need for diversity among those involved in AI’s development. Elsewhere, Tim-
nit Gebru has clearly illustrated how, like other technologies with the potential to
benefit society, AI can also worsen systematic discrimination against already margin-
alized groups.56 In another example of AI as formalizer to ill-effect, Blaise Agüera
y Arcas, Margaret Mitchell, and Alexander Todorov examine the use of machine
learning to correlate physical characteristics with nonphysical traits, not unlike
nineteenth- and twentieth-century physiognomy, and point out the harmful cir-
cular logic of essentialism that can result when AI is used as a detector of traits.57
Progress in AI not only raises the stakes on ethical issues associated with its
application, it also helps bring to light issues already extant in society. Many have
shown how algorithms and automated decision-making can not only perpetuate
but also formalize and amplify existing societal inequalities, as well as create new
inequalities.58 In addition, the challenge to remove bias or code for fairness may
also create the opportunity for society to examine in a new light what it means by
“fair.”59 Here it is worth recalling Dennett being unimpressed by Putnam’s indict-
ment of AI, that “AI has utterly failed, over a quarter century, to solve problems
that philosophy has utterly failed to solve over two millennia.”60 Furthermore,
examining the role of algorithms and automated decision-making and the data
needed to inform algorithms may shed light on what actually underlies society’s
goals and policies in the first place, issues that have begun to receive attention in
the literature of algorithms, fairness, and social welfare.61 In “Toward a Theory
of Justice for Artificial Intelligence,” Iason Gabriel, drawing on Rawls’s theory of
justice, explores the intersection of AI and distributive justice by considering the
role that sociotechnical systems play. He examines issues including basic liberties
and equality of opportunity to suggest that considerations of distributive justice
may now need to grapple with the particularities of AI as a technological system,
which could lead to some novel consequences.
And as AI becomes more powerful, a looming question becomes how to align AI
with humans with respect to safety and control, goals and preferences, even values.
The question of AI and control is as old as the field itself; Turing himself raised it,
as Russell reminds us. Some researchers believe that concerns about these sorts of
risks are overblown given the nature of AI, while others believe we are a long way
away from existential control risks but that research must begin to consider ap-
proaches to the control issue and factor it into how we develop more powerful AI
systems.62 Russell proposes an approach to alignment and human compatibility
that capitalizes on uncertainty in goals and human preferences, and makes use of
inverse reinforcement learning as a way for machines to learn human preferences.
Elsewhere, Gabriel has discussed the range of possibilities as to what we mean by
alignment with AI, with each possibility presenting its own complexities.63 But in
Gabriel, as in Russell, there are considerable normative challenges involved, along
with complications due to the plasticity of human preferences.
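The basic move behind learning preferences from behavior can be illustrated with a toy example. The sketch below is not Russell's formulation (and assumes a one-step setting rather than a full Markov decision process): a simulated person repeatedly chooses among options described by feature vectors, the choices are modeled with a softmax over an unknown linear reward, and the reward weights are recovered by maximum likelihood, the same inferential step that underlies many inverse reinforcement learning and preference-learning methods. The features, the "true" weights, and the optimization settings are all invented for the example.

```python
# Toy sketch: inferring hidden preference weights from observed choices,
# a one-step analogue of inverse reinforcement learning. All numbers invented.
import numpy as np

rng = np.random.default_rng(1)
n_features = 3
true_w = np.array([2.0, -1.0, 0.5])   # the person's hidden preference weights

def softmax(v):
    v = v - v.max()
    e = np.exp(v)
    return e / e.sum()

# Simulate observed behavior: in each episode the person picks one of five
# options (each described by a feature vector) with probability softmax(w . phi).
episodes = []
for _ in range(500):
    options = rng.normal(size=(5, n_features))
    choice = rng.choice(5, p=softmax(options @ true_w))
    episodes.append((options, choice))

# Recover the weights by maximizing the likelihood of the observed choices
# with plain gradient ascent (the log-likelihood here is concave).
w = np.zeros(n_features)
learning_rate = 0.1
for _ in range(300):
    grad = np.zeros(n_features)
    for options, choice in episodes:
        probs = softmax(options @ w)
        # Gradient of the log-likelihood: chosen features minus expected features.
        grad += options[choice] - probs @ options
    w += learning_rate * grad / len(episodes)

print("true weights:     ", true_w)
print("estimated weights:", np.round(w, 2))
```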
In “Artificial Intelligence, Humanistic Ethics,” John Tasioulas argues that de-
signing AI that aligns with human preferences is one thing, but it does not obviate
the need to determine what those human preferences should be in the first place.
He challenges the tendency to default to preference utilitarianism and its maximi-
zation by AI developers, as well as by economic and governmental actors (who of-
ten use wealth maximization and GDP as proxies), which leads to market mecha-
nisms dominating solutions at the expense of nonmarket values and mechanisms,
echoing some of Coyle’s concerns. Here again it seems that the mirror provided by
more capable AI highlights, and with higher stakes, the unfinished (perhaps never
to be finished) business of humanistic ethics, not unlike how AI may be pushing
us to clarify fairness and serving notice that trolley problems are no longer just the
stuff of thought experiments, since we are building autonomous systems that may
have to make such choices.
Throughout the history of AI, we have asked: how good is it now? This ques-
tion has been asked about every application from playing chess or Go, to know-
ing things, performing surgery, driving a car, writing a novel, creating art, inde-
pendently making mathematical conjectures or scientific discoveries, or simply
having a good bedside manner. In asking the question, it may be useful also to ask:
compared to what? With an eye toward implications for society, one might com-
pare AI with the humans best at the respective activity. There remain plenty of
activities in which the “best” humans perform better than AI–as they likely will
for the foreseeable future–and society is well served by these humans perform-
ing these activities. One might also compare with other samplings of humanity,
such as the average person employed in or permitted to conduct that activity, or
a randomly selected human. And here, as AI becomes more capable, is where the
societal implications get more complicated. For example, do we raise permission
standards for humans performing safety-critical activities to keep up with ma-
chine capabilities? Similarly, what determines when AI is good enough? A third
comparison might be with respect to how co-extensive the range of AI capabili-
ties becomes with those of humans–what Simon and Newell, as mentioned earli-
er, thought would eventually come to pass. How good AI systems become in this
respect would likely herald the beginning of a new era for us and for society of the
sort discussed previously. But perhaps the most important comparison is with re-
spect to what we choose to use AI for and what we need AI to be capable of in order
to benefit society. It would seem that in any such comparisons, along with how we
design, develop, and deploy AI, the societal implications are not foregone conclu-
sions, but choices that are up to us.
Is all this worth it? If not, a logical response might be to stop everything, stop
further development and deployment of AI, put the curses back in Pandora’s
box. This hardly seems realistic, given the huge economic and strategic stakes
and the intense competition that has been unleashed between countries and be-
tween companies, not to mention the usefulness of AI to its users and the tanta-
lizing beneficial possibilities, some already here, for society. My response to the
question is a conditional yes.
At an AI conference a few years ago, I participated on a panel to which the host,
Stuart Russell, posed a thought experiment. I forget the exact formulation, or even
how I responded, but I have come to express it as follows:
It's the year 2050; AI has turned out to be hugely beneficial to society and generally
acknowledged as such. What happened?
A third set of challenges includes outright misuses, many more unintended consequences, and destabilizing race con-
ditions among the various competitors. A fourth set of challenges concerns us:
how we co-evolve our societal systems and institutions and negotiate the com-
plexities of how to be human in an age of increasingly powerful AI.
Readers of this volume will undoubtedly develop their own perspectives on
what we collectively must get right if AI is to be a net positive for humanity. While
such lists will necessarily evolve as our uses and societal experience with AI grow
and as AI itself becomes more powerful, the work on them must not wait.
Returning to the question, is this worth it? My affirmative answer is condi-
tioned on confronting and getting right these hard issues. At present, it seems that
the majority of human ingenuity, effort, and financial and other resources are dis-
proportionately focused on commercial applications and the economic potential
of AI, and not enough on the other issues that are also critical for AI to be a net ben-
efit to humanity given the stakes. We can change that.
author’s note
I am grateful to the American Academy for the opportunity to conceive this Dæda-
lus volume on AI & Society and to bring together diverse perspectives on AI across
a range of topics. On a theme as broad as this, there are without doubt many more
topics and views that are missing; for that I take responsibility.
I would like to thank the Fellows of All Souls College, Oxford, where I have been a
Visiting Fellow during the editing of this Dædalus volume. I would also like to thank
my colleagues at the McKinsey Global Institute, the AI Index, and the 100-Year
Study of AI at Stanford, as well as my fellow members on the National Academies
of Sciences, Engineering, and Medicine Committee on Responsible Computing Re-
search and Its Applications, for our many discussions as well as our work togeth-
er that informed the shape of this volume. I am grateful for the conversations with
the authors in this volume and with others, including Hans-Peter Brondmo, Gil-
lian Hadfield, Demis Hassabis, Mary Kay Henry, Reid Hoffman, Eric Horvitz, Mar-
garet Levi, Eric Salobir, Myron Scholes, Julie Su, Paul Tighe, and Ngaire Woods. I
am grateful for valuable comments and suggestions on this introduction from Jack
Clark, Erik Brynjolfsson, Blaise Agüera y Arcas, Julian Manyika, Sarah Manyika,
Maithra Raghu, and Stuart Russell, but they should not be held responsible for any
errors or opinions herein.
This volume could not have come together without the generous collaboration of
the Academy’s editorial team of Phyllis Bendell, Director of Publications and Man-
aging Editor of Dædalus, who brought her experience as guide and editor, and en-
thusiasm from the very beginning to the completion of this effort, and Heather
Struntz and Peter Walton, who were collaborative and expert copyeditors for all
the essays in this volume.
endnotes
1 The Turing Test was conceived by Alan Turing in 1950 as a way of testing whether a com-
puter’s responses are indistinguishable from those of a human. Though it is often dis-
cussed in popular culture as a test for artificial intelligence, many researchers do not
consider it a test of artificial intelligence; Turing himself called it “the imitation game.”
Alan M. Turing, “Computing Machinery and Intelligence,” Mind, October 1950.
2 “The Reith Lectures: Living with Artificial Intelligence,” BBC, https://www.bbc.co.uk/
programmes/m001216k.
3 Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, et al., “MuZero: Mastering
Go, Chess, Shogi and Atari without Rules,” DeepMind, December 23, 2020; and Chris-
topher Berner, Greg Brockman, Brooke Chan, et al., “Dota 2 with Large Scale Deep Re-
inforcement Learning,” arXiv (2019), https://arxiv.org/abs/1912.06680.
4 The Department of Energy’s report on AI for science provides an extensive review of both
the current state-of-the-art uses of AI in various branches of science as well as the grand
challenges for AI in each. See Rick Stevens, Valerie Taylor, Jeff Nichols, et al., AI for
Science: Report on the Department of Energy (DOE) on Artificial Intelligence (AI) for Science (Oak
Ridge, Tenn.: U.S. Department of Energy Office of Scientific and Technical Information,
2020), https://doi.org/10.2172/1604756. See also the Royal Society and the Alan Turing
Institute, “The AI Revolution in Scientific Research” (London: The Royal Society, 2019).
5 See Kathryn Tunyasuvunakool, Jonas Adler, Zachary Wu, et al., “Highly Accurate Protein
Structure Prediction for the Human Proteome,” Nature 596 (7873) (2021); Janet Thorn-
ton and colleagues discuss the contributions of AlphaFold to the life sciences, including
its use in predicting the structure of some of the proteins associated with SARS-CoV-2,
the virus that causes COVID-19. See Janet Thornton, Roman A. Laskowski, and Neera
Borkakoti, “AlphaFold Heralds a Data-Driven Revolution in Biology and Medicine,”
Nature Medicine 27 (10) (2021).
6 “AI Set to Exceed Human Brain Power,” CNN, August 9, 2006, http://edition.cnn.com/
2006/TECH/science/07/24/ai.bostrom/.
7 See Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” Pro-
Publica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments
-in-criminal-sentencing; and Joy Buolamwini and Timnit Gebru, “Gender Shades: In-
tersectional Accuracy Disparities in Commercial Gender Classification,” in Proceedings
of the 1st Conference on Fairness, Accountability and Transparency (New York: Association for
Computing Machinery, 2018).
8 See Hilary Putnam, “Much Ado About Not Very Much,” Dædalus 117 (1) (Winter 1988):
279, https://www.amacad.org/sites/default/files/daedalus/downloads/Daedalus_Wi98_
Artificial-Intelligence.pdf. In the same volume, see also Daniel Dennett’s essay “When
Philosophers Encounter Artificial Intelligence,” in which he provides a robust response
to Putnam while also making observations about AI and philosophy that, with the ben-
efit of hindsight, remain insightful today, even as the field has progressed.
9 Robert K. Appiah, Jean H. Daigle, James M. Manyika, and Themuso Makhurane, “Model-
ling and Training of Artificial Neural Networks,” African Journal of Science and Technology
Series B, Science 6 (1) (1992).
10 Founded by Eric Horvitz, the 100-Year Study of AI that I have been involved in publish-
es a report every five years; its most recent report takes stock of progress in AI as well
as concerns as it is more widely deployed in society. See Michael L. Littman, Ifeoma
Ajunwa, Guy Berger, et al., Gathering Strength, Gathering Storms: The One Hundred Year Study
on Artificial Intelligence (AI100) 2021 Study Panel Report (Stanford, Calif.: Stanford Universi-
ty, 2021). Separately, at the AI Index, we provide an annual view of developments in AI.
See Artificial Intelligence Index, Stanford University Human-Centered Artificial Intel-
ligence, https://aiindex.stanford.edu/.
11 B. J. Copeland, “Artificial Intelligence,” Britannica, https://www.britannica.com/
technology/artificial-intelligence (last edited December 14, 2021).
12 David Poole, Alan Mackworth, and Randy Goebel, Computational Intelligence: A Logical
Approach (New York: Oxford University Press, 1998).
13 The goal-orientation in this second type of definition is also considered by some to be limiting,
hence variations such as Stuart Russell and Peter Norvig’s, that focus on perceiving and
acting. See Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th ed.
(Hoboken, N.J.: Pearson, 2021). See also Shane Legg and Marcus Hutter, “A Collection
of Definitions of Intelligence,” arXiv (2007), https://arxiv.org/abs/0706.3639.
14 See Marvin Minsky, The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and
the Future of the Human Mind (New York: Simon & Schuster, 2007).
15 See Stephen Cave and Kanta Dihal, “Ancient Dreams of Intelligent Machines: 3,000 Years
of Robots,” Nature 559 (7715) (2018).
16 Dennett, “When Philosophers Encounter Artificial Intelligence.”
17 John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon,
“A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,”
August 31, 1955, http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf.
18 Many of the pioneers of the current AI spring and their views are featured in Martin
Ford, Architects of Intelligence: The Truth about AI from the People Building It (Birmingham,
United Kingdom: Packt Publishing, 2018).
19 Yoshua Bengio, Yann LeCun, and Geoffrey Hinton, "Deep Learning for AI," Communi-
cations of the ACM 64 (7) (2021). Reinforcement learning adds the notion of learning
through sequential experiences that involve state transitions and making use of rein-
forcing rewards. See Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An
Introduction (Cambridge, Mass.: MIT Press, 2018).
20 Hubert L. Dreyfus and Stuart E. Dreyfus, “Making a Mind Versus Modeling the Brain:
Artificial Intelligence Back at a Branchpoint,” Dædalus 117 (1) (Winter 1988): 15–44,
https://www.amacad.org/sites/default/files/daedalus/downloads/Daedalus_Wi98_
Artificial-Intelligence.pdf.
21 For a view on trends in performance versus benchmarks in various AI subfields, see
Chapter 2 in Human-Centered Artificial Intelligence, Artificial Intelligence Index Report 2022
(Stanford, Calif.: Stanford University, 2022), https://aiindex.stanford.edu/wp-content/
uploads/2022/03/2022-AI-Index-Report_Master.pdf.
22 At the time of developing this volume (2020–2021), the most well-known large language
models included OpenAI’s GPT-3, Google’s LaMDA, Microsoft’s MT-NLG, and Deep-
Mind’s Gopher. These models use transformer architectures first described in Ashish
Vaswani, Noam Shazeer, Niki Parmar, et al., “Attention Is All You Need,” arXiv (2017),
https://arxiv.org/abs/1706.03762.
23 Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al., “On the Opportunities and Risks
of Foundation Models,” arXiv (2021), https://arxiv.org/abs/2108.07258.
24 Ibid. See also Laura Weidinger, John Mellor, Maribeth Rauh, et al., “Ethical and Social
Risks of Harm from Language Models,” arXiv (2021), https://arxiv.org/abs/2112.04359.
On toxicity, see Samuel Gehman, Suchin Gururangan, Maarten Sap, et al., “RealToxicity-
Prompts: Evaluating Neural Toxic Degeneration in Language Models,” in Findings of the
Association for Computational Linguistics: EMNLP 2020 (Stroudsburg, Pa.: Association for
Computational Linguistics, 2020), 3356–3369; and Albert Xu, Eshaan Pathak, Eric Wal-
lace, et al., “Detoxifying Language Models Risks Marginalizing Minority Voices,” arXiv
(2021), https://arxiv.org/abs/2104.06390.
25 Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvi-
nick, “Neuroscience-Inspired Artificial Intelligence,” Neuron 95 (2) (2017); and Alexis
T. Baria and Keith Cross, “The Brain Is a Computer Is a Brain: Neuroscience’s Inter-
nal Debate and the Social Significance of the Computational Metaphor,” arXiv (2021),
https://arxiv.org/abs/2107.14042.
26 See Littman, Gathering Strength, Gathering Storms.
27 For an overview of trends in AI technical and ethics issues as well as AI regulation and
policy, see Chapters 3 and 6, respectively, in Human-Centered Artificial Intelligence, Ar-
tificial Intelligence Index Report 2022. See also Mateusz Szczepański, Michał Choraś, Marek
Pawlicki, and Aleksandra Pawlicka, “The Methods and Approaches of Explainable Ar-
tificial Intelligence,” in Computational Science–ICCS 2021, ed. Maciej Paszynski, Dieter
Kranzlmüller, Valeria V. Krzhizhanovskaya, et al. (Cham, Switzerland: Springer, 2021).
See also Cynthia Dwork and Aaron Roth, “The Algorithmic Foundations of Differential
Privacy,” Foundations and Trends in Theoretical Computer Science 9 (3–4) (2014).
28 For an overview of the types of efforts as well as three case studies (Microsoft, OpenAI,
and OECD’s observatory), see Jessica Cussins Newman, Decision Points in AI Governance:
Three Case Studies Explore Efforts to Operationalize AI Principles (Berkeley: Center for Long-
Term Cybersecurity, UC Berkeley, 2020).
29 See Chapter 6, “Diversity in AI,” in Human-Centered Artificial Intelligence, Artificial In-
telligence Index Report 2021 (Stanford, Calif.: Stanford University, 2021), https://aiindex
.stanford.edu/wp-content/uploads/2021/11/2021-AI-Index-Report_Master.pdf. See also
Sarah Myers West, Meredith Whittaker, and Kate Crawford, “Discriminating Systems:
Gender, Race and Power in AI” (New York: AI Now Institute, 2019).
NBER Working Paper 24196 (Cambridge, Mass.: National Bureau of Economic Research,
2018); David Autor, David Mindell, and Elisabeth Reynolds, The Work of the Future:
Building Better Jobs in an Age of Intelligent Machines (Cambridge, Mass.: MIT Work of the
Future, 2020); and Erik Brynjolfsson, “The Problem Is Wages, Not Jobs,” in Redesigning
AI: Work, Democracy, and Justice in the Age of Automation, ed. Daron Acemoglu (Cambridge,
Mass.: MIT Press, 2021).
42 See Daron Acemoglu and Pascual Restrepo, “The Wrong Kind of AI? Artificial Intel-
ligence and the Future of Labor Demand,” NBER Working Paper 25682 (Cambridge,
Mass.: National Bureau of Economic Research, 2019); and Bryan Wilder, Eric Horvitz,
and Ece Kamar, “Learning to Complement Humans,” arXiv (2020), https://arxiv.org/
abs/2005.00582.
43 Susskind provides a broad survey of many of the arguments that AI has changed every-
thing with respect to jobs. See Daniel Susskind, A World Without Work: Technology, Auto-
mation, and How We Should Respond (New York: Metropolitan Books, 2020).
44 From their 1957 lecture in Herbert A. Simon and Allen Newell, “Heuristic Problem Solv-
ing: The Next Advance in Operations Research," Operations Research 6 (1) (1958).
45 John Maynard Keynes, “Economic Possibilities for Our Grandchildren,” in Essays in Per-
suasion (New York: Harcourt Brace, 1932), 358–373.
46 See our most recent annual AI Index report, Human-Centered Artificial Intelligence, Arti-
ficial Intelligence Index Report 2022. See also Daniel Castro, Michael McLaughlin, and Eline
Chivot, “Who Is Winning the AI Race: China, the EU or the United States?” Center for
Data Innovation, August 19, 2019; and Daitian Li, Tony W. Tong, and Yangao Xiao, “Is
China Emerging as the Global Leader in AI?” Harvard Business Review, February 18, 2021.
47 Tania Babina, Anastassia Fedyk, Alex Xi He, and James Hodson, “Artificial Intelligence,
Firm Growth, and Product Innovation” (2021), https://papers.ssrn.com/sol3/papers
.cfm?abstract_id=3651052.
48 See Amanda Askell, Miles Brundage, and Gillian Hadfield, “The Role of Cooperation in
Responsible AI Development,” arXiv (2019), https://arxiv.org/abs/1907.04534.
49 See Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher, The Age of AI: And Our
Human Future (Boston: Little, Brown and Company, 2021).
50 Issues that we explored in a Council on Foreign Relations Taskforce on Innovation and
National Security. See James Manyika and William H. McRaven, Innovation and National
Security: Keeping Our Edge (New York: Council on Foreign Relations, 2019).
51 For an assessment of the potential contributions of AI to many of the global develop-
ment challenges, as well as gaps and risks, see Michael Chui, Martin Harryson, James
Manyika, et al., “Notes from the AI Frontier: Applying AI for Social Good” (New York:
McKinsey Global Institute, 2018). See also Ricardo Vinuesa, Hossein Azizpour, Iolanda
Leita, et al., “The Role of Artificial Intelligence in Achieving the Sustainable Develop-
ment Goals,” Nature Communications 11 (1) (2022).
52 Timnit Gebru, Jamie Morgenstern, Briana Vecchione, et al., “Datasheets for Datasets,”
arXiv (2021), https://arxiv.org/abs/1803.09010.
53 See also Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at
the New Frontier of Power (New York: Public Affairs, 2019).
54 See Dennett, “When Philosophers Encounter Artificial Intelligence.”
55 Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of
Artificial Intelligence, 2nd ed. (Abingdon-on-Thames, United Kingdom: Routledge, 2004).
56 Timnit Gebru, “Race and Gender,” in The Oxford Handbook of Ethics of AI, ed. Markus D.
Dubber, Frank Pasquale, and Sunit Das (Oxford: Oxford University Press, 2020).
57 Blaise Agüera y Arcas, Margaret Mitchell, and Alexander Todorov, “Physiognomy in the
Age of AI” (forthcoming).
58 See Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias: There’s
Software Used across the Country to Predict Future Criminals. And It’s Biased against
Blacks,” ProPublica, May 23, 2016; Virginia Eubanks, Automating Inequality: How High-
Tech Tools Profile, Police, and Punish the Poor (New York: St. Martin’s Press, 2018); and
Maximilian Kasy and Rediet Abebe, “Fairness, Equality, and Power in Algorithmic
Decision-Making,” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and
Transparency (New York: Association for Computing Machinery, 2021).
59 See Sorelle A. Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian, “On the
(im)Possibility of Fairness,” arXiv (2016), https://arxiv.org/abs/1609.07236; and Arvind
Narayanan, “Translation Tutorial: 21 Fairness Definitions and their Politics,” in Pro-
ceedings of the 2018 Conference on Fairness, Accountability, and Transparency (New York: Asso-
ciation for Computing Machinery, 2018).
60 Dennett, “When Philosophers Encounter Artificial Intelligence.”
61 See Sendhil Mullainathan, “Algorithmic Fairness and the Social Welfare Function,” in
Proceedings of the 2018 ACM Conference on Economics and Computation (New York: Associa-
tion for Computing Machinery, 2018). See also Lily Hu and Yiling Chen, “Fair Classifi-
cation and Social Welfare,” in Proceedings of the 2020 Conference on Fairness, Accountability,
and Transparency (New York: Association for Computing Machinery, 2020), 535–545;
and Hoda Heidari, Claudio Ferrari, Krishna Gummadi, and Andreas Krause, “Fair-
ness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making,”
Advances in Neural Information Processing Systems 31 (2018): 1265–1276.
62 See discussion in Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford
University Press, 2014); and Stuart Russell, Human Compatible: Artificial Intelligence and the
Problem of Control (New York: Viking, 2019).
63 Iason Gabriel, “Artificial Intelligence, Values, and Alignment,” Minds and Machines 30 (3)
(2020).
64 At an AI conference organized by the Future of Life Institute, we generated a list of prior-
ities for robust and beneficial AI. See Stuart Russell, Daniel Dewey, and Max Tegmark,
“Research Priorities for Robust and Beneficial Artificial Intelligence,” AI Magazine,
Winter 2015. See also the issues raised in Littman, Gathering Strength, Gathering Storms.
65 Such a working list in response to the 2050 thought experiment can be found at “AI2050’s
Hard Problems Working List,” https://drive.google.com/file/d/1IoSEnQSzftuW9-Rik
M760JSuP-Heauq_/view (accessed February 17, 2022).