
CHAPTER 11

Introduction: What Are the Goals of AI Research?
When a person delegates some task to an agent, be it artificial or human, the result of
that task is still the responsibility of the delegating person, who is the one who will be
liable if things do not go as expected. . . . an interactive system that is autonomous and
adaptable is hard to verify and predict, which in turn can lead to unexpected activity.
Virginia Dignum, “Responsibility and Artificial Intelligence,” in The Oxford Handbook of Ethics
of AI (2020), edited by Markus D. Dubber, Frank Pasquale, and Sunit Das

Goals for AI science and engineering research were proposed at least
sixty years ago, when early conferences brought together those who
believed in pursuing Alan Turing’s question “Can Machines Think?”1
A simple description is that AI science research is devoted to getting computers
to do what humans do, matching or exceeding human perceptual, cognitive,
and motor abilities.
A starting point is satisfying the Turing Test, which gives observers a key-
board and display to have a typed conversation. If the observers cannot tell if
they are connected to a human or a machine, then the machine has satisfied
the Turing Test. Many variants have been developed over the years, such as cre-
ating computer-generated images which are indistinguishable from photos of
people. Another variant is to make a robot that speaks, moves, and looks like a
human. Stuart Russell, a University of California-Berkeley computer scientist,
energetically embraced the dual goals of human emulation science and soci-
etally beneficial innovations.2 He wrote that AI is “one of the principal avenues
for understanding how human intelligence works but also a golden opportunity
to improve the human condition—to make a far better civilization.” Russell sees
“problems arising from imbuing machines with intelligence.”3
Science research on perceptual, cognitive, and motor abilities includes
pattern recognition (images, speech, facial, signal, etc.), natural language pro-
cessing, and translation from one natural language to another. Other research
challenges have been to make accurate predictions, get robots to perform as
well as a person, and have applications recognize human emotions so the ap-
plication can respond appropriately. Yet another goal is to play popular games,
such as checkers, chess, Go, or poker, as well as or better than human players.
As the early science research evolved, useful innovations became possible,
but the science research that emphasized symbolic manipulation gave way to
statistical approaches, based on machine learning and deep learning, which
trained neural networks from existing databases. Neural network strategies
were refined in later implementations of generative adversarial networks
(GANs), convolutional neural networks (CNN), recurrent neural networks
(RNN), inverse reinforcement learning (IRL), and newer foundation models
and their variants.
The visionary aspirations of AI researchers have led to a range of inspiring
projects. Proponents claim that the emergence of AI has been an historical turn-
ing point for humanity, showing great promise. Critics point out that many
projects have failed, as is common with ambitious new research directions, but
other projects have led to widely used applications, such as optical character
recognition, speech recognition, and natural language translation. While crit-
ics say that AI innovations remain imperfect, nevertheless many applications
are impressive and commercially successful.
Bold aspirations can be helpful, but another line of criticism is that the
AI science methods have failed, giving way to more traditional engineering
solutions, which have succeeded. For example, IBM’s famed Deep Blue chess-
playing program, which defeated world champion Garry Kasparov in 1997, is
claimed as an AI success. However, the IBM researcher who led the Deep Blue
team, Feng-hsiung Hsu, has made an explicit statement that they did not use AI
methods.4 Their brute force hardware solution used specialized chips to rapidly
explore moves that each player could make, up to twenty moves ahead.
As another example, in many business applications AI-guided knowledge-
based expert systems have failed, but carefully engineered rule-based systems
with human-curated rule sets have succeeded.5 For example, many companies
maintain complex pricing and discount policies that vary by region, product,
and purchase volume, with favored customers getting lower prices. Keeping
track of these so that all sales personnel quote the same price is important in
maintaining trust.
Recent criticisms have focused on the brittleness of deep learning methods,
which may work well in laboratory experiments but fail in real world appli-
cations.6 New York University professors Gary Marcus and Ernest Davis have
reported on the high expectations of early AI researchers, like Marvin Minsky,
John McCarthy, and Herb Simon, which have not been realized.7 Herb Simon’s
memorable 1965 prediction was that “machines will be capable, within twenty
years, of doing any work a man [or woman] can do.” Marcus and Davis de-
scribe the many failures of AI systems, which make mistakes in interpreting
photos, become racist chatbots, fail at healthcare recommendations, or crash
self-driving cars into fire trucks. However, they remain optimistic about the
future of AI, which they believe will be brightened by the development of
common-sense reasoning. They call for a reboot, based on more and better AI.
Science writer Mitchell Waldrop says, “there is no denying that deep learning
is an incredibly powerful tool,” but he also describes failures that highlight “just
how far AI still has to go before it remotely approaches human capabilities.”8
Waldrop closes with possible solutions by improving deep learning strategies,
expanding the training data sets, and taking a positive view of the challenges
that lie ahead.
Even after sixty years, AI is in its early days. I want AI to succeed, and I see
the way forward as adopting HCAI design processes that involve stakeholders in
design discussions and iterative testing for user interfaces and control panels.
The second component is to make products with more transparency and human
control over the algorithms. I envision explainable user interfaces, audit trails to
analyze failures and near misses, and independent oversight to guide decisions
(Part 4). In short, a new synthesis of human-centered design thinking with the
best of AI methods will do much to deliver meaningful technologies that benefit
people.
These debates about AI research dramatically influence government re-
search funding, major commercial projects, academic research and teaching,
and public impressions. This chapter simplifies the many goals of AI researchers
into these two, science and innovation, and then describes four pairs of design
possibilities which could be fruitfully combined (Figure 11.1). Combined de-
signs may result in more automation in some contexts, more human control
in others. Combined designs will also have more nuanced choices over the fea-
tures that can be reliably done by a computer and those that should remain
under human control.
Two Grand Goals of AI Research

Science Goal                              Innovation Goal

Intelligent Agents                        Supertools
(Thinking Machine, Cognitive Actor,       (Extend Abilities, Empower Users,
Artificial Intelligence, Knowledgeable)   Enhance Human Performance)

                    Combined Designs

Teammates                                 Tele-bots
(Co-active Collaborator, Colleague,       (Steerable Instrument, Powerful Prosthetic,
Helpful Partner, Smart Co-worker)         Boost Human Perceptual & Motor Skills)

Assured Autonomy                          Control Centers
(Independent, Self-directed,              (Human Control & Oversight, Situation
Goal-setting, Self-monitored)             Awareness, Preventive Actions)

Social Robots                             Active Appliances
(Anthropomorphic, Android,                (Consumer-oriented, Wide Use, Low Cost,
Bionic, Bio-inspired, Humanoid)           Comprehensible Control Panels)

Fig 11.1 Terminology and design metaphors for the science goal and innovation goal,
with the possibility of combined designs that take the useful features from each goal.

The four pairs of design possibilities are a guide to what might work in differ-
ent contexts or they can suggest combined designs that lead to reliable, safe, and
trustworthy systems, especially for consequential and life-critical applications.
Design excellence can bring widespread benefits for users and society, such as in
business, education, healthcare, environmental preservation, and community
safety.
Chapter 12 describes the science goal of studying computational agents that
think, which often means understanding human perceptual, cognitive, and mo-
tor abilities so as to build computers that autonomously perform tasks as well
as or better than humans. It summarizes the innovation goal of developing
widely used products and services by using AI methods, which keep humans
in control. Both goals require science, engineering, and design research.
Chapter 13 focuses on ways to combine the best features of each goal. Those
who pursue the science goal build cognitive computers that they describe
as smart, intelligent, knowledgeable, and capable of thinking. The resulting
human-like products may succeed on narrow tasks, but these designs can
exacerbate the distrust, fears, and anxiety that many users have about their
computers. The innovation goal community believes that computers are best
designed to be supertools that amplify, augment, empower, and enhance hu-
mans. The combined strategy could be to design familiar HCAI user interfaces
with AI technologies for services such as text messaging suggestions and search
query completion. AI technologies would also enable internal operations to
manage storage and transmit optimally across complex networks.
Chapter 14 raises these questions: do designers benefit from thinking of
computers as being teammates, partners, and collaborators? When is it helpful
and when is there a danger in assuming human–human interaction is a good
model for human–robot interaction? Innovation goal researchers and devel-
opers want to build tele-bots that extend human capabilities while providing
superhuman perceptual and motor support, thereby boosting human perfor-
mance, while allowing human–human teamwork to succeed. The combined
strategy could be to use science goal algorithms to implement automatic inter-
nal services that support the innovation goal of human control. This approach
is implemented in the many car-driving technologies such as lane following,
parking assist, and collision avoidance. The idea is to give users the control they
desire by putting “AI inside,” which provides valuable services to users based
on machine and deep learning algorithms. In this way users have the benefit
of AI optimizations, an understanding of what is happening, a clear model of
what will happen next, and the chance to take control if needed.
Chapter 15 discusses the science goal for products and services that are au-
tonomous with no human intervention. Rather than assured autonomy systems
acting alone, innovation goal researchers want to support control centers and
control panels (sometimes called human-in-the-loop or humans-on-the-loop),
in which humans operate highly automated devices and systems. The combined
strategy could be to have science goal algorithms provide highly automated fea-
tures, with user interface designs that support human control and oversight.
This combined strategy is in use in many NASA, industrial, utility, military,
and air-traffic control centers, where rich forms of AI are used to optimize per-
formance but the operators have a clear mental model of what will happen next.
Predictable behavior from machines is highly valued by human operators.
Chapter 16 covers the many attempts by science goal advocates to build so-
cial robots over hundreds of years, which have attracted widespread interest.
At the same time, active appliances, mobile devices, and kiosks are widespread
consumer successes. Innovation goal champions prefer designs that are seen
as steerable instruments, which increase flexibility or mobility, while being
expendable in rescue, disaster, and military situations. The combined design
could be to start with human-like services, which have proven acceptance, such
as voice-operated virtual assistants. These services could be embedded in ac-
tive appliances which give users control of features that are important to them.
Innovation goal thinking also leads to better than human performance in ac-
tive appliances, such as using four-wheeled or treaded robots to provide the
mobility over rough terrain or floods, maneuverability in tight spaces, and
heavy-lifting capacity. Active appliances can also have superhuman sensors,
such as infrared cameras or sensitive microphones, and specialized effectors,
such as drills on Mars Rovers and cauterizing tools on surgical robots.
Awareness of the different goals can stimulate fresh thinking about how to
deal with different contexts by creating combined designs that lead to reliable,
safe, and trustworthy systems.9 Chapter 17 summarizes the design trade-offs so
as to find happy collaborations between the science goal and innovation goal
communities.
CHAPTER 12

Science and Innovation Goals

AI researchers and developers have offered many goals such as this one
in the popular textbook by Stuart Russell and Peter Norvig: “1. Think
like a human. 2. Act like a human. 3. Think rationally. 4. Act ratio-
nally.”1 A second pair of textbook authors, David Poole and Alan Mackworth,
write that the science goal is to study “computational agents that act intelli-
gently.” They want to “understand the principles that make intelligent behavior
possible in natural or artificial systems . . . by designing, building, and experi-
menting with computational systems that perform tasks commonly viewed as
requiring intelligence.”2
Others see AI as a set of tools to augment human abilities or extend their
creativity. For simplicity, I focus on two goals: science and innovation. Of
course, some researchers and developers will be sympathetic to goals that fall in
both communities or even other goals that fall in between. The sharply defined
science and innovation goals are meant to clarify important distinctions, but
individuals are likely to have more complex beliefs.

Science Goal
A shortened version of the science goal is to understand human perceptual,
cognitive, and motor abilities so as to build computers that perform tasks as
well as or better than humans. This goal includes the aspirations for social
robots, common-sense reasoning, affective computers, machine consciousness,
and artificial general intelligence (AGI).
Those who pursue the science goal have grand scientific ambitions and
understand that it may take 100 or 1000 years, but they tend to believe that
researchers will eventually be able to understand and model humans faithfully.3
Many researchers in this AI community believe that humans are machines,
maybe very sophisticated ones, but they see building exact emulations of hu-
mans as a realistic and worthwhile grand challenge. They are cynical about
claims of human exceptionalism or that humans are a separate category from
computers. The influential AI 100 Report states that “the difference between an
arithmetic calculator and a human brain is not one of kind [emphasis added],
but of scale, speed, degree of autonomy, and generality,” which assumes that
human and computer thinking are in the same category.4
The desire to build computers that match human abilities is an ancient and
deep commitment. Elizabeth Broadbent, a medical researcher at the Univer-
sity of Auckland, NZ, wrote thoughtfully about “Interactions with robots: The
truths we reveal about ourselves.”5 She pointed out: “Humans have a funda-
mental tendency to create, and the ultimate creation is another human.” This
phrase could be playfully interpreted as a desire to be a typical human parent,
but it is also a barbed reminder that the motivation of some AI researchers is
to create an artificial human.
The desire to create human-like machines influences the terminology and
metaphors that the science goal community feels strongly about. They often
describe computers as smart machines, intelligent agents, knowledgeable
actors, and are attracted to the idea that computers are learning and require
training, much as a human child learns and is trained. Science goal researchers
often include performance comparisons between humans and computers,
such as the capability of oncologists versus AI programs to identify breast
cancer tumors. Journalists, especially headline writers, are strongly attracted
to this competition idea, which makes for great stories such as “How Robot
Hands are Evolving to Do What Ours Can” (New York Times, July 30, 2018) or
“Robots Can Now Read Better than Humans, Putting Millions of Jobs at Risk”
(Newsweek, January 15, 2018). In their book Rebooting AI, Gary Marcus and
Ernest Davis worry that “[t]he net effect of a tendency of many in the media
to over report technology results is that the public has come to believe that AI
is much closer to being solved than it really is.”6
Many science goal researchers and developers believe that robots can be
teammates, partners, and collaborators and that computers can be autonomous
systems that are independent, capable of setting goals, self-directed, and
self-monitoring. They see automation as merely carrying out requirements
anticipated by the programmers/designers, while autonomy is a step beyond
automation to develop novel goals based on new sensor data. Science goal
protagonists promote embodied intelligence through social (human-like or
anthropomorphic) robots, which are bio-inspired (or bionic) to resemble hu-
man forms.
Some researchers, legal scholars, and ethicists envision a future in which
computers will have responsibility and legal protection of their rights, much as
individual humans and corporations do.7 They believe that computers and social
robots can be moral and ethical actors and that these qualities can be built into
algorithms. This controversial topic is beyond the scope of this book, which
focuses on design issues to guide near-future research and develop the next
generation of technology.

Innovation Goal
The innovation goal, some would call it the engineering goal, drives researchers
to develop widely used products and services by applying HCAI methods. This
goal typically favors tool-based metaphors, tele-bots, active appliances, and
control centers. These applications are described as instruments, apps, appli-
ances, orthotics, prosthetics, utensils, or implements, but I’ll use the general
term supertool. These AI-guided products and services are built into clouds,
websites, laptops, mobile devices, home automation, kiosks, flexible manufac-
turing, and virtual assistants. A science goal airport assistant might be a mobile
human-like robot that greets travelers at the entrance to guide them to their
check-in and departure gate. Some airport scenarios show robots who engage
in natural language conversation, offering help and responding to questions. By
contrast, an innovation goal airport supertool would be a smartphone app with
a map tailored to guide travelers, a list of security line wait times, and current
information on flights.
Researchers and developers who pursue innovations study human behavior
and social dynamics to understand user acceptance of products and services.
These researchers are typically enthusiastic about serving human needs, so they
often partner with professionals to work on authentic problems and take pride
in widely adopted innovations. They regularly begin by clarifying what the tasks
are, who the users are, and the societal/environmental impacts.8
The innovation goal community frequently supports high levels of human
control and high levels of computer automation, as conveyed by the two-
dimensional HCAI framework from Part 2. They understand that there are
innovations that require rapid fully automatic operation (airbag deployment,
pacemakers, etc.) and there are applications in which users prefer full human
control (bicycle riding, piano playing, etc.). Between these extremes lies a
rich design space that combines high levels of human control and high lev-
els of automation. Innovation researchers normally recognize the dangers of
excessive automation and excessive human control, which were described in
the HCAI framework of Part 2. These HCAI researchers and developers in-
troduce interlocks that prevent human mistakes and controls that prevent
computer failures, while striving to find a balance that produces reliable, safe,
and trustworthy systems.
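To make the interlock idea concrete, here is a minimal sketch in Python; the class, the function names, and the 0.95 confidence threshold are invented for illustration, not a design from this book. Consequential or low-confidence actions are routed to a person, while routine, high-confidence actions proceed automatically:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionRequest:
    description: str
    model_confidence: float  # 0.0-1.0, reported by the AI component
    consequential: bool      # designated in advance by human policy-makers

def needs_human_approval(request: ActionRequest, threshold: float = 0.95) -> bool:
    """Interlock policy: no silent automation for risky or uncertain actions."""
    return request.consequential or request.model_confidence < threshold

def execute(request: ActionRequest, approve: Callable[[ActionRequest], bool]) -> str:
    if needs_human_approval(request):
        # The machine recommends; the person retains final authority.
        verdict = "executed with approval" if approve(request) else "blocked by human"
        return f"{verdict}: {request.description}"
    # Routine, high-confidence actions run automatically, as users expect.
    return f"executed automatically: {request.description}"

print(execute(ActionRequest("fix typo", 0.99, False), lambda r: True))
print(execute(ActionRequest("change dosage", 0.99, True), lambda r: False))
```

Under such a policy, rapid life-critical actions like airbag deployment would be designated fully automatic in advance by human policy-makers, which is itself a form of human control.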
The desire to make commercially successful products and services means
that human–computer interaction methods such as design thinking, obser-
vation of users, user experience testing, market research, and continuous
monitoring of usage are frequent processes employed by the innovation goal
community. They recognize that users often prefer designs that are comprehen-
sible, predictable, and controllable because they are eager to increase their own
mastery, self-efficacy, and creative control. They accept that humans need to
be “in-the-loop” and even “on-the-loop,” respect that users deserve explainable
systems, and recognize that only humans and organizations are responsible,
liable, and accountable.9 They are sympathetic to audit trails, product logs,
or flight data recorders to support retrospective forensic analysis of failures
and near misses so as to improve reliability and safety, especially for life-
critical applications such as pacemakers, self-driving cars, and commercial
planes.10
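As a rough illustration of what such a product log might look like, here is a minimal sketch; the JSON-lines format and the field names are assumptions for illustration, not an industry standard:

```python
import json
import time

class AuditTrail:
    """Append-only product log in the spirit of a flight data recorder."""

    def __init__(self, path: str):
        self.path = path

    def record(self, actor: str, event: str, **details) -> None:
        entry = {"time": time.time(), "actor": actor,
                 "event": event, "details": details}
        # Appending (never rewriting) preserves the order of events for
        # retrospective forensic analysis of failures and near misses.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

# Log automation and human actions alike, so reviewers can later
# reconstruct who (or what) acted, and when.
log = AuditTrail("product_log.jsonl")
log.record("automation", "collision_warning", distance_m=4.2)
log.record("human", "brake_override", pedal_percent=90)
```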
Sometimes those pursuing innovations start with science-based ideas and
then do what is necessary to create successful products and services. For
example, science-based speech recognition research was an important foun-
dation of successful virtual assistants such as Apple’s Siri, Amazon’s Alexa,
Google’s Home, and Microsoft’s Cortana. However, a great deal of design was
necessary to make successful products and services that deftly guided users
and gracefully dealt with failures. These services deal with common requests
for weather, news, and information by steering users to existing web resources,
like Wikipedia, news agencies, and dictionaries.11
Similarly, natural language translation research was integrated into well-
designed user interfaces for successful websites and services. A third example
is that image understanding research enabled automatic creation of alt-tags,
which are short descriptions of images that enable users with disabilities and
others to know what is in a website image.
Autonomous social robots with bipedal locomotion and emotionally
responsive faces, inspired by the science goal, make appealing demonstrations
and videos. However, these designs often give way to four-wheeled rovers,
tread-based vehicles, or tele-operated drones without faces, which are needed
for success in the innovation goal.12 The language of robots may remain pop-
ular, as in “surgical robots,” but these are tele-bots that allow surgeons to do
precise work in tight spaces inside the human body.
Many science goal researchers believe that a general purpose social robot can
be made, which can serve tea to elders, deliver packages, and perform rescue
work. By contrast innovation goal researchers recognize that they have to tune
their solutions for each context of use. Nimble hand movements, heavy lifting,
or movement in confined spaces require specialized designs which are not at
all like a generic multipurpose human hand.
Lewis Mumford’s book Technics and Civilization has a chapter titled “The
Obstacle of Animism” in which he describes how first attempts at new tech-
nologies are misled by human and animal models.13 He uses the awkward term
“dissociation” to describe the shift from human forms to more useful designs,
such as recognizing that four wheels have large advantages over two feet in
transporting heavy loads over long distances. Similarly, airplanes have wings,
but they do not flap like bird wings. Mumford stressed that “the most ineffective
kind of machine is the realistic mechanical imitation of a man or other animal.”
He continues with this observation: “circular motion, one of the most useful and
frequent attributes of a fully developed machine, is, curiously, one of the least
observable motions in nature” and concludes “for thousands of years animism
has stood in the way of . . . development.”14
As Mumford suggests, there are better ways to serve people’s needs than
making a human-like device. Many innovation goal researchers seek to support
human connections by collaborative software, such as Google Docs to enable
collaborative editing, shared databases to foster scientific partnerships, and
improved media for better communications. For example, tele-conferencing
services, like Zoom, Webex, and Microsoft Teams, expanded dramatically dur-
ing the COVID crisis as universities shifted to online instruction with live
instructors trying innovative ways to create engaging learning experiences,
complemented by automated massive online open courses (MOOCs). MOOC
designs, such as those from Khan Academy, edX, or Coursera, led learners
through their courses by providing regular tests that gave learners feedback
about their progress, enabling them to repeat material that they still had to
master. Learners could choose to pace their progress and challenge themselves
with more difficult material when they were ready.
The COVID crisis also led businesses to quickly expand working from home
(WFH) options for their employees so they could continue to operate. Sim-
ilarly, family, friends, and communities adopted services, such as Zoom, to
have dinners together, weddings, funerals, town meetings, and much more.
Some talk about Zoom fatigue from hours of online meetings, while others find
it liberating to have frequent conversations with distant colleagues or family.
New forms of online interaction are quickly emerging that have new game-like
user interfaces, like Kumospace and Gather.town, which allow more flexibility,
and meeting organizers are finding creative ways to make engaging large group
events, while supporting small group side discussions.
Many forms of collaboration are supported by social media platforms, such
as Facebook, Twitter, and Weibo, which employ AI-guided services. These
platforms have attracted billions of users, who enjoy the social connections,
benefit from the business opportunities, connect communities in productive
ways, and support teamwork as in citizen science projects. However, many
users have strong concerns about privacy and security as well as the misuse
of social media by political operatives, criminals, terrorists, and hate groups to
spread fake news, scams, and dangerous messages. Privacy invasion, massive
data collection about individuals, and surveillance capitalism excesses are sub-
stantial threats. AI algorithms and user interface designs have contributed to
these abuses, but they also can contribute to the solutions, which must include
human reviewers and independent oversight boards.
Mumford’s thinking about serving human needs leads naturally to new pos-
sibilities: orthotics, such as eyeglasses, to improve performance; prosthetics, such
as replacements for missing limbs; and exoskeletons that increase a human’s
capacity to lift heavy objects. Users of orthotics, prosthetics, or exoskeletons
see these devices as amplifying their abilities.
The next four chapters address the four pairs of metaphors that are guid-
ing AI and HCAI researchers, developers, business leaders, and policy-makers.
They offer diverse perspectives, which can be useful in different contexts, and
can guide designers in combining features that put these metaphors to work.
CHAPTER 13

Intelligent Agents and Supertools

By the 1940s, as modern electronic digital computers emerged, the
descriptions included “awesome thinking machines” and “electronic
brains.” George Washington University Professor Dianne Martin’s ex-
tensive review of survey data includes her concern:
The attitude research conducted over the past 25 years suggests that the “awesome
thinking machine” myth may have in fact retarded public acceptance of computers in
the work environment, at the same time that it raised unrealistic expectations for easy
solutions to difficult social problems.1
In 1950, Alan Turing provoked huge interest with his essay “Computing Ma-
chinery and Intelligence” in which he raises the question: “Can Machines
Think?”2 He proposed what has come to be known as the Turing Test or the
imitation game. His thoughtful analysis catalogs objections, but he ends with
“[w]e may hope that machines will eventually compete with men [and women]
in all purely intellectual fields.” Many artificial intelligence researchers who pur-
sue the science goal have taken up Turing’s challenge by developing machines
that are capable of carrying out human tasks such as playing chess, understand-
ing images, and delivering customer support. Critics of the Turing Test think
it is a useless distraction or publicity stunt with poorly described rules, but the
Loebner Prize has attracted participants and media attention since 1990 with
prizes for the makers of the winning program.3 The January 2016 issue of AI
Magazine was devoted to articles with many new forms of Turing Tests.4 Me-
dia theorist Simone Natale of the University of Turin sees the Turing Test as a
banal deception which builds on the readiness of people to accept devices that
simulate intention, sociality, and emotions.5
An early, more nuanced vision came in J. C. R. Licklider’s 1960 description
of “man–computer symbiosis,” which acknowledged differences between hu-
mans and computers, but stated that they would be cooperative interaction
partners with computers doing the routine work and humans having insights
and making decisions.6
The widespread use of terms such as smart, intelligent, knowledgeable, and
thinking helped propagate terminology such as machine learning, deep learn-
ing, and the idea that computers were being trained. Neuroscience descriptions
of human brains as neural networks were taken up enthusiastically as a metaphor
for describing AI methods, further spreading the idea that computers were like
people.
IBM latched onto the term cognitive computing to describe their work on
the Watson system. However, IBM’s Design Director for AI transformation re-
ported in 2020 that it “was just too confusing for people to understand” and
added that “we say AI, but even that, we clarify as augmented intelligence.”7
Google has long branded itself as strong on AI, but their current effort em-
phasizes “People and AI Research” (PAIR).8 It appears that those pursuing the
innovation goal increasingly recognize that the suggestion of computer intel-
ligence should be tempered with a human-centered approach for commercial
products and services.
Journalists have often been eager proponents of the idea that computers
were thinking and that robots would be taking our jobs. Cover stories with
computer-based characters have been featured in popular magazines, such
as Newsweek in 1980, which reported on “Machines That Think,” and Time
Magazine in 1996, which asked “Can Machines Think?”
Graphic artists have been all too eager to show thinking machines, especially
featuring human-like heads and hands, which reinforce the idea that humans
and computers are similar. Common themes were a robot hand reaching out to
grasp a human hand and a robotic human sitting in the pose of Auguste Rodin’s
The Thinker sculpture. Popular culture in the form of Hollywood movies has
offered sentient computer characters, such as HAL in the 1968 film 2001: A
Space Odyssey and C3PO in the 1977 Star Wars. Human-like robotic characters
have also played central roles in terrifying films such as The Terminator and
The Matrix, in charming films such as Wall-E and Robot & Frank, and in
thought provoking films such as Her and Ex Machina.9
Computers have been increasingly portrayed as independent actors or
agents that (“who”) could think, learn, create, discover, and communicate.
University of Toronto professor Brian Cantwell Smith highlights words that
should be avoided when discussing what computers can do, such as know,
read, explain, or understand.10 However, journalists and headline writers are at-
tracted to the idea of computer agency and human-like capabilities, producing
headlines such as:

Machines Learn Chemistry (ScienceDaily.com)
Hubble Accidentally Discovers a New Galaxy in Cosmic Neighborhood (NASA)
The Fantastic Machine That Found the Higgs Boson (The Atlantic)
Artificial Intelligence Finds Disease-Related Genes (ScienceDaily.com)

These headlines are in harmony with the more popular notion that comput-
ers are gaining capabilities to match or exceed humans.11 Other writers have
voiced the human-centered view that computers are supertools that can am-
plify, augment, empower, and enhance people (see Figure 11.1). Support for the
idea of computers as supertools comes from many sources, including Profes-
sor Daniela Rus, head of the MIT Computer Science and Artificial Intelligence
Laboratory, who has said: “It is important for people to understand that AI is
nothing more than a tool . . . [with] huge power to empower us.”12
The supertool community included early HCAI researchers, such as Dou-
glas Engelbart,13 whose vision of what it meant to augment human intellect
was shown in his famed demonstration at the 1968 Fall Joint Computer Con-
ference.14 New York Times technology journalist John Markoff traces the history
of artificial intelligence versus intelligence augmentation (AI versus IA), in
his popular book Machines of Loving Grace: The Quest for Common Ground
between Humans and Robots.15 He describes controversies, personalities, and
motivations, suggesting that there is a growing belief that there are productive
ways to pursue both the science and innovation goals.
These issues were the heart of my 1997 debates with MIT Media Lab’s Pattie
Maes. Our positions represented two views of what future computers would be
like, well before there were ubiquitous smartphones and 3 million touchscreen-
driven apps.16 My examples, based on the ideas of direct manipulation, showed
users operating control panels with buttons, checkboxes, and sliders, so as to
change visual displays with maps, textual lists, arrays of photos, bar charts, line
graphs, and scatterplots.17 By contrast, Maes described how “a software agent
knows the individual user’s habits, preferences, and interests.” She argued that
“a software agent is proactive. It can take initiative because it knows what your
interests are.” Simply put, I argued for designs which gave users a high level of
control of potent automation, while Maes believed that software agents could
reliably take over and do the work for the users.18 Even after twenty-five years
of progress that debate continues in ever-evolving forms.
Innovation goal developers are more likely to pursue designs of tool-like
products and services. They are influenced by many guidelines documents, such
as The Apple Human Interface Design Guidelines, which included two clear
principles: “User Control: . . . people—not apps—are in control . . . it’s usually
a mistake for the app to take over the decision-making” and “Flexibility: . . .
(give) users complete, fine-grained control over their work.”19 These guidelines
and others from IBM, Microsoft, and the US government20 have led to greater
consistency within and across applications, which makes it easier for users, es-
pecially those with disabilities, to learn new applications on laptops, mobile
devices, and web-based services.
On airplane flights it is common to see people using laptops for work, but I
was impressed when a blind woman with a white cane came to sit next to me.
Just after takeoff, she took out her laptop and began working on a business re-
port. She listened to the text through her earphones as she deftly revised and
formatted the document. When I spoke to her, she said she was head of acces-
sibility for the state of Utah. She confirmed that the accessibility guidelines and
features as implemented on common laptops enabled her to participate fully as
a professional.
The guidelines stressing user control, even for users with disabilities, have
led to widespread adoption of computing, but the attraction of robotic and
intelligent terminology and metaphors from the science goal remains promi-
nent, especially at the numerous AI conferences. The conference papers may
describe innovations but their approach is often to have a computer carry
out the task automatically, as in reading mammograms or self-driving cars.
However, there are strong innovation goal viewpoints which describe designs
in which humans operate supertools and active appliances. These viewpoints
are more likely to emerge at conferences such as Augmented Humans21 and
the dozens of human–computer interaction conferences run by groups such
as the ACM’s SIGCHI,22 User Experience Professionals Association,23 and
similar groups around the world. Events such as World Usability Day24 have
promoted academic, industry, and government activities. Combined ideas
come at conferences such as the ACM’s Intelligent User Interfaces.25
Application developers, who produce the 3 million applications in the Apple
and Google Play stores, have largely built tool-like user interfaces, even when
there is ample AI technology at work internally. These developers appreciate
that users often expect a device that is comprehensible, predictable, and under
their control.
The combined design could be to build science goal technologies for in-
ternal operations, while the users see empowering interfaces that give them
clear choices, as in GPS navigation systems, web search, e-commerce, and
recommender systems. AI-guided systems offer users suggestions in spelling
and grammar checkers, web search, and email writing. University of Washing-
ton professor Jeffrey Heer shows three ways of using AI-guided methods in
support of human control in more advanced applications such as data clean-
ing, exploratory data visualization, and natural language translation.26 Many
others have proposed similar strategies for giving users control panels to oper-
ate their AI-guided recommender systems, such as sliders to choose music or
check boxes to narrow e-commerce searches. These are covered in Chapter 19’s
section “Explainable User Interface.”
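A minimal sketch of that slider strategy, with invented feature names, scores, and weights: the AI-computed scores stay internal, while the user-set weights determine the final ordering, and the user still decides whether to accept any item:

```python
def rank(items, weights):
    """Order items by a user-weighted sum of internally computed AI scores."""
    def blended(item):
        return sum(weights[f] * item["scores"][f] for f in weights)
    return sorted(items, key=blended, reverse=True)

songs = [
    {"title": "Song A", "scores": {"energy": 0.9, "familiarity": 0.2}},
    {"title": "Song B", "scores": {"energy": 0.3, "familiarity": 0.8}},
]

# Slider positions chosen by the user: mellow and familiar over energetic.
user_weights = {"energy": 0.2, "familiarity": 0.8}
for song in rank(songs, user_weights):
    print(song["title"])  # Song B first, reflecting the user's settings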
A second way to combine the goals is seen in the familiar example of the
well-designed integration of automated features and human control of the cell
phone digital camera. These widely used devices employ AI features such as
high dynamic range lighting control, jitter removal, and automatic focus, but
give users control over composition, portrait modes, filters, and social media
postings.
A third way to combine goals is to provide supertools with added fea-
tures that come from intelligent agent designs. Recommender systems fit
this approach by using AI algorithms to recommend movies, books, spelling
corrections, and search query possibilities. Users benefit from the improved su-
pertools but remain in control, since they can choose whether or not to follow a
recommendation.27 Novel recommendations could be in the form of suggested
meals based on the contents of your refrigerator or dietary shifts based on what
you have eaten. Another class of recommenders have been called coaches, such
as reviews of your piano playing to indicate where you strayed from the music
score or feedback to show you when your knee should have been bent further
during yoga practice. These coaching suggestions are best presented when you
complete your effort and on request in a gentle way that makes it your choice
to accept or ignore.
In summary, there are ways to combine the enthusiasm for intelligent agents
with the idea of human-controlled supertools that are consistent, comprehen-
sible, predictable, and controllable. Skillful combinations of AI with HCAI
thinking can improve the value and acceptance of products and services.
Chapter 14 takes on the second pair of metaphors that guides design possi-
bilities: teammates and tele-bots.
CHAPTER 14

Teammates and Tele-bots

A common theme in designs for robots and advanced technologies is
that human–human interaction is a good model for human–robot in-
teraction,1 and that emotional attachment to embodied robots is an
asset.2 Many designers never consider alternatives, believing that the way peo-
ple communicate with each other, coordinate activities, and form teams is the
only model for design. The repeated missteps stemming from this assump-
tion do not deter others who believe that this time will be different, that the
technology is now more advanced, and that their approach is novel.
Numerous psychological studies by Clifford Nass and his team at Stanford
University showed that when computers are designed to be like humans, users
respond and engage in socially appropriate ways.3 Nass’s fallacy might be de-
scribed as this: since many people are willing to respond socially to robots, it is
appropriate and desirable to design robots to be social or human-like.
However, what Nass and colleagues did not consider was whether other
designs, which were not social or human-like, might lead to superior perfor-
mance. Getting beyond the human teammate idea may increase the likelihood
that designers will take advantage of unique computer features, including
sophisticated algorithms, huge databases, superhuman sensors, information-
abundant displays, and powerful effectors. I was pleased to find that in later
work with grad student Victoria Groom, Nass wrote: “Simply put, robots fail as
teammates.”4 They elaborated: “Characterizing robots as teammates indicates
that robots are capable of fulfilling a human role and encourages humans to
treat robots as human teammates. When expectations go unmet, a negative
response is unavoidable.”
Lionel Robert of the University of Michigan cautions that human-like robots
can lead to three problems: mistaken usage based on emotional attachment
to the systems, false expectations of robot responsibility, and incorrect beliefs
about appropriate use of robots.5 Still, a majority of researchers believe that
robot teammates and social robots are inevitable.6 That belief pervades the
human–robot interaction research community which “rarely conceptualized
robots as tools or infrastructure and has instead theorized robots predomi-
nantly as peers, communication partners or teammates.”7
Psychologist Gary Klein and his colleagues clarify ten realistic challenges to
making machines behave as effectively as human teammates.8 The challenges
include making machines that are predictable, controllable, and able to negoti-
ate with people about goals. The authors suggest that their challenges are meant
to stimulate research and also “as cautionary tales about the ways that technol-
ogy can disrupt rather than support coordination.” A perfect teammate, buddy,
assistant, or sidekick sounds appealing, but can designers deliver on this image
or will users be misled, deceived, and disappointed?9 Can users have the con-
trol inherent in a tele-bot while benefiting from the helpfulness suggested by
the teammate metaphor?
My objection is that human teammates, partners, and collaborators are very
different from computers. Instead of these terms, I prefer to use tele-bots to
suggest human controlled devices (see Figure 11.1). I believe that it is helpful
to remember that “computers are not people and people are not computers.”
Margaret Boden, a long-term researcher on creativity and AI at the University
of Sussex, makes an alternate but equally strong statement: “Robots are simply
not people.”10 I think the differences between people and computers include the
following:

Responsibility Computers are not responsible participants, neither legally
nor morally. They are never liable or accountable. They are a different category
from humans. This continues to be true in all legal systems and I think it will
remain so. Margaret Boden continues with a straightforward principle: “Hu-
mans, not robots, are responsible agents.”11 This principle is especially true in
the military, where chain of command and responsibility are taken seriously.12
Pilots of advanced fighter jets with ample automation still think of them-
selves as in control of the plane and responsible for their successful missions,
even though they must adhere to their commander’s orders and the rules of
engagement. Astronauts rejected designs of early Mercury capsules which had
no window to eyeball the re-entry if they had to do it manually—they wanted
to be in control when necessary, yet responsive to Mission Control’s rules. Neil
Armstrong landed the Lunar Module on the Moon—he was in charge, even
though there was ample automation. The Lunar Module was not his partner.
The Mars Rovers are not teammates; they are advanced automation with an
excellent integration of human tele-operation with high levels of automatic
operation.
It is instructive that the US Air Force shifted from using the term unmanned
autonomous/aerial vehicles (UAVs) to remotely piloted vehicles (RPVs) so as
to clarify responsibility. Many of these pilots work from a US Air Force base
in Nevada to operate drones flying in distant locations on military missions
that often have deadly outcomes. They are responsible for what they do and
suffer psychological trauma akin to what happens to pilots flying aircraft in war
zones. The Canadian Government has a rich set of knowledge requirements
that candidates must have to be granted a license to operate a remotely piloted
aircraft system (RPAS).13 Designers and marketers of commercial products and
services recognize that they and their organizations are the responsible parties;
they are morally accountable and legally liable.14 Commercial activity is further
shaped by independent oversight mechanisms, such as government regulation,
industry voluntary standards, and insurance requirements.

Distinctive capabilities Computers have distinctive capabilities of sophisti-
cated algorithms, huge databases, superhuman sensors, information-abundant
displays, and powerful effectors. To buy into the metaphor of “teammate” seems
to encourage designers to emulate human abilities rather than take advan-
tage of the distinctive capabilities of computers. One robot rescue design team
described their project to interpret the robot’s video images through natural
language text messages to the operators. The messages described what the robot
was “seeing” when a video or photo could deliver much more detailed infor-
mation more rapidly. Why settle for human-like designs when designs that
make full use of distinctive computer capabilities would be more effective?
Designers who pursue advanced technologies can find creative ways to em-
power people so that they are astonishingly more effective—that’s what familiar
supertools have done: microscopes, telescopes, bulldozers, ships, and planes.
Empowering people is what digital technologies have also done, through
cameras, Google Maps, web search, and other widely used applications.
Cameras, copy machines, cars, dishwashers, pacemakers, and heating, ven-
tilation, and air conditioning systems (HVAC) are not usually described as
teammates—they are supertools or active appliances that amplify, augment,
empower, and enhance people.

Human creativity The human operators are the creative force—for discov-
ery, innovation, art, music, etc. Scientific papers are always authored by people,
even when powerful computers, telescopes, and the Large Hadron Collider are
used. Artworks and music compositions are credited to humans, even if rich
composition technologies are heavily used. The human qualities such as pas-
sion, empathy, humility, and intuition that are often described in studies of
creativity are not readily matched by computers.
Another aspect of creativity is to give human users of computer systems the
ability to fix, personalize, and extend the design for themselves or to provide
feedback to developers for them to make improvements for all users. The con-
tinuous improvement of supertools, tele-bots, and other technologies depends
on human input about problems and suggestions for new features.

Those who promote the teammate metaphor are often led down the path
of making human-like designs, which have a long history of appealing robots,
but succeed only as entertainment, crash test dummies, and medical man-
nequins (Chapter 16). I don’t think this will change. There are better designs
than human-like rescue robots, bomb disposal devices, or pipe inspectors. In
many cases four-wheeled or treaded vehicles are typical, usually tele-operated
by a human controller.
The DaVinci surgical robot is not a teammate. It is a well-designed tele-bot
that enables surgeons to perform precise actions in difficult to reach small body
cavities (Figure 14.1). As Lewis Mumford reminds designers, successful tech-
nologies diverge from human forms.15 Intuitive Surgical, the developer of the
DaVinci systems for cardiac, colorectal, urological, and other surgeries, makes
clear that “Robots don’t perform surgery. Your surgeon performs surgery with
Da Vinci by using instruments that he or she guides via a console.”16
Many robotic devices have a high degree of tele-operation, in which an
operator controls activities, even though there is a high degree of automa-
tion. For example, drones are tele-bots, even though they have the capacity to
automatically hover or orbit at a fixed altitude, return to their take-off point,
or follow a series of operator-chosen GPS waypoints. The NASA Mars Rover
vehicles also have a rich mixture of tele-operated features and independent
movement capabilities, guided by sensors to detect obstacles or precipices, with
plans to avoid them.

Fig 14.1 DaVinci Surgical System from Intuitive Surgical
Source: http://www.intusurg.com

The control centers at NASA’s Jet Propulsion Labs have
dozens of operators who control various systems on the Rovers, even when
they are hundreds of millions of miles away. It is another excellent example
of combining high levels of human control and high levels of automation.
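A rough sketch of that mixture, with invented sensor and command names, might look like this: the human operator supplies the goal (a waypoint), while the automation contributes a safety reflex the operator can rely on:

```python
def control_step(position, waypoint, obstacle_ahead):
    """One tele-bot control cycle: human-chosen goal, automatic safeguard."""
    if obstacle_ahead:
        return "hold"  # automatic reflex pauses motion; operator stays informed
    dx = waypoint[0] - position[0]
    dy = waypoint[1] - position[1]
    if abs(dx) < 0.1 and abs(dy) < 0.1:
        return "await_next_waypoint"  # goal reached; control returns to operator
    return f"move({dx:.1f}, {dy:.1f})"

print(control_step((0.0, 0.0), (3.0, 4.0), obstacle_ahead=False))  # move(3.0, 4.0)
print(control_step((0.0, 0.0), (3.0, 4.0), obstacle_ahead=True))   # hold
```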
Terms like tele-bots and telepresence suggest alternative design possibili-
ties. These instruments enable remote operation and more careful control of
devices, such as when tele-pathologists control a remote microscope to study
tissue samples. Combined designs take limited, yet mature and proven features
of teammate models and embed them in devices that augment humans by direct
or tele-operated controls.
Another way that computers can be seen as teammates is by providing in-
formation from huge databases and superhuman sensors. When the results of
sophisticated algorithms are displayed on information-abundant displays, such
as in three-dimensional medical echocardiograms with false color to indicate
blood flow volume, clinicians can be more confident in making cardiac treat-
ment decisions. Similarly, users of Bloomberg Terminals for financial data see
their computers as enabling them to make bolder choices in buying stocks or
rebalancing mutual fund retirement portfolios (Figure 14.2). The Bloomberg
Terminal uses a specialized keyboard and one or more large displays, with
multiple windows typically arranged by users to be spatially stable so they
know where to find what they need. With tiled, rather than overlapped, win-
dows users can quickly find what they want without rearranging windows
or scrolling. The voluminous data needed for a decision is easily visible and
clicking in one window produces relevant information in other windows. More
than 300,000 users pay $20,000 per year to have this supertool on their desks.

Fig 14.2 A Bloomberg Terminal for financial analysts shows abundant data, arranged
to be spatially stable with non-overlapping windows.
In summary, the persistence of the teammate metaphor means it has appeal
for many designers and users. While users should feel fine about describing
their computers as teammates, designers who harness the distinctive features
of computers, such as sophisticated algorithms, huge databases, superhuman
sensors, information-abundant displays, and powerful effectors may produce
more effective tele-bots that are appreciated by users as supertools.
CHAPTER 16

Social Robots and Active Appliances

The fourth pair of metaphors brings us to the popular and durable ideas
of social robots, sometimes called humanoid, anthropomorphic, or an-
droid, that are based on human-like forms. The contrast is with widely
used appliances, such as kitchen stoves, dishwashers, and coffee makers. That’s
just the beginning; there are also clothes washers and dryers, security systems,
baby monitors, and home heating, ventilation, and air-conditioning (HVAC)
systems. Homeowners may also use popular outdoor active appliances or tele-
bots that water gardens, mow lawns, and clean swimming pools. I call the more
ambitious designs active appliances because they have sensors, programmable
actions, mobility, and diverse effectors (see Figure 11.1). Active appliances do
more than wait for users to activate them; they can initiate actions at speci-
fied times or when needed, such as when temperatures change, babies cry, or
intruders enter a building.

History of Social Robots
Visions of animated human-like social robots go back at least to ancient Greek
sources, but maybe one of the most startling successes was in the 1770s. Swiss
watchmaker Pierre Jaquet-Droz created elaborate programmable mechanical
devices with human faces, limbs, and clothes. The Writer used a quill pen on
paper, The Musician played a piano, and The Draughtsman drew pictures,
but these became only museum pieces for the Art and History Museum in
Neufchâtel, Switzerland. Meanwhile other devices such as printing presses,
music boxes, clocks, and flour mills became success stories.
The idea of human-created characters gained acceptance with classic stories
such as the Golem created by the sixteenth-century rabbi of Prague and Mary
Shelley’s Frankenstein in 1818. Children’s stories tell of the puppet maker Gep-
petto whose wooden Pinocchio comes to life and the anthropomorphic Tootle
the Train character who refuses to follow the rules of staying on the track. In
German polymath Johann Wolfgang von Goethe’s “Sorcerer’s Apprentice,” the
protagonist conjures up a broomstick character to fetch pails of water, but when the water begins to flood the workshop, the apprentice cannot shut it down. Worse
still, splitting it in half only generates twice as many broomsticks. In the twenti-
eth century, the metaphors and language used to discuss animated human-like
robots are usually traced back to Karel Capek’s 1920 play Rossum’s Universal
Robots.
These examples illustrate the idea of social robots; some are mechanical, bio-
logical, or made from materials like clay or wood, but they usually have human
characteristics such as two legs, a torso, arms, and a face with eyes, nose, mouth,
and ears. They may make facial expressions and head gestures, while speaking
in human-like voices, expressing emotion and showing personality.1
These captivating social robots have strong entertainment value that goes
beyond mere puppetry because they seem to operate autonomously. Children
and many adults are enthusiastic about robots as film characters, engaged with
robot toys, and eager to build their own robots.2 But moving from entertain-
ment to devices that serve innovation goals has proven to be difficult, except
for crash test dummies and medical mannequins.
One example of how human-like robot concepts can be misleading is the
design of early robot arms. The arms were typically like human arms: 16 inches long, with five fingers and a wrist that rotated only 180°. These robot arms could
lift at most 20 pounds, limiting their scope of use. Eventually the demands
of industrial automation led to flexible manufacturing systems and powerful
dexterous robot arms, without human-like forms. Rather than just an elbow
and wrist, these robots might have five to six joints to twist in many directions
and lift hundreds of pounds with precision. Instead of five fingers, robot hands
might have two grippers or suction devices to lift delicate parts. This shift from
human-like to more sophisticated forms that are tailored to specific tasks is just
as Lewis Mumford predicted.
Still, serious researchers, companies, and even government agencies cre-
ated social robots. The US Postal Service built a life-sized human-like Postal
Buddy in 1993 with plans to install 10,000 machines. However, they shut down
the project after consumers rejected the 183 Postal Buddy kiosks that were in-
stalled.3 Many designs for anthropomorphic bank tellers, like Tillie the Teller,
disappeared because of consumer disapproval. Contemporary banking systems
usually shun the name automatic teller machines in favor of automatic trans-
action machines or cash machines, which support patrons getting their tasks
done quickly without distracting conversations with a deceptive social robot
bank teller avatar. Voice commands would also bring privacy problems since
other people are waiting in line.
Nass’s fallacy, delivered by his consulting, contributed to the failure of Mi-
crosoft’s 1995 BOB, in which friendly onscreen characters would help users
do their tasks. This cute idea got media attention but in a year Microsoft shut
it down instead of bringing out version 2.0. Similarly, Microsoft’s Office 1997
Clippy (Clippit) was a too chatty, smiling paperclip character that popped up
to offer help, thereby interrupting the user’s effort and train of thought.
Other highly promoted ideas included Ananova, a web-based news-reading
avatar launched in 2000 but terminated after a few months. However, the idea of
a human-like news reader was revived by Chinese developers for the state news
agency Xinhua in 2018,4 and an improved version was released in June 2020.
Even cheerful onscreen characters, such as Ken the Butler in Apple’s famed 1987
Knowledge Navigator video and avatars in intelligent tutoring systems, have
vanished. They distracted users from the tasks they were trying to accomplish.
Manufacturer Honda created an almost-life-sized social robot named
Asimo, which was featured at trade events and widely reported in the media,
but that project was halted in 2018 and no commercial products are planned.5
A recent dramatic news event was when David Hanson's company, Hanson Robotics, whose motto is "We bring robots to life," produced a human-sized talking
robot named Sophia, which gained Saudi Arabian citizenship.6 These pub-
licity stunts draw wide attention from the media, but they have not led to
commercial successes. The company is shifting its emphasis to a 14-inch-high,
low-priced programmable version, named “Little Sophia,” which could be used
for education and entertainment. It is described as a “robot friend that makes
learning STEM [science, technology, engineering, math], coding and AI a fun
and rewarding adventure for kids.”
At the MIT Media Lab, Cynthia Breazeal’s two decades of heavily promoted
demonstrations of emotive robots, such as Kismet,7 culminated in a business
startup, Jibo, which closed in 2019. Other social robot startups, including Anki
(maker of Cozmo and Vector) and Mayfield Robotics (maker of Kuri), also
closed in 2019.8 These companies found happy users, but not a large enough market of users who valued the social robot designs. Cornell University's Guy
Hoffman wrote that “it now seems that three of the most viable contenders to
lead the budding market of social home robots have failed to find a sustainable
business model.”9 Hoffman remains optimistic that artists and designers could
make improved models that might still succeed: “A physical thing in your space,
moving with and around you, provides an emotional grip that no disembodied
conversant can.”
Guy Hoffman’s Cornell colleague Malte Jung, who heads the Robots in
Groups Lab, wrote me in an email, “I have trouble seeing a convincing design
argument for anthropomorphic robots (except maybe for a very small set of
use-cases). I think designing robots with anthropomorphic features of form or
behavior has several disadvantages including unnecessary complexity and risk
of raising expectations that can’t be met.” Jung sees opportunities in related di-
rections: “cars are becoming more and more like robots . . . we can use HRI
[human–robot interaction] design approaches to improve these systems.”10
Another direction that has drawn publicity is the idea of sex robots, fashioned as human-sized, conversational companions that are a big step beyond the popular sex toys used by many couples. Female and male sex robots can have
custom-tailored physical features with prices in the range of $3,000 to $50,000.
Customers can choose color, shape, size, and texture of erogenous zones and fa-
cial features, with heated skin as an option. The Campaign Against Sex Robots
has raised concerns about how these robots will harm women and girls, with
strong warnings about the dangers of child sex robots.
Some companies are managing to turn impressive demonstrations of
human-like robots into promising products that go beyond anthropomorphic
inspirations. Boston Dynamics,11 which began with two-legged two-armed so-
cial robots, has shifted to wheel-driven robots with vacuum suction for picking
up hefty packages in warehouses (Figure 16.1). The Korean automobile giant
Hyundai signaled support for Boston Dynamics’ technology by taking it over
in 2020, giving it a valuation of over $1 billion.
The Japanese information technology and investor giant SoftBank, which was also a part owner of Boston Dynamics, is devoted to products that are "smart and fun!" SoftBank acquired the French company Aldebaran Robotics in 2012, and in 2014 introduced the Pepper robot, which featured a four-foot-high human-like shape, with expressive head, arm, and hand movements, and a three-wheeled base for mobility. SoftBank claims that Pepper "is
optimized for human interaction and is able to engage with people through
conversation and his touch screen."12 Its appealing design and conversational capacity generated strong interest, leading to sales of more than 20,000 units. Pepper is promoted for tasks such as welcoming customers, product information delivery, exhibit or store guide, and satisfaction survey administration.13

Fig 16.1 Mobile robot for moving boxes in a warehouse from Boston Dynamics.
Source: Handle™ robot image provided courtesy of Boston Dynamics, Inc.
A ten-week study of Pepper in a German elder care home found that the “older
adults enjoyed the interaction with the robot” in physical training and gaming,
but they “made it clear that they do not want robots to replace caregivers.”14
During the COVID-19 crisis, Pepper robots, wearing masks, were used in
shopping malls to remind customers to wear their masks. SoftBank Robotics
acquired the 2-foot-high human-like NAO robot from Aldebaran Robotics in
2015. NAO is more sophisticated than Pepper and more expensive, yet it has
sold more than 5000 units for healthcare, retail, tourism, and education ap-
plications. As an indication of the turbulence in social robotics, SoftBank shut
down production of Pepper in June 2021.
In Japan, a country that is often portrayed as eager for gadgets and robots, a
robot-staffed hotel lasted only a few months, closing in 2019. It had robot clean-
ing staff and robot front-desk workers, including crocodile-like receptionists.
The company president remarked, “When you actually use robots you realize
that there are places where they aren’t needed—or just annoy people.”15 At the
same time, traditional automated soft drink, candy, and other dispensers are
widely successful in Japan and elsewhere. These have become more elaborate
with heated and cooled versions dispensing food and drinks.

Controversy continues around uses of social robots for autism therapy.


Some studies report benefits from using robots with children who have dif-
ficulty with human relationships. These studies suggest that some children on
the autism spectrum may feel more comfortable engaging with robots, possi-
bly paving the way for improved relationships with people.16 Critics suggest
that the focus on technology, rather than the child, leads to early successes but
less durable outcomes. Neil McBride of De Montfort University worries that "if
we view the human as something more than a machine, we cannot possibly
devolve the responsibility for a therapeutic relationship to a mechanical toy.”17
However, play therapy with dolls and puppets could evolve to include robotic
characters.
The tension between believers and skeptics has grown stronger. David Wat-
son at the Oxford Internet Institute raises strong concerns: “the temptation to
fall back on anthropomorphic tropes . . . is at best misleading and at worst
downright dangerous.”18 He addresses the ethical issues of responsibility as
well: “The temptation to grant algorithms decision-making authority in socially
sensitive applications threatens to undermine our ability to hold powerful indi-
viduals and groups accountable for their technologically-mediated actions.”19
The believers in human-like social robots remain strong and hopeful that they
will find successful designs and large markets.20

Animal Robots
Although human-like social robots are appreciated by some users, they are not yet commercially successful. Many developers suggest that animal-like pet robots may be appealing enough to win enthusiasm from consumers.
The PARO therapeutic robot is a synthetic fur-covered white seal-like robot
(Figure 16.2) that has touch, light, sound, temperature, and posture sensors,
so that it “responds as if it is alive, moving its head and legs, making sounds,
and . . . imitates the voice of a real baby harp seal.”21
PARO has been approved by the US Food and Drug Administration as a
Class 2 medical device. Some studies conducted during the past fifteen years re-
port successes in producing positive responses from patients (“it’s like a buddy,”
“it’s a conversation piece,” and “it makes me happy”) and indications of po-
tential therapeutic improvements.22 However, these are typically short-term
studies at the introduction of PARO, when patients are intrigued by the nov-
elty, but the long-term use is still to be studied. Some reports suggest that the
novelty is an attraction, which stimulates discussion among residents, giving them satisfying exchanges with their friends.

Fig 16.2 Dr. Takanori Shibata holding his creation, PARO, a robot therapy device, in June 2018.
Other robot pets include SONY’s dog robot, AIBO, first released in 1999
and then shut down in 2006. Since the early models cannot be repaired, failed
AIBO robots had to be disposed of; however in Japan, owner devotion to their
robot pets led to hundreds of Buddhist funeral services to give a compassionate
farewell ceremony for a much appreciated companion. One temple’s head priest
commented sympathetically that Buddhism honors inanimate objects so “even
though AIBO is a machine and doesn’t have feelings, it acts as a mirror for
human emotions.”23
In 2018, SONY produced an updated version with a $2900 price tag
that has impressive performance, even supporting robo-friendship between
pairs of AIBOs. These robots could be entertaining, as the website says:
“AIBO’s eyes—sparkling with a clever twinkle—speak volumes, constantly
giving you a window into its feelings” (Figure 16.3).24 The new ver-
sion remains a popular demonstration, but has had modest commercial
success.
Fig 16.3 SONY AIBO robot dog.
Source: photo by Ben Shneiderman

In the same high price category as AIBO is the doglike MiRO-E from Consequential Robotics, which has a programming language to teach children coding skills. A much cheaper alternative is the $129 Joy for All companion dog robot developed by toymaker Hasbro in 2015 as a soft, cuddly, responsive companion for older adults with mild to moderate dementia or loneliness. Ageless Innova-
tion was spun out as a separate company in 2018, devoted to making enjoyable
toys for older adults (Figure 16.4).25 More than ten studies by independent re-
searchers report satisfaction and benefits for many users of the dog or cat pet
robots. Their founder and chief executive officer, Ted Fischer, told me that their
sales have exceeded half a million pet companion robots. The low price means
older adults in nursing homes can have their own pet robot, rather than only
access to an expensive device for a few hours a week. The Joy for All companion
pet barks or meows, moves its head and body, has a heartbeat, and responds to
voice, touch, and light.
An emerging option is the Tombot dog robot designed for older adults suf-
fering from mild to moderate dementia and other problems.26 The dog robot,
covered with a synthetic fur, rests in place, but has rich behaviors of head,
mouth, tail, and ear movements (Figure 16.5). It makes gentle barking and
growling sounds when left alone but is responsive to voice, touch, stroking,
and caresses. The dog robot was designed by Jim Henson’s Creature Shop for
Tombot and prototype-tested by university researchers. Deliveries to pre-paid
buyers will begin in 2022 and then sales will open to a long waiting list of people who have expressed interest in buying it at the $400 price.

Fig 16.4 Joy for All Companion pet dog barks, responds to voice, and has a heartbeat.
I found dozens of robot animals in online toy stores, so I thought I would
try one. I bought a programmable robot dog from Fisca that was more like
the famed AIBO, but priced at just $60. It had hundreds of five-star ratings, so
it seemed like a worthwhile purchase (Figure 16.6). Its handheld remote con-
troller enabled me to make it go forward or back, turn left or right, shake its
head, blink its eyes, play music, and make dog-like sounds. Its plastic body is
durable and mobility on wheels was amusing, but it lacked the soft cuddly qual-
ities of other pet robots. I was able to program it to move around and dance,
but after an hour, I had enough of it. I felt it was good value and performed as
advertised, but I didn’t have further use for it.
A vastly different programmable dog robot comes from Boston Dynamics.
Their $70,000 dog-like four-legged robot, called Spot, has impressive capabil-
ities to walk, run, turn, and hop in outdoor environments to avoid objects
and climb stairs (Figure 16.7). It comes with a well-designed game-like con-
troller that resembles a drone controller, making it easy to learn to operate.

Fig 16.5 Tombot with CEO Tom Stevens.


Source: http://www.tombot.com

Spot’s multiple cameras and sensors enable it to perform complex actions au-
tonomously on rough terrain, recover from falls, and navigate narrow passages.
It offers yet another example of a combined design in which rapid actions, such
as movement and stability preservation, are done autonomously, while longer
term functions and planning are carried out by human operators. It can conduct
security patrols with sound or fire sensors, be fitted with cameras for pipeline
inspection, or carry equipment to support military missions.27
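The division of labor in this combined design can be sketched in a few lines of Python. The function names are illustrative guesses, not Boston Dynamics' actual SDK: a fast inner layer handles stabilization autonomously, while goals enter only from the human operator.

```python
import queue
import time

# Hedged sketch of a combined (shared-control) loop: fast reflexes run
# autonomously, while goals come only from the human operator.
# Function names are illustrative, not Boston Dynamics' actual SDK.

operator_goals: "queue.Queue[str]" = queue.Queue()  # filled by the operator

def stabilize_and_avoid() -> None:
    """Fast, autonomous reflex layer: balance recovery and obstacle
    avoidance run every cycle regardless of what the operator does."""

def step_toward(goal: str) -> None:
    """Slower, deliberative layer: make incremental progress on the
    operator's current goal."""
    print(f"working toward: {goal}")

def control_cycle(duration_sec: float = 1.0) -> None:
    current_goal = None
    end = time.time() + duration_sec
    while time.time() < end:
        stabilize_and_avoid()      # autonomous, every cycle (fast)
        try:
            current_goal = operator_goals.get_nowait()  # human sets goals
        except queue.Empty:
            pass
        if current_goal:
            step_toward(current_goal)   # robot executes, human directs
        time.sleep(0.2)

if __name__ == "__main__":
    operator_goals.put("inspect pipeline section A")
    control_cycle()
```

The design choice is the same one highlighted above: rapid, safety-critical reactions are automated, while longer-term planning stays with the person.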
However, the Spot User Manual cautions, “Spot is not suitable for tasks that
require operation in close proximity to people. People must stay a safe distance
(at least two meters) from Spot during operation to avoid injury. . . . Spot may
sometimes move unpredictably or fall. Only use Spot in areas where a fall or
collision will not result in an unacceptable risk.”
A review of eighty-six papers with seventy user studies during the period
2008–2018 showed the persistent research interest in social robots, but revealed
the sparseness of long-term studies of acceptance, which would indicate inter-
est past the novelty stage.28 However, this review suggests potential for social
robots in motivating children to be more physically active.

Fig 16.6 Programmable dog robot from Fisca with controller to make it move, play music, make head movements, and record sequences of actions to be replayed.

Fig 16.7 Boston Dynamics’ Spot robot and remote controller.


Source: https://www.bostondynamics.com/spot.

In summary, human-like social robots have yet to be successful, but pet dog
or cat robots appear to be gaining acceptance for older adults, and possibly for
some children. The key to success seems to be to find a genuine human need
that can be served by an animal-like robotic device.

Active Appliances
Another category that is different from social and animal robots might be
called appliance robots, although I will call them active appliances. This large class of successful consumer products does necessary tasks such as kitchen work, laundry chores, house cleaning, security monitoring, and garden maintenance. There are also many entertainment centers and a wide range of increasingly sophisticated exercise equipment. The control panels for these active appliances (examples in Chapter 9) have received increased attention as designers accommodate richer touchscreen controls, refined user needs, and more ambitious products. Increasingly, home control devices depend on machine learning to recognize patterns of usage, such as for temperature control in the Google Nest Learning Thermostat (Figure 16.8).

Fig 16.8 Google Nest Learning Thermostat.
Source: https://store.google.com/product/nest_learning_thermostat_3rd_gen
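As an illustration of what "recognizing patterns of usage" can mean, a simplified sketch follows; it is my own approximation, not Google's actual algorithm. It records each manual adjustment with its hour of day, then proposes the median setpoint for each hour once enough observations accumulate.

```python
from collections import defaultdict
from statistics import median

# Illustrative sketch of schedule learning from manual adjustments --
# a simplification for exposition, not Google's actual Nest algorithm.

adjustments: dict[int, list[float]] = defaultdict(list)

def record_adjustment(hour: int, setpoint_c: float) -> None:
    """Log every manual change the user makes, keyed by hour of day."""
    adjustments[hour].append(setpoint_c)

def learned_schedule(min_samples: int = 3) -> dict[int, float]:
    """Propose a setpoint for each hour with enough observations."""
    return {
        hour: median(temps)
        for hour, temps in adjustments.items()
        if len(temps) >= min_samples
    }

# A week of the user nudging the thermostat around 7 a.m. and 11 p.m.
for temp in (21.0, 21.5, 21.0):
    record_adjustment(7, temp)
for temp in (17.0, 16.5, 17.0):
    record_adjustment(23, temp)

print(learned_schedule())   # {7: 21.0, 23: 17.0}
```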
An informal survey of AI colleagues showed me that about half of them ac-
cepted the idea of calling appliances robots, but the other half felt that the lack
of mobility and human-like form disqualify them from being called robots.
I feel privileged that our kitchen has seven active appliances with sensors
to detect and adjust activity. Most of them wait for me to activate them, but
they can be programmed to start at preset times or when sensors indicate
the need for activity. Increasingly they use AI methods to save energy, rec-
ognize users, increase home security, or communicate through voice user
interfaces.29
Still, there is much room for improvement in the frustrating designs, which are often internally inconsistent, difficult to learn, and vary greatly across devices. They all have a clock on them, and some also have a date display, but each
has a different user interface for setting the time and date, even though sev-
eral come from the same manufacturer. Dealing with changes due to daylight
saving time is an unnecessary challenge because of the different designs and
especially because that task could be automated. Setting timers on my oven, mi-
crowave, coffee maker, rice cooker, and other devices could be standardized, as
could status displays of remaining time to completion or current device temper-
atures. Better still might be to have all these devices operable from my mobile
phone or laptop, which would allow me to control them even while I was away
from home.
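The consistency argument is essentially a plea for a shared interface across devices. A minimal sketch of what such a standard could look like follows; the class and method names are invented for illustration, and no existing industry standard is implied.

```python
from abc import ABC, abstractmethod
from datetime import datetime

# Sketch of a shared control interface for household appliances.
# The names are invented for illustration, not an existing standard.

class Appliance(ABC):
    """One interface for clocks, timers, and status across devices, so a
    single phone app could control every appliance the same way."""

    def set_clock(self, now: datetime) -> None:
        # One implementation, inherited by every device -- daylight
        # saving changes could then be handled once, automatically.
        self.clock = now

    @abstractmethod
    def start_timer(self, minutes: int) -> None: ...

    @abstractmethod
    def status(self) -> str: ...

class Oven(Appliance):
    def start_timer(self, minutes: int) -> None:
        self.remaining = minutes

    def status(self) -> str:
        return f"oven: {self.remaining} min to completion"

class RiceCooker(Appliance):
    def start_timer(self, minutes: int) -> None:
        self.remaining = minutes

    def status(self) -> str:
        return f"rice cooker: {self.remaining} min to completion"

# A phone or laptop app could then treat every appliance uniformly:
devices: list[Appliance] = [Oven(), RiceCooker()]
for d in devices:
    d.set_clock(datetime.now())
    d.start_timer(30)
    print(d.status())
```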
Another category of home-based active appliances that could be made bet-
ter with consistent user interfaces are medical devices. Commonly used devices
include a thermometer, bathroom scale, step counter, blood pressure monitor,
and pulse oximeter. Specialized devices include glucose monitors for insulin
patients, menstrual cycle monitors for women, and continuous positive airway
pressure devices for people dealing with sleep apnea. A large growth area is
exercise machines, such as treadmills, elliptical exercise machines, stationary bicycles, and rowing machines. Several manufacturers advertise "AI-powered" to
indicate that the exercise sessions are personalized by machine learning tech-
niques. Some of these devices have time and date settings with differing control
panels and very different displays of data. Here again a consistent user interface
and operation from a mobile device or laptop would be especially helpful. For
these medical and well-being needs, users often have to record their history
over months and years, so automatic recording would be especially valuable in
helping users track their health and condition. These health histories could be
analyzed to identify changes and shared with clinicians to improve treatment
plans.
A prominent consumer success comes from iRobot, which makes the
Roomba floor-cleaning machine (Figure 16.9).30 They and other companies,
such as Dyson, Samsung, and SONY, sell related products for mopping floors,
mowing lawns, and cleaning swimming pools. These robots have mobility, sen-
sors, and sophisticated algorithms to map spaces while avoiding obstacles. I’m a
happy user of the iRobot Roomba, which vacuums our floors and rugs, then re-
turns to its base station to recharge and expel the dirt into a bag. It is controllable from my smartphone and can be tele-operated.

Fig 16.9 Roomba 700 series robotic vacuum cleaner sold by iRobot.
While the early Roomba designs were influenced by autonomous thinking
with limited user control (only three simple buttons) and minimal feedback,
the recent versions provide better user control by way of smartphone user inter-
faces. Users can schedule cleaning of the whole apartment or individual rooms,
but the Roomba sensors can detect that the house is vacant so it can clean. The
smartphone app shows a history of activity, but the most impressive feature is
the floor map that it generated after two to three cleaning sessions, including
blank spaces where sofas and beds blocked Roomba’s movement (Figure 16.10).
With some effort I configured and labeled our apartment in meaningful ways so
we could specify individual rooms for more frequent cleaning, like our hallway,
which often collects dirt tracked in on our shoes.
The issues of control panels, user control, history keeping, and feedback on
performance will grow as active appliances become more elaborate. A careful
review of fifty-two studies of robot failures by Shanee Honig and Tal Oron-
Gilad at Ben Gurion University in Israel provides guidance that could lead to
greater success.31 They discuss how robots might indicate that a failure has
occurred, by way of a control panel on the robot, a voice message, or a dis-
play on a remote monitoring device. They also address the questions of how
robots can report failures to users and how users can take steps to recover
from a failure. Further work would be to develop systematic failure reporting
methods, which could accelerate improvements. While they focused on aca-
demic research reports, incident reporting on robot failures in medical and
industrial environments is growing. More recent work by Honig and Oron-
Gilad analyzed online customer reviews on Amazon to understand failures
of domestic robots, how often they occur, and how they influence customer
opinions.32
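A systematic failure-reporting method of the kind they call for could start with something as simple as a shared record format. The fields below are my guesses at a useful minimum, not Honig and Oron-Gilad's specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch of a systematic robot failure report; the fields
# are guesses at a useful minimum, not a published specification.

@dataclass
class FailureReport:
    robot_model: str
    failure_type: str      # e.g., "navigation", "grasping", "sensor"
    severity: str          # e.g., "warning", "task-abandoned", "safety"
    how_reported: str      # control panel, voice message, or remote app
    recovery_action: str   # what the user did to recover, if anything
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Reports collected in a shared format could be aggregated across homes
# and vendors to show which failures are frequent and hard to recover from.
report = FailureReport(
    robot_model="domestic vacuum",
    failure_type="navigation",
    severity="task-abandoned",
    how_reported="smartphone app notification",
    recovery_action="user moved robot away from cable and restarted",
)
print(report)
```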


Fig 16.10 Roomba home screen and generated apartment map with room labels supplied
by the author.

Promising home services include cleaning, maintenance, and gardening technologies, which make life easier and expand the range of what people can
do. Chapter 26 has a long list (Table 26.1) of older adult needs, such as enter-
tainment, medical devices, and security systems, which are all candidates for
active appliances in many homes.

Voice and Text User Interfaces


A notable success for the idea of active appliances is the group of speech-based virtual assistants such as Amazon's Alexa, Apple's Siri, Google's Home, and Microsoft's
Cortana. The designers have produced impressive speech recognition and ques-
tion answering systems, with high-quality speech generation that has gained
consumer acceptance. When questions cannot be answered, links to web pages
are offered to help users learn more about their questions. These devices avoid
human forms, using pleasant-looking cylinders with textured surfaces to make
them fit in many home settings. This means they don’t fit strict definitions
of social robots, but their success deserves attention and may suggest other
opportunities for home-based devices that could be operated by voice.33

An ambitious study of users of eighty-two Amazon Alexa and eighty-eight Google Home devices revealed that more than a quarter of the usage was to
find and play music. Searching for information and controlling home devices
like lights and heating were other frequent tasks. Less frequent uses were for
setting timers and alarms, requesting jokes, and getting weather conditions or
predictions.34
Most users treat voice user interfaces as a way of getting tasks done; they
are not friends, teammates, partners, or collaborators—they are supertools. Yes,
users are likely to explore the range of possibilities by asking for a hug, mak-
ing impossible requests, and posing probing questions like “Are you alive?” to
which Siri replies: “I’m not a person, or a robot. I’m software, here to help.”
Voice dictation is another success story, which allows text input and editing
for users who have difficulty in typing due to injuries, loss of motor control,
or visual disabilities. Another use case is users whose hands are busy such as
doctors who can dictate their findings while they are examining patients or re-
viewing X-rays, lab reports, and other documents. The specialized terminology
of medical care leads to higher rates of accurate speech recognition than with
informal conversations.
Phone-based voice user interfaces have been another notable success, with
generally good-quality speech recognition, even in noisy environments. The
speaker independent design, which avoids the need to train the system, is usu-
ally effective, although speakers with accents and speech impediments have had
problems.
Voice readers are a growing application with successes in electronic books
and magazines, which can use synthesized voices, but human voices are pop-
ular because they convey appropriate emotion, accurate pronunciation, and
appealing prosody. These voice readers help users with visual disabilities, but
are also popular on long car trips or while taking a walk. Voice control over web
browsers or other applications benefits users with visual disabilities and those
who are temporarily disabled by injuries.
Cathy Pearl’s book Designing Voice User Interfaces reminds readers that the
goal “shouldn’t be to fool people into thinking it’s a human; it should be to solve
the user’s problem in an efficient easy-to-use way.”35 Voice can be quicker than
typing, works when a user's hands are busy, and can be operated at a distance.
Of course, there are limitations to voice user interfaces: they can interfere with
human conversations, are ephemeral, and deliver less information than visual
user interfaces on mobile devices or larger screens. There is a vast difference
between visual and voice user interfaces; both are valuable in different contexts.

Another caveat is that speaking is cognitively demanding, robbing users of some of their short-term and working memory, so that performing concurrent
tasks becomes more difficult.36 That is one reason voice user interfaces have
not been used in high workload situations such as for fighter pilots who can
plan their attack scenarios more effectively if they operate the plane with hand
controls.
Despite the success of voice-based virtual assistants, talking dolls have failed to achieve consumer success. Early efforts began with Thomas Edison in the 1880s and were regularly revived as toys, including the Mattel Talking Barbie in 1992 and a more ambitious Hello Barbie version in 2015.37 Mattel has no further plans to pursue a talking Barbie.
Text user interfaces to chatbots are another path followed by researchers
and entrepreneurs, especially in the form of customer service agents on mo-
bile apps and websites. While text chatbots do not adhere to strict definitions
of social robots because they are typically just text and may not have faces, they
are designed with social conventions of politeness, humor, eagerness to help,
and apologies for their failures. Microsoft's Tay chatbot was shut down within a
day because its algorithms began to make offensive statements, but its succes-
sor Zo lasted more than two years before it was shut down in 2019. A greater
success is the Chinese Xiaoice, based on the same Microsoft technology, but
now spun off as a Chinese company. Xiaoice has hundreds of millions of users,
even though the Chinese government has forced its design to avoid politically
sensitive topics. Xiaoice was given the personality of a cheerful female teenager,
who reads news stories, sings songs, and makes art.
Replika, an “AI companion who cares . . . always here to listen and talk,” has
a human-like face, which users can configure.38 Over time it absorbs your per-
sonality so as to become a sympathetic conversational friend (Figure 16.11),
carrying forward Joseph Weizenbaum’s ELIZA work from the 1960s. The de-
sign is meant to be helpful to those suffering losses, just as the founder Eugenia
Kuyda did, so in addition to the chat window, it provides links to suicide
hotlines.
Woebot is another chatbot project tied to human mental health: “we’re
building products that are designed to bring quality mental health care to
all.”39 It uses cognitive behavioral therapy in textual chats designed to engage
with people who are dealing with depression and anxiety. Its research trials,
supported by the US National Institute on Drug Abuse, have shown benefits with hundreds of users who felt improvements in two to four weeks, but more rigorous comparisons with other treatments are needed.

Fig 16.11 Replika chatbot with discussion session.


Source: https://replika.ai/

A common aspiration for chatbot developers is for customer support to answer questions about products and services, such as dinner reservations,
banking rules, tourism, or e-commerce purchases. Early versions were flawed,
but continued efforts suggest that some durable roles will emerge for text user
interfaces to chatbots,40 if only to identify user needs so as to steer them to the
correct human customer service representative. The presence or absence of a
chatbot avatar may also influence short-term acceptance, but long-term usage
deserves more study. As with voice user interfaces, some degree of human-like
personality features are appreciated, which appears to be different from em-
bodied social robots where the reception is mixed.41 However, even customer
service chatbots defined by appealing metaphors will have to deliver genuine
value, just as voice-controlled virtual assistants do. Engaging in a dialog is more
challenging than answering a question, playing a song, or finding a website.

The Future of Social Robots


The modest commercial adoption of social robots has not deterred many
researchers and entrepreneurs who still believe that they will eventually

succeed. The academic research reports present a mixed view with studies
from developers showing user satisfaction and sometimes delight, while other studies show a preference for more tool-like designs, which adhere to the
principles of giving users control of comprehensible, predictable, and con-
trollable interfaces.42 An academic survey of 1489 participants studied fear of
autonomous robots and artificial intelligence (FARAI). This fear, dubbed robo-
phobia,43 was a minor issue for 20.1% and a serious issue for 18.5% of the
participants.44 A milder form of concern about robots is the uncanny valley, where near-human designs are distrusted.45
Human willingness to engage with social robots was the focus of dozens of
studies conducted by Clifford Nass, a Stanford psychologist, and his students.46
They found that people were quick to respond to social robots, accepting them
as valid partners in interactions. However, the central question remains: Would
people perform more effectively and prefer a supertool or active appliance de-
sign? Human control by way of comprehensible user interfaces is a key concept
from developers of banking machines and most mobile devices, household ap-
pliances, kiosks, office technologies, and e-commerce websites. As mentioned
earlier, the Apple Human Interface Design Guidelines advocate: “User Control:
. . . people—not apps—are in control” and “Flexibility: . . . (give) users com-
plete, fine-grained control over their work.”47 The fact that many users will treat
active appliances as social robots is not surprising or disturbing to me—I do it
myself. However, what is surprising and disturbing to me is that so many re-
searchers and designers find it hard to open their imagination to go from social
robots to active appliances.
The lessons of history are clear. Early anthropomorphic designs and social
robots gave way to functional banking machines that support customer control,
without the deception of having a human-like bank teller machine or screen
representation of a human bank teller. This deception led bank customers to
wonder how else their bank was deceiving them, thereby undermining the trust
needed in commercial transactions. What customers wanted was to get their
deposit or withdrawal done as quickly and reliably as possible. They wanted
active appliances with control panels that they understood and enabled them to
reliably and predictably get what they wanted. Even leading AI researchers, such
as University of California-Berkeley’s Stuart Russell, clearly state that “there is
no good reason for robots to have humanoid form . . . they represent a form of
dishonesty.”48
These historical precedents can provide useful guidance for contemporary
designers pursuing the innovation goal, since many still believe that improved
designs of social robots based on the science goal will eventually succeed. A
commonly mentioned application is elder care in which users wishing to live
independently at home will need a human-like social robot to use kitchen im-
plements designed for humans, to navigate hallways and stairs, and to perform
tasks such as administering medications or offering a cup of tea. Another pro-
posed application of social robots is disaster relief, but in that area the shift
towards tele-bots has led to notable successes. Robin Murphy from Texas A&M
University, who is a leader in developing, testing, and fielding rescue robots, ad-
vocates agile robots that can go under buildings or through ventilation ducts,
and tele-operated drones that can fly into dangerous places to provide video for
human decision-makers.49
Inspired by the science goal, social robot advocates continue to believe that
they are needed, especially when the requirement is to work in environments
built for human activity. However, I believe that if the imagination of these
designers were more open they would see new possibilities, such as a small
dishwasher built into, under, or near a dining table to help those who are phys-
ically challenged. If designers still want to try social robots, they would do well
to understand what has been done in the past so they can build on the success
stories.
A pointed scenario is that, if transported back to 1880, some designers might
propose clothes-washing robots that pick up a bar of soap and a washboard to
scrub clothes one at a time, rinse them in a sink, and hang them on a clothesline
to dry. Designers of modern clothes washers and dryers have gone well be-
yond social robots to make active appliance successes that wash a week’s worth
of clothes and spin dry them in about an hour. Similarly, Amazon fulfillment
centers have many robots for moving products and packing boxes, but none of
them are human-like.50
The combined design strategy could be to use voice-based virtual assistants,
which have proven successful, in active appliance and medical device designs.
Text-based customer service chatbots could become successes, if dialogs can be
carried out with low error rates. Exploration of pet-like devices for therapeu-
tic applications and human-like guides for exercise sessions could be further
refined with long-term studies to understand what solutions remain appealing
and useful over time.
