How to Avoid the Ethical Nightmares of Emerging Technology

Facebook, which was created in 2004, amassed 100 million users
in just four and a half years. The speed and scale of its growth was
unprecedented. Before anyone had a chance to understand the
problems the social media network could cause, it had grown into
an entrenched behemoth.

In 2015, the platform’s role in violating citizens’ privacy and its potential for political manipulation were exposed by the
Cambridge Analytica scandal. Around the same time, in
Myanmar, the social network amplified disinformation and calls
for violence against the Rohingya, an ethnic minority in the
country, which culminated in a genocide that began in 2016. In
2021, the Wall Street Journal reported that Instagram, which had
been acquired by Facebook in 2012, had conducted research
showing that the app was toxic to the mental health of teenage
girls.

Defenders of Facebook say that these impacts were unintended and unforeseeable. Critics claim that, instead of moving fast and
breaking things, social media companies should have proactively
avoided ethical catastrophe. But both sides agree that new
technologies can give rise to ethical nightmares, and that should
make business leaders — and society — very, very nervous.

We are at the beginning of another technological revolution, this time with generative AI — models that can produce text, images,
and more. It took just two months for OpenAI’s ChatGPT to pass
100 million users. Within six months of its launch, Microsoft
released ChatGPT-powered Bing; Google demoed its latest large
language model (LLM), Bard; and Meta released LLaMA.
GPT-5 will likely be here before we know it. And unlike social
media, which remains largely centralized, this technology is
already in the hands of thousands of people. Researchers at
Stanford recreated ChatGPT for about $600 and made their
model, called Alpaca, open-source. By early April, more than
2,400 people had made their own versions of it.

While generative AI has our attention right now, other technologies coming down the pike promise to be just as
disruptive. Quantum computing will make today’s data crunching
look like kindergarteners counting on their fingers. Blockchain
technologies are being developed well beyond the narrow
application of cryptocurrency. Augmented and virtual reality,
robotics, gene editing, and too many others to discuss in detail
also have the potential to reshape the world for good or ill.

If precedent serves, the companies ushering these technologies into the world will take a “let’s just see how this goes” approach.
History also suggests this will be bad for the unsuspecting test
subjects: the general public. It’s hard not to worry that, alongside
the benefits they’ll offer, the leaps in technology will come with a
raft of societal-level harm that we’ll spend the next 20-plus years
trying to undo.

It’s time for a new approach. Companies that develop these technologies need to ask: “How do we develop, apply, and
monitor them in ways that avoid worst-case scenarios?” And
companies that procure these technologies and, in some cases,
customize them (as businesses are doing now with ChatGPT) face
an equally daunting challenge: “How do we design and deploy
them in a way that keeps people (and our brand) safe?”

In this article, I will try to convince you of three things: First, that
businesses need to explicitly identify the risks posed by these new
technologies as ethical risks or, better still, as potential ethical
nightmares. Ethical nightmares aren’t subjective. Systemic
violations of privacy, the spread of democracy-undermining
misinformation, and serving inappropriate content to children
are on everyone’s “that’s terrible” list. I don’t care which end of
the political spectrum your company falls on — whether you’re
Patagonia or Hobby Lobby — these are our ethical nightmares.

Second, that by virtue of how these technologies work — what makes them tick — the likelihood of realizing ethical and
reputational risks has massively increased.

Third, that business leaders are ultimately responsible for this work, not technologists, data scientists, engineers, coders, or
mathematicians. Senior executives are the ones who determine
what gets created, how it gets created, and how carefully or
recklessly it is deployed and monitored.

These technologies introduce daunting possibilities, but the challenge of facing them isn’t that complicated: Leaders need to
articulate their worst-case scenarios — their ethical nightmares —
and explain how they will prevent them. The first step is to get
comfortable talking about ethics.

After 20 years in academia, 10 of them spent researching, teaching, and publishing on ethics, I attended my first
nonacademic conference in 2018. It was sponsored by a Fortune
50 financial services company, and the theme was
“sustainability.” Having taught courses on environmental ethics, I
thought it would be interesting to see how corporations think
about their responsibilities vis-à-vis their environmental impacts.
When I got there, I found presentations on educating women
around the globe, lifting people out of poverty, and contributing
to the mental and physical health of all. Few were talking about
the environment.

It took me an embarrassingly long time to figure out that in the
corporate and nonprofit worlds, “sustainability” doesn’t mean
“practices that don’t destroy the environment for future
generations.” Instead it means “practices in pursuit of ethical
goals” and an assertion that those practices promote the bottom
line. Why businesses didn’t simply say “ethics,” I couldn’t
understand.

This behavior — of replacing the word “ethics” with some other, less precise term — is widespread. There’s Environmental, Social,
and Governance (ESG) investing, which boils down to investing in
companies that avoid ethical risks (emissions, diversity, political
actions, and the like) on the theory that those practices protect
profits. Some companies claim to be “values driven,” “mission
driven,” or “purpose driven,” but these monikers rarely have
anything to do with ethics. “Customer obsessed” and “innovation”
aren’t ethical values; a purpose or mission can be completely
amoral (putting immoral to the side). So-called “stakeholder
capitalism” is capitalism tempered by a vague commitment to the
welfare of unidentified stakeholders (as though stakeholder
interests do not conflict). Finally, the world of AI ethics has grown
tremendously over the last five years or so. Corporations heard
the call, “We want AI ethics!” Their distorted response is, “Yes, we,
too, are for responsible AI!”

Ethical challenges don’t disappear via semantic legerdemain. We need to name our problems accurately if we are to address them
effectively. Does sustainability advise against using personal data
for the purposes of targeted marketing? When does using a black
box model violate ESG criteria? What happens if your mission of
connecting people also happens to connect white nationalists?

Let’s focus on the move from “AI ethics” to “responsible AI” as a
case study on the problematic impacts of shifting language. First,
when business leaders talk about “responsible” and “trustworthy”
AI, they focus on a broad set of issues that include cybersecurity,
regulation, legal concerns, and technical or engineering risks.
These are important, but the end result is that technologists,
general counsels, risk officers, and cybersecurity engineers focus
on areas they are already experts on, which is to say, everything
except ethics.

Second, when it comes to ethics, leaders get stuck at very high-level and abstract principles or values — on concepts such as
fairness and respect for autonomy. Since this is only a small part
of the overall “responsible AI” picture, companies often fail to
drill down into the very real, concrete ways these questions play
out in their products. Ethical nightmares that outstrip outdated
regulations and laws are left unidentified, and they remain just as probable as they were before a “responsible AI” framework was deployed.

Third, the focus on identifying and pursuing “responsible AI” gives companies a vague goal with vague milestones. AI
statements from organizations say things like, “We are for
transparency, explainability, and equity.” But no company is
transparent about everything with everyone (nor should it be);
not every AI model needs to be explainable; and what counts as
equitable is highly contentious. No wonder, then, that the
companies that “commit” to these values quickly abandon them.
There are no goals here. No milestones. No requirements. And
there’s no articulation of what failure looks like.

But when AI ethics fail, the results are specific. Ethical nightmares are vivid: “We discriminated against tens of
thousands of people.” “We tricked people into giving up all that
money.” “We systematically engaged in violating people’s
privacy.” In short, if you know what your ethical nightmares are
then you know what ethical failure looks like.

Understanding how emerging technologies work — what makes them tick — will help explain why the likelihood of realizing
ethical and reputational risks has massively increased. I’ll focus
on three of the most important ones.

Let’s start with a technology that has taken
over the headlines: artificial intelligence, or AI. The vast majority
of AI out there is machine learning (ML).

“Machine learning” is, at its simplest, software that learns by example. And just as people learn to discriminate on the basis of
race, gender, ethnicity, or other protected attributes by following
examples around them, software does, too.

Say you want to train your photo recognition software to recognize pictures of your dog, Zeb. You give that software lots of
examples and tell it, “That’s Zeb.” The software “learns” from
those examples, and when you take a new picture of your dog, it
recognizes it as a picture of Zeb and labels the photo “Zeb.” If it’s
not a photo of Zeb, it will label the file “not Zeb.” The process is
the same if you give your software examples of what “interview-
worthy” résumés look like. It will learn from those examples and
label new résumés as being “interview-worthy” or “not interview-
worthy.” The same goes for applications to university, or for a
mortgage, or for parole.

In each case, the software is recognizing and replicating patterns.
The problem is that sometimes those patterns are ethically
objectionable. For instance, if the examples of “interview-worthy”
résumés reflect historical or contemporary biases against certain
races, ethnicities, or genders, then the software will pick up on those biases. Amazon once built a résumé-screening AI that learned to penalize résumés indicating the applicant was a woman. And to determine
parole, the U.S. criminal justice system has used prediction
algorithms that replicated historical biases against Black
defendants.
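
To see how easily this happens, consider a deliberately simplified, hypothetical sketch. The data and feature names below are invented for illustration, and the model is an off-the-shelf classifier (scikit-learn’s logistic regression); the point is that a model trained on biased historical decisions reproduces the bias even though nothing in the code mentions gender.

```python
# A minimal, hypothetical sketch of "learning by example." The data below is
# invented for illustration: each résumé is reduced to two numbers, and the
# historical labels encode a biased pattern that the model then reproduces.
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, attended_a_womens_college]
# The second feature acts as a proxy for gender, loosely echoing the reported Amazon case.
X = [
    [5, 0], [7, 0], [6, 0], [8, 0],   # historically labeled "interview-worthy"
    [5, 1], [7, 1], [6, 1], [8, 1],   # equally experienced, historically rejected
]
y = [1, 1, 1, 1, 0, 0, 0, 0]          # the biased historical decisions

model = LogisticRegression().fit(X, y)

# Two résumés that differ only in the proxy feature get different outcomes:
print(model.predict([[6, 0], [6, 1]]))  # likely output: [1 0], the bias is replicated
```

The library and features don’t matter; what matters is that the software has no notion of fairness. It simply learns whatever pattern the examples contain.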

It’s crucial to note that the discriminatory pattern can be


identified and replicated independently of the intentions of the
data scientists and engineers programming the software. In fact,
data scientists at Amazon identified the problem with their AI
mentioned above and tried to fix it, but they couldn’t. Amazon
decided, rightly, to scrap the project. But had it been deployed, an
unwitting hiring manager would have used a tool with ethically
discriminatory operations, regardless of that person’s intentions
or the organization’s stated values.

Discriminatory impacts are just one ethical nightmare to avoid with AI. There are also privacy concerns, the danger of AI models
(especially large language models like ChatGPT) being used to
manipulate people, the environmental cost of the massive
computing power required, and countless other use-case-specific
risks.

The details of quantum computers are
exceedingly complicated, but for our purposes, we need to know
only that they are computers that can process a tremendous
amount of data. They can perform calculations in minutes or even
seconds that would take today’s best supercomputers thousands
of years. Companies like IBM and Google are pouring billions of
dollars into this hardware revolution, and we’re poised to see
increased quantum computer integration into classical computer
operations every year.

Quantum computers throw gasoline on a problem we see in
machine learning: the problem of unexplainable, or black box, AI.
Essentially, in many cases, we don’t know why an AI tool makes
the predictions that it does. When the photo software looks at all
those pictures of Zeb, it’s analyzing those pictures at the pixel
level. More specifically, it’s identifying all those pixels and the
thousands of mathematical relations among those pixels that
constitute “the Zeb pattern.” Those mathematical Zeb patterns
are phenomenally complex — too complex for mere mortals to
understand — which means that we don’t understand why it
(correctly or incorrectly) labeled this new photo “Zeb.” And while
we might not care about getting explanations in the case of Zeb, if
the software says to deny someone an interview (or a mortgage, or
insurance, or admittance) then we might care quite a bit.
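
For a rough sense of the scale involved, here is an illustrative sketch. The architecture is arbitrary, invented for this example, and far smaller than anything used in practice, but even this toy image classifier has millions of learned parameters, and its verdict on any single photo is a joint function of all of them.

```python
# An illustrative sketch of why pixel-level models resist human explanation.
# The architecture below is arbitrary; real photo-recognition models are far larger.
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),                   # a 128x128 color photo becomes 49,152 numbers
    nn.Linear(128 * 128 * 3, 256),  # every pixel value feeds every hidden unit
    nn.ReLU(),
    nn.Linear(256, 2),              # two outputs: "Zeb" vs. "not Zeb"
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} learned parameters")  # roughly 12.6 million for this tiny model
```

A printout of 12 million numbers is not an explanation that a loan applicant, a clinician, or a regulator can act on.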

Quantum computing makes black box models truly impenetrable. Right now, data scientists can offer explanations of an AI’s
outputs that are simplified representations of what’s actually
going on. But at some point, simplification becomes distortion.
And because quantum computers can process trillions of data
points, boiling that process down to an explanation we can
understand — while retaining confidence that the explanation is
more or less true — becomes vanishingly difficult.

That leads to a litany of ethical questions: Under what conditions can we trust the outputs of a (quantum) black box model? What
are the appropriate benchmarks for performance? What do we do
if the system appears to be broken or is acting very strangely? Do
we acquiesce to the inscrutable outputs of the machine that has
proven reliable previously? Or do we eschew those outputs in
favor of our comparatively limited but intelligible human
reasoning?

Suppose you and I and a few thousand of our friends each have a magical notebook with the following features: When
someone writes on a page, that writing simultaneously appears in
everyone else’s notebook. Nothing written on a page can ever be
erased. The information on the pages and the order of the pages are immutable; no one can remove or rearrange the pages. A private,
passphrase-protected page lists your assets — money, art, land
titles — and when you transfer an asset to someone, both your
page and theirs are simultaneously and automatically updated.

At a very high level, this is how blockchain works. Each blockchain follows a specific set of rules that are written into its code, and changes to those rules are decided by whoever runs
the blockchain. But just like any other kind of management, the
quality of a blockchain’s governance depends on answering a
string of important questions. For example: What data belongs on
the blockchain, and what doesn’t? Who decides what goes on it? What are the criteria for what is included? Who monitors it? What’s
the protocol if an error is found in the code of the blockchain?
Who makes decisions about whether a structural change should
be made to a blockchain? How are voting rights and power
distributed?
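
To make the notebook analogy slightly more concrete, here is a toy, illustrative sketch of an append-only, hash-chained ledger. It is not a real blockchain (there is no network, no consensus mechanism, and no cryptographic signing), but it shows two things the governance questions above turn on: the rules live in code, and once an entry is written, tampering with it is detectable rather than concealable.

```python
# A toy, illustrative sketch of an append-only, hash-chained ledger (not a real
# blockchain: no network, no consensus, no signatures). Each entry embeds the hash
# of the previous entry, so rewriting history breaks the chain detectably.
import hashlib
import json

def add_entry(ledger, record):
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append({**body, "hash": digest})

def is_intact(ledger):
    # Recompute every hash; editing any earlier entry invalidates the chain.
    for i, entry in enumerate(ledger):
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        if entry["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and entry["prev_hash"] != ledger[i - 1]["hash"]:
            return False
    return True

ledger = []
add_entry(ledger, {"from": "alice", "to": "bob", "asset": "land title #7"})
add_entry(ledger, {"from": "bob", "to": "carol", "asset": "land title #7"})
print(is_intact(ledger))               # True
ledger[0]["record"]["to"] = "mallory"  # try to rewrite history
print(is_intact(ledger))               # False: the tampering is detectable
```

Notice that even in this toy version, fixing a mistake is constrained by the design: you cannot quietly rewrite an earlier entry; you can only append a correction, and only if the ledger’s rules permit one.
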
Bad governance in blockchain can lead to nightmare scenarios, like people losing their savings, having information about themselves disclosed against their will, or having false information loaded onto their asset pages that enables deception and fraud.

Blockchain is most often associated with financial services, but every industry stands to integrate some kind of blockchain
solution, each of which comes with particular pitfalls. For
instance, we might use blockchain to store, access, and distribute
patient data, the inappropriate handling of which could lead to the ethical nightmare of widespread privacy
violations. Things seem even more perilous when we recognize
that there isn’t just one type of blockchain, and that there are
different ways of governing a blockchain. And because the basic
rules of a given blockchain are very hard to change, early
decisions about what blockchain to develop and how to maintain
it are extremely important.

Companies’ ability to adopt and use these technologies as they evolve will be essential to staying competitive. As such, leaders will have to ask and answer some difficult questions.

Why does this responsibility fall to business leaders as opposed to,
say, the technologists who are tasked with deploying the new
tools and systems? After all, most leaders aren’t fluent in the
coding and the math behind software that learns by example, the
quantum physics behind quantum computers, and the
cryptography that underlies blockchain. Shouldn’t the experts be
in charge of weighty decisions like these?

The thing is, these aren’t technical questions — they’re ethical, qualitative ones. They are exactly the kinds of problems that
business leaders — guided by relevant subject matter experts —
are charged with answering. Off-loading that responsibility to
coders, engineers, and IT departments is unfair to the people in
those roles and unwise for the organization. It’s understandable
that leaders might find this task daunting, but there’s no question
that they’re the ones responsible.

I’ve tried to convince you of three claims. First, that leaders and
organizations need to explicitly identify their ethical nightmares
springing from new technologies. Second, that a significant source of
risk lies in how these technologies work. And third, that it’s the
job of senior executives to guide their respective organizations on
ethics.

These claims fund a conclusion: Organizations that leverage digital technologies need to address ethical nightmares before
they hurt people and brands. I call this the “ethical nightmare
challenge.” To overcome it, companies need to create an
enterprise-wide digital ethical risk program. The first part of the
program — what I call the content side — asks: What are the
ethical nightmares we’re trying to avoid, and what are their
potential sources? The second part of the program — what I call
the structure side — answers the question: How do we
systematically and comprehensively ensure those nightmares
don’t become a reality?

Ethical nightmares can be articulated with varying
levels of detail and customization. Your ethical nightmares are
partly informed by the industry you’re in, the kind of organization
you are, and the kinds of relationships you need to have with your
clients, customers, and other stakeholders for things to go well.
For instance, if you’re a health care provider that has clinicians
using ChatGPT or another LLM to make diagnoses and treatment
recommendations, then your ethical nightmare might include
widespread false recommendations that your people lack the
training to spot. Or if your chatbot is undertrained on information
related to particular races and ethnicities, and neither the
developers of the chatbot nor the clinicians know this, then your
ethical nightmare would be systematically giving false diagnoses
and bad treatments to those who have already been discriminated
against. If you’re a financial services company that uses
blockchain to transact on behalf of clients, then one ethical
nightmare might be the absence of an ability to correct mistakes
in the code — a function of ill-defined governance of the
blockchain. That could mean, for instance, being unable to call
back fraudulent transfers.

Notice that articulating nightmares means naming details and consequences. The more specific you can get — which is a
function of your knowledge of the technologies, your industry,
your understanding of the various contexts in which your
technologies will be deployed, your moral imagination, and your
ability to think through the ethical implications of business
operations — the easier it will be to build the appropriate
structure to control for these things.

While the methods for identifying the nightmares hold
across organizations, the strategies for creating appropriate
controls vary depending on the size of the organization, existing
governance structures, risk appetites, management culture, and
more. Companies’ forays into this realm can be classified as
either formal or informal. In an ideal world, every organization
would take the formal approach. However, factors like limited
time and resources, the rate at which a company (truly or falsely)
believes it will be impacted by digital technologies, and business
necessities in an unpredictable market sometimes make it
reasonable to choose the informal approach. In those cases, the
informal approach should be seen as a first step, and better than
nothing at all.

The formal approach is systematic and comprehensive, and it takes a good deal of time and resources to build. In short, it
centers around the ability to create and execute on an enterprise-
wide digital ethical risk strategy. Broadly speaking, it involves
four steps.

Education and alignment. First, all senior leaders need to understand the technologies enough that they can agree on what
constitutes the ethical nightmares of the organization. Knowledge
and the alignment of leaders are prerequisites for building and
implementing a robust digital ethical risk strategy.

This education can be achieved by executive briefings, workshops, and seminars. But it should not require — or try to
teach — math or coding. This process is for non-technologists and
technologists alike to wrap their heads around what risks their
company may face. Moreover, it must be about the ethical
nightmares of the organization, not sustainability or ESG criteria
or “company values.”

Gap and feasibility analyses. Before building a strategy, leaders
need to know what their organization looks like and what the
probability is of their nightmares actually happening. As such, the
second step consists of performing gap and feasibility analyses of
where your organization is now; how far away it is from
sufficiently safeguarding itself from an ethical nightmare
unfolding; and what it will take in terms of people, processes, and
technology to close those gaps.

To do this, leaders must identify where their digital technologies are and where they will likely be designed or procured within
their organization. Because if you don’t know how the
technologies work, how they’re used, or where they’re headed,
you’ll have no hope of avoiding the nightmares.

Then a variety of questions present themselves.

The answers to questions like these will vary wildly across organizations. It’s one reason why digital ethical risk strategies
are difficult to create and implement: They must be customized to
integrate with existing governance structures, policies, processes,
workflows, tools, and personnel. It’s easy to say “everyone needs a
digital ethical risk board,” in the model of the institutional review
boards that arose in medicine to mitigate the ethical risks around
research on human subjects. But it’s not possible to continue with
“and every one of them should look like this, act like this, and interact with other groups in the business like this.” Here, good strategy
does not come from a one-size-fits-all solution.

Strategy creation. The third step in the formal approach is building a corporate strategy in light of the gap and feasibility
analyses. This includes, among other things, refining goals and
objectives, deciding on an approach to metrics and KPIs (for
measuring both compliance with the digital ethical risk program
and its impact), designing a communications plan, and
identifying key drivers of success for implementation.

Cross-functional involvement is needed. Leaders from technology, risk, compliance, general counsel, and cybersecurity
should all be involved. Just as important, direction should come
from the board and the CEO. Without their robust buy-in and
encouragement, the program will get watered down.

Implementation. The fourth and final step is implementation of the strategy, which entails reconfiguring workflows, training,
support, and ongoing monitoring, including quality assurance
and quality improvement.

For example, new procedures should be customized by business domain or by roles to harmonize them with existing procedures
and workflows. These procedures should clearly define the roles
and responsibilities of different departments and individuals and
establish clear processes for identifying, reporting, and
addressing ethical issues. Additionally, novel workflows need to
seek an optimal balance of human-computer interaction, which
will depend on the kinds of tasks and the relative risks involved,
and establish human oversight of automated flows.

The informal approach, by contrast, usually involves the following endeavors: leaders providing education on, and alignment around, ethical nightmares; entrusting executives in distinct
units of the business (such as HR, marketing, product lines, or
R&D) with identifying the processes needed to complete an
ethical nightmare check; and creating or leveraging an existing
(ethical) risk board to advise various personnel — either on
individual projects or at a more institutional scale — when ethical
risks are detected.

This approach does not require official changes to policies, harmonization and integration among departments, official
changes in governance structure, or similar actions. While it can
pack a powerful punch, it’s neither systematic nor
comprehensive. Ethical risks may fall through the cracks and
land on the front page.

In my work, I’ve found that the vast majority of organizations are operated and led by good people who do not intend to harm
anyone. But I’ve also noticed that “ethics” is a word most
companies are uncomfortable using. It is considered subjective or
“squishy,” and outside the purview of business.

Both these views are incorrect. Invading people’s privacy, automating discrimination at scale, undermining democracy,
putting children in harm’s way, and breaching people’s trust are
decidedly clear-cut. They’re ethical nightmares that pretty much
everyone can agree on.

Instead of understanding their roles in all of this, and trying to prevent it, many leaders remain focused on business as usual:
Roles and responsibilities are fixed. Quarterly reports are due.
Shareholders are watching. People have day jobs; they can’t be
guardians of the moral galaxy as well. In many ways, it’s not evil
but standard operating procedure that is the enemy of the good,
or at least of the not bad. The tools of yesterday might have
required malicious intent by those who wielded them to wreak
havoc, but today’s tools require no such thing.

If you give people the opportunity, the breathing room, to do the
right thing, they’ll do it happily. Creating that opportunity means
not only permitting but encouraging or requiring people to talk
the language of ethical nightmares. Make it a priority. Weave it
into existing enterprise strategy. Ensure that everyone in the
organization can tell you its nightmares and can rattle off five or
six things the company does at the everyday operational level to
make sure they don’t happen.

Leaders need to understand that developing a digital ethical risk strategy is well within their capabilities. Most employees and
consumers want organizations to have a digital ethical risk
strategy. Management should not shy away.

Reid Blackman is the author of Ethical Machines (Harvard Business Review Press,
2022), the host of a podcast by the same name,
and the founder and CEO of Virtue, a digital
ethical risk consultancy. He advises the
government of Canada on federal AI
regulations and corporations on how to
implement digital ethical risk programs. He
has been a senior adviser to the Deloitte AI
Institute, served on Ernst & Young’s AI
Advisory Board, and volunteers as the chief
ethics officer to the nonprofit Government
Blockchain Association. Previously he was a
professor of philosophy at Colgate University
and the University of North Carolina,
Chapel Hill.
