Deepfakes and The New Disinformation War
A picture may be worth a thousand words, but there is nothing
that persuades quite like an audio or video recording of an
event. At a time when partisans can barely agree on facts,
such persuasiveness might seem as if it could bring a welcome clarity.
Audio and video recordings allow people to become firsthand witnesses
of an event, sparing them the need to decide whether to trust someone
else’s account of it. And thanks to smartphones, which make it easy to
capture audio and video content, and social media platforms, which
allow that content to be shared and consumed, people today can rely
on their own eyes and ears to an unprecedented degree.
Therein lies a great danger. Imagine a video depicting the Israeli
prime minister in private conversation with a colleague, seemingly
revealing a plan to carry out a series of political assassinations in Tehran.
Or an audio clip of Iranian officials planning a covert operation to kill
Sunni leaders in a particular province of Iraq. Or a video showing an
American general in Afghanistan burning a Koran. In a world already
primed for violence, such recordings would have a powerful potential
for incitement. Now imagine that these recordings could be faked using
tools available to almost anyone with a laptop and access to the Internet—
and that the resulting fakes are so convincing that they are impossible
to distinguish from the real thing.
Advances in digital technology could soon make this nightmare a
reality. Thanks to the rise of “deepfakes”—highly realistic and difficult-to-detect digital manipulations of audio or video—it is becoming easier than ever to portray someone saying or doing something he or she never said or did.
ROBERT CHESNEY is James A. Baker III Chair and Director of the Robert Strauss Center
for International Security and Law at the University of Texas at Austin.
DANIELLE CITRON is Morton and Sophia Macht Professor of Law at the University of
Maryland and Affiliate Fellow at the Yale Information Society Project.
148 FOREIGN AFFAIRS
[Photo caption: True lies: stills of a deepfake video of Barack Obama created by researchers in 2017]
Deepfake technology has benign uses as well; synthetic audio, for instance, can restore speech to people who have lost their voice to disease. But deepfakes can and will be used for darker purposes,
as well. Users have already employed deepfake technology to insert
people’s faces into pornography without their consent or knowledge,
and the growing ease of making fake audio and video content will
create ample opportunities for blackmail, intimidation, and sabotage.
The most frightening applications of deepfake technology, however, may
well be in the realms of politics and international affairs. There, deep-
fakes may be used to create unusually effective lies capable of inciting
violence, discrediting leaders and institutions, or even tipping elections.
Deepfakes have the potential to be especially destructive because
they are arriving at a time when it is already becoming harder to
separate fact from fiction. For much of the twentieth century, maga-
zines, newspapers, and television broadcasters managed the flow of
information to the public. Journalists established rigorous professional
standards to control the quality of news, and the relatively small
number of mass media outlets meant that only a limited number of
individuals and organizations could distribute information widely.
Over the last decade, however, more and more people have begun to
get their information from social media platforms, such as Facebook
and Twitter, which depend on a vast array of users to generate rela-
tively unfiltered content. Users tend to curate their experiences so
that they mostly encounter perspectives they already agree with (a
tendency heightened by the platforms’ algorithms), turning their social
media feeds into echo chambers. These platforms are also susceptible
to so-called information cascades, whereby people pass along informa-
tion shared by others without bothering to check if it is true, making
it appear more credible in the process. The end result is that falsehoods
can spread faster than ever before.
These dynamics will make social media fertile ground for circulat-
ing deepfakes, with potentially explosive implications for politics.
Russia’s attempt to influence the 2016 U.S. presidential election—
spreading divisive and politically inflammatory messages on Face-
book and Twitter—already demonstrated how easily disinformation
can be injected into the social media bloodstream. The deepfakes of
tomorrow will be more vivid and realistic and thus more shareable
than the fake news of 2016. And because people are especially prone
to sharing negative and novel information, the more salacious the
deepfakes, the better.
DEMOCRATIZING FRAUD
The use of fraud, forgery, and other forms of deception to influence
politics is nothing new, of course. When the USS Maine exploded in
Havana Harbor in 1898, American tabloids used misleading accounts
of the incident to incite the public toward war with Spain. The anti-
Semitic tract Protocols of the Elders of Zion, which described a fictional
Jewish conspiracy, circulated widely during the first half of the twentieth
century. More recently, technologies such as Photoshop have made
doctoring images as easy as forging text. What makes deepfakes un-
precedented is their combination of quality, applicability to persuasive
formats such as audio and video, and resistance to detection. And as
deepfake technology spreads, an ever-increasing number of actors will
be able to convincingly manipulate audio and video content in a way
that once was restricted to Hollywood studios or the most well-funded
intelligence agencies.
Deepfakes will be particularly useful to nonstate actors, such as
insurgent groups and terrorist organizations, which have histori-
cally lacked the resources to make and disseminate fraudulent yet
credible audio or video content. These groups will be able to de-
pict their adversaries—including government officials—spouting
inflammatory words or engaging in provocative actions, with the
specific content carefully chosen to maximize the galvanizing impact on their target audiences.
DEEP FIX
There is no silver bullet for countering deepfakes. There are several
legal and technological approaches—some already existing, others
likely to emerge—that can help mitigate the threat. But none will
overcome the problem altogether. Instead of full solutions, the rise of
deepfakes calls for resilience.
Three technological approaches deserve special attention. The first
relates to forensic technology, or the detection of forgeries through
technical means. Just as researchers are putting a great deal of time
and effort into creating credible fakes, so, too, are they developing
methods of enhanced detection. In June 2018, computer scientists at
Dartmouth and the University at Albany, SUNY, announced that they
had created a program that detects deepfakes by looking for abnormal
patterns of eyelid movement when the subject of a video blinks. In the
deepfakes arms race, however, such advances serve only to inform
the next wave of innovation. In the future, generative adversarial networks (GANs), the machine-learning systems used to produce deepfakes, will be fed training
videos that include examples of normal blinking. And even if ex-
tremely capable detection algorithms emerge, the speed with which
deepfakes can circulate on social media will make debunking them an
uphill battle. By the time the forensic alarm bell rings, the damage
may already be done.
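The blink-based approach described above can be illustrated with a simple sketch. The code below is not the Dartmouth/Albany researchers' program; it is a hypothetical, simplified version of the same idea, assuming an upstream face-landmark detector has already produced a per-frame "eye openness" score for a video. The thresholds and the 4-blinks-per-minute floor are illustrative assumptions, not published values.

```python
# Hypothetical sketch of blink-rate forensics: count blinks in a
# per-frame eye-openness signal and flag clips whose subject blinks
# far less often than a typical person (roughly 15-20 times a minute).

def count_blinks(eye_openness, closed_threshold=0.2):
    """Count transitions from open eyes to closed eyes."""
    blinks = 0
    was_open = True
    for value in eye_openness:
        if was_open and value < closed_threshold:
            blinks += 1          # eye just closed: one blink begins
            was_open = False
        elif value >= closed_threshold:
            was_open = True      # eye reopened
    return blinks

def looks_synthetic(eye_openness, fps=30, min_blinks_per_minute=4):
    """Flag a clip whose blink rate falls below a (assumed) human floor."""
    minutes = len(eye_openness) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(eye_openness) / minutes
    return rate < min_blinks_per_minute
```

On this sketch, a 60-second clip whose subject never blinks would be flagged, while one with a blink every few seconds would pass, which is precisely why retraining GANs on footage with normal blinking defeats the check.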
A second technological remedy involves authenticating content
before it ever spreads—an approach sometimes referred to as a “digital
provenance” solution. Companies such as Truepic are developing
ways to digitally watermark audio, photo, and video content at the
moment of its creation, using metadata that can be logged immutably
on a distributed ledger, or blockchain. In other words, one could effectively stamp content with a record of authenticity at the moment of capture, against which any suspected fake could later be checked.
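The mechanics of such a provenance scheme can be sketched in a few lines. This is not Truepic's actual system or API; it is a hypothetical illustration in which an in-memory list stands in for the distributed ledger, and a capture device registers a cryptographic hash of each file at the moment of creation so that any later copy can be checked against it.

```python
# Hypothetical digital-provenance sketch: hash media at capture time,
# record the hash and metadata in an append-only log (standing in for
# a blockchain), and verify suspect content against the log later.

import hashlib
import time

LEDGER = []  # stand-in for an immutable, distributed ledger

def register_capture(content: bytes, device_id: str) -> str:
    """Log a SHA-256 fingerprint of the content at the moment of capture."""
    digest = hashlib.sha256(content).hexdigest()
    LEDGER.append({"sha256": digest, "device": device_id, "ts": time.time()})
    return digest

def verify(content: bytes) -> bool:
    """True only if this exact byte-for-byte content was registered."""
    digest = hashlib.sha256(content).hexdigest()
    return any(entry["sha256"] == digest for entry in LEDGER)
```

Because any alteration to the file changes its hash, a doctored version of a registered video would fail verification; the hard part in practice is getting capture devices and platforms to adopt the scheme, not the cryptography.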