
What are deepfakes used for?
The clearest threat that deepfakes pose right now is to women: nonconsensual pornography accounts for 96 percent of the deepfakes currently deployed on the Internet. Most target celebrities, but there is a growing number of reports of deepfakes being used to create fake revenge porn, says Henry Ajder, head of research at the detection firm Deeptrace, in Amsterdam.

But women won’t be the sole targets of bullying. Deepfakes may well enable
bullying more generally, whether in schools or workplaces, as anyone can
place people into ridiculous, dangerous, or compromising scenarios.

Corporations worry about the role deepfakes could play in supercharging scams. There have been unconfirmed reports of deepfake audio being used in CEO scams to swindle employees into sending money to fraudsters. Extortion could become a major use case. Identity fraud was the top worry regarding deepfakes for more than three-quarters of respondents to a cybersecurity industry poll by the biometric firm iProov. Respondents’ chief concerns were that deepfakes would be used to make fraudulent online payments and hack into personal banking services.

For governments, the bigger fear is that deepfakes pose a danger to democracy. If you can make a female celebrity appear in a porn video, you can do the same to a politician running for reelection. In 2018, a video surfaced of João Doria, the governor of São Paulo, Brazil, who is married, apparently participating in an orgy. He insisted it was a deepfake. There have been other examples. In 2018, the president of Gabon, Ali Bongo, who was long presumed unwell, appeared in a suspicious video to reassure the population, sparking an attempted coup.

The ambiguity around these unconfirmed cases points to the biggest danger of deepfakes, whatever their current capabilities: the liar’s dividend, a fancy way of saying that the very existence of deepfakes provides cover for anyone to do anything they want, because they can dismiss any evidence of wrongdoing as a deepfake. It’s one-size-fits-all plausible deniability. “That is something you are absolutely starting to see: that liar’s dividend being used as a way to get out of trouble,” says Farid.

How do we stop malicious deepfakes?
Several U.S. laws regarding deepfakes have taken effect over the past year.
States are introducing bills to criminalize deepfake pornography and prohibit
the use of deepfakes in the context of an election. Texas, Virginia, and
California have criminalized deepfake porn, and in December, the president
signed the first federal law as part of the National Defense Authorization Act.
But these new laws only help when a perpetrator lives in one of those
jurisdictions.

Outside the United States, however, the only countries taking specific action to prohibit deepfake deception are China and South Korea. In the United Kingdom, the Law Commission is currently reviewing existing revenge-porn laws with an eye to addressing the different ways deepfakes can be created. The European Union, however, doesn’t appear to see this as an imminent issue compared with other kinds of online misinformation.

So while the United States is leading the pack, there’s little evidence that the
laws being put forward are enforceable or have the correct emphasis.

And while many research labs have developed novel ways to identify and
detect manipulated videos—incorporating watermarks or a blockchain, for
example—it’s hard to make deepfake detectors that are not immediately
gamed in order to create more convincing deepfakes.
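The watermarking approach mentioned above can be illustrated at its simplest as cryptographic fingerprinting: record a keyed digest of the media when it is captured, and any later edit, deepfake or otherwise, will fail verification. The sketch below is a toy illustration of that general idea, not any particular product's scheme; all names and keys are made up, and real systems embed the signature in the media or anchor digests on a blockchain.

```python
import hmac
import hashlib

def fingerprint(media_bytes: bytes, signing_key: bytes) -> bytes:
    """Compute a keyed digest of the media at capture time.

    Publishing this digest (or anchoring it on a blockchain) lets
    anyone later check that the file is bit-for-bit unaltered.
    """
    return hmac.new(signing_key, media_bytes, hashlib.sha256).digest()

def is_authentic(media_bytes: bytes, signing_key: bytes, digest: bytes) -> bool:
    """True only if the media matches the digest recorded at capture."""
    return hmac.compare_digest(fingerprint(media_bytes, signing_key), digest)

# Hypothetical usage: a newsroom fingerprints footage when it is shot.
original = b"raw video frames..."
key = b"newsroom-signing-key"
digest = fingerprint(original, key)
```

Note the limitation this makes visible: fingerprinting can prove a file is unaltered, but it says nothing about media that was never signed in the first place, which is why detection research continues in parallel.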

Still, tech companies are trying. Facebook recruited researchers from Berkeley, Oxford, and other institutions to build a deepfake detector and help it enforce its new ban. Twitter also made big changes to its policies, going one step further and reportedly planning ways to tag any deepfakes that are not removed outright. And YouTube reiterated in February that it will not allow deepfake videos related to the U.S. election, voting procedures, or the 2020 U.S. census.

But what about deepfakes outside these walled gardens? Two programs,
called Reality Defender and Deeptrace, aim to keep deepfakes out of your life.
Deeptrace works on an API that will act like a hybrid antivirus/spam filter,
prescreening incoming media and diverting obvious manipulations to a
quarantine zone, much like how Gmail automatically diverts spam before it
reaches your inbox. Reality Defender, a platform under construction by the
company AI Foundation, similarly hopes to tag and bag manipulated images
and video before they can do any damage. “We think it’s really unfair to put
the responsibility of authenticating media on the individual,” says Ajder.
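The spam-filter analogy above can be sketched as a simple routing function: score incoming media with a detector and divert anything over a threshold to quarantine. This is a minimal illustration of the architecture, not Deeptrace's or Reality Defender's actual API; the function names, the threshold, and the placeholder detector are all invented for the example.

```python
# Illustrative threshold: media scoring at or above this is quarantined.
QUARANTINE_THRESHOLD = 0.8

def detect_manipulation_score(media: bytes) -> float:
    """Stand-in for a real deepfake detector.

    A production system would run a trained model here; this placeholder
    merely flags files carrying a fake "manipulated" marker, for demo only.
    """
    return 0.95 if b"manipulated" in media else 0.05

def prescreen(media: bytes) -> str:
    """Route incoming media to the inbox or a quarantine zone,
    the way a spam filter diverts suspicious mail before you see it."""
    score = detect_manipulation_score(media)
    return "quarantine" if score >= QUARANTINE_THRESHOLD else "inbox"
```

The design choice worth noting is that screening happens before delivery, so the burden of judging authenticity never falls on the recipient, which is exactly the unfairness Ajder describes.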
