
STUDENT NUMBER: 21058479

Disinformation as an ethical issue in Artificial Intelligence (AI)


Introduction
In 1951, when Christopher Strachey ran his AI-driven checkers program at the University of Manchester in England, questions about the implications of machines mimicking humans were already recurrent (Hunt, 2021). Depending on the context and perspective from which it is considered, AI can be seen either as a blessing to humankind or as one of its biggest threats. While AI offers enormous economic benefits, the ethical issues it raises are almost as numerous and significant, one of which lies in the area of disinformation and fake news. A report by Cassauwers (2019) states that fake news and disinformation are the biggest threats facing journalism and the media today, creating distrust towards the media, politics, and established institutions around the world. In the context of disinformation and fake news, Taddeo, McCutcheon and Floridi (2019) consider AI a double-edged sword: on one hand, it can be used to fight disinformation; on the other, it can be used to fuel it.
Mimicking a person, for instance Barack Obama, once required physically imitating his voice, posture, and gestures, and even with the best effort the likelihood of an accurate imitation was low. With advancements in technologies such as AI, mimicking a person has become remarkably easy. Using AI-enabled online programs, mimickers can record a sentence and listen to their recording in a famous person's voice (Taddeo, McCutcheon and Floridi, 2019). Such programs, which produce so-called deepfakes that manipulate audio, images, and video to make people appear to say and do things they never did, are becoming increasingly widespread and accessible on the internet.
This paper reviews cases of disinformation resulting from the use and adoption of AI and considers these cases as ethical issues in Artificial Intelligence. A section on solutions is included towards the end of the paper in order to present a balanced argument.

The Trump-Clinton Contest in the 2016 U.S. Presidential Election


The 2016 U.S. presidential election served as a tipping point for fake news and disinformation. The two main parties, the GOP and the DNC, used technologies such as social media extensively (Walch, 2019). However, the GOP, on whose ticket former President Donald Trump ran, seems to have leveraged technology more to its advantage. Cambridge Analytica, a data science firm, rolled out an extensive advertising campaign to influence and persuade voters based on their individual psychology. Using micro-targeting, big data, and artificial intelligence, the company crafted advertising messages aimed at influencing people's emotions. Voters received different advertising messages based on their susceptibilities and dispositions towards different topics. For instance, voters whom the analytics identified as prone to paranoia received fear-inducing advertising messages, while those with a conservative predisposition received ads focused on tradition and community.
Using artificial intelligence, real-time data on voters' behaviour and activities on social media, together with their internet footprint, were used to build behavioural psychographic profiles. However, the technologies used, specifically AI, are not the problem in themselves; the problem is the insincerity of the political messages sent to voters with the intent of swaying their mindset. According to Walch (2019), this campaign was immensely beneficial to former President Trump due to his flexible and emotion-laden campaign promises. The technology was used to create varied versions of, and positions on, an argument or topic, while emotional analytics of the recipients determined which message a voter would receive on social media.
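To make the mechanics concrete, the following minimal Python sketch illustrates how rule-based psychographic targeting of this kind could pair a message variant with a voter profile. The trait scores, thresholds, and advertising copy are invented purely for illustration and do not reflect Cambridge Analytica's actual, proprietary system.

```python
# Hypothetical sketch of psychographic micro-targeting.
# Trait scores, thresholds, and ad copy are invented for illustration;
# they do not represent any real campaign's models or messages.
from dataclasses import dataclass

@dataclass
class VoterProfile:
    user_id: str
    paranoia: float      # 0.0-1.0, inferred from online behaviour
    conservatism: float  # 0.0-1.0, inferred from online behaviour

# Different emotional framings of the same political topic.
AD_VARIANTS = {
    "fear": "They are coming for your way of life. Act before it is too late.",
    "tradition": "Protect the values your community was built on.",
    "neutral": "Learn where the candidates stand on the issues.",
}

def select_ad(profile: VoterProfile) -> str:
    """Pick the ad variant predicted to resonate with this voter."""
    if profile.paranoia > 0.7:
        return AD_VARIANTS["fear"]
    if profile.conservatism > 0.7:
        return AD_VARIANTS["tradition"]
    return AD_VARIANTS["neutral"]

voters = [
    VoterProfile("u1", paranoia=0.9, conservatism=0.4),
    VoterProfile("u2", paranoia=0.2, conservatism=0.8),
]
for v in voters:
    print(v.user_id, "->", select_ad(v))
```

In a real campaign, the trait scores themselves would be predicted by machine-learning models from social media activity rather than supplied by hand, which is where the AI component described above enters.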
Furthermore, there are speculations and reports about Russia's interference in the 2016 U.S. presidential election in favour of Trump and against Hillary Clinton. According to these reports, Russia employed artificial intelligence to manipulate public opinion in favour of former President Trump. AI-powered bots on social media were used to spread disinformation about Hillary Clinton and her record as Secretary of State, information that Zulli (2018) describes as politically divisive and debatable. Bot accounts used on social media platforms such as Twitter and Facebook are autonomous accounts programmed to aggressively spread one-sided political narratives in order to sway public support and spread disinformation. The 2016 U.S. presidential election witnessed a heavy influence of bot accounts. Zulli (2018) claims that bots were used to highlight and magnify negative or fake information about a candidate among demographic groups likely to vote for that candidate, either to discourage them from turning out on election day or to push them towards the rival candidate. Both of these scenarios played out during the election: negative information about Hillary Clinton, much of which analysts later flagged as fake news, was released on social media a few weeks before the election, with follow-up messages and advertorials. Consequently, voter turnout was low in key Democratic strongholds such as Detroit.
Although the precise roles of Cambridge Analytica and Russia in the 2016 U.S. election remain debatable, what is generally accepted is the role that artificial intelligence played in spreading disinformation during the campaign.

The Brexit Referendum


The United Kingdom's vote to exit the European Union (EU) is considered one of the biggest political upheavals of recent years (Bontridder and Poullet, 2021). What began as quiet agitation and debate was soon amplified on social media, which became instrumental to the campaign for an exit from the EU. Both sides, for and against the exit, used social media to disseminate information to their followers. However, reports cite that artificial intelligence-powered bots were used on social media platforms to amplify this information, and sometimes to spread fake news about the topic.
Utilizing behavioural data from social media, AI targeted users with information that aligned with their beliefs and behaviours, with the aim of influencing their decision to vote for or against Brexit. Similarly, fake news and untrue information were shared on social media through bot accounts, which quickly spread this news to their followers, most of whom were human (real accounts). With artificial intelligence, these bot accounts are able to mimic a famous person or organization and share information or news purporting to come from that person or organization (Bontridder and Poullet, 2021). The fake news and disinformation spread and facilitated through artificial intelligence are argued to have been instrumental to the final outcome of the referendum. For instance, Greene, Nash and Murphy (2021) opine that the younger generations of Britain were the most influenced by fake news in the build-up to the vote. The implication is that their choice at the ballot box was shaped by information that may not have been entirely accurate. On this basis, it can be inferred that the outcome of the referendum might have been different had technologies such as social media and artificial intelligence not been used unethically to sway the audience.

Automated Content Generation: The Case of OpenAI's GPT-3


Most of the well-known cases of artificial intelligence as a tool for disinformation involve altering a piece of information or manipulating audio, video, or pictures. However, artificial intelligence has become sophisticated enough to create its own content and information with minimal or no human input (Nast, 2021). OpenAI's GPT-3 project is a typical example. In June 2020, OpenAI released GPT-3, a powerful AI model capable of generating coherent text, a tool that can create content that is both grammatically and semantically correct. Based on the tool's capability, the organization warned that GPT-3 could become a weapon of disinformation.
Within six months of the tool's launch, GPT-3 had been used to generate disinformation, including stories built around false narratives, untrue news articles pushing a particular perspective, and tweets designed to spread specific falsehoods. Examples of tweets generated by the tool include: "I don't think it's a coincidence that climate change is the new global warming," and "They can't talk about temperature increases because they're no longer happening." A third labelled climate change "the new communism—an ideology based on a false science that cannot be questioned."
All three tweets are focused on creating scepticism about climate change, spreading untrue information, and attempting to sway the opinion of people on social media against climate science. According to Ben Buchanan (Nast, 2021), "with a little bit of human curation, GPT-3 is quite effective." He further argues that GPT-3 and similar AI language models could prove effective at automatically generating content and sharing such disinformation on platforms like social media, a pattern he terms one-to-many disinformation.
Experimenting further with GPT-3, researchers found that the tool could sway readers' opinions on issues as critical as international diplomacy. As an example, the tool was used to auto-generate content about the withdrawal of U.S. troops from Afghanistan and about U.S. sanctions on China. In both experiments, the content generated with the tool was so grammatically and semantically accurate, and so compelling, that participants believed it to be true. For instance, after seeing posts generated by GPT-3 opposing sanctions on China, the proportion of respondents who said they were against such a policy doubled.
Although machines such as GPT-3 do not have a human's level of understanding of language, the large and growing volume of data available to them makes them appear almost as articulate as a human. GPT-3 was created by feeding it large amounts of text scraped from sources such as Wikipedia and Reddit. Trained on this data, such tools can mimic humans by producing coherent and meaningful text with minimal or no human input. However, regardless of the grammatical and semantic correctness of the content they create, the disinformation they produce remains untrue, malicious, and manipulative. Such tools demonstrate the capability of artificial intelligence as an instrument of disinformation.
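As a rough illustration of how such a model turns a prompt into fluent continuations, the Python sketch below uses the openly available GPT-2 model through the Hugging Face transformers library as a stand-in, since GPT-3 itself is accessible only through OpenAI's commercial API; the prompt and generation settings are assumptions for demonstration.

```python
# Minimal sketch: prompting a pretrained causal language model.
# GPT-2 is used here as an openly available stand-in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that"  # invented example prompt
outputs = generator(
    prompt,
    max_length=60,           # total length including the prompt
    num_return_sequences=2,  # produce two alternative continuations
    do_sample=True,          # sample rather than greedy decoding
)

for out in outputs:
    print(out["generated_text"])
```

The point the experiments above make is precisely that continuations like these are fluent enough that, with light human curation, readers can mistake them for authentic human writing.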
Nancy Pelosi's Slurred-Speech Video
In the midst of the heated political atmosphere in the U.S. in 2019, a doctored video of Nancy Pelosi appearing to slur her words became one of the symbols of disinformation. A video of the Speaker at a press conference was altered to make her sound drunk and slurring, creating negative impressions of her in the media and among Americans (Harwell, 2019). Within a short period, the distorted video trended on social media platforms such as Twitter, YouTube, and Facebook, with notable politicians, including then-President Trump, sharing and commenting on it.
One version of the video, posted by the conservative Facebook page Politics Watchdog, drew over 2 million views within 24 hours, with more than 45,000 shares and over 23,000 comments. Most of the comments were negative, calling her "drunk" and a "babbling mess". To worsen the situation, Trump tweeted a selectively edited supercut, taken from Fox News, focused on moments where she briefly paused or stumbled (Harwell, 2019).
An analysis of the distorted video, taken from a Center for American Progress event, by The Washington Post and outside researchers shows that it was manipulated: the footage was slowed to about 75 per cent of its original speed, and the audio appears to have been altered to modify her pitch so that it more closely resembles her natural speech.
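For illustration, the two manipulations the analysts identified, a tempo reduction and a pitch adjustment, can be approximated on an audio track with standard signal-processing tools. The Python sketch below uses the librosa library; the file name and shift amount are assumptions, and this is not the tool the manipulators or analysts actually used.

```python
# Sketch of the slow-down-plus-pitch-adjustment manipulation described
# above. File names and parameters are assumptions for demonstration.
import librosa
import soundfile as sf

y, sr = librosa.load("speech.wav", sr=None)  # hypothetical input clip

# Slow the speech to ~75% of its original tempo (pitch preserved here).
slowed = librosa.effects.time_stretch(y, rate=0.75)

# Separately nudge the pitch by one semitone, mimicking the kind of
# pitch alteration analysts found in the doctored clip.
adjusted = librosa.effects.pitch_shift(slowed, sr=sr, n_steps=1.0)

sf.write("speech_doctored.wav", adjusted, sr)
```

That two cheap, off-the-shelf operations suffice to produce a convincing "drunk" effect underlines how low the barrier to this kind of disinformation has become.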

AI as a Solution to Disinformation
Despite the ethical issues raised by AI as a tool for disinformation, recall that Taddeo, McCutcheon and Floridi (2019) describe AI as a double-edged sword; this implies that it can also be used to curb the problem. AI algorithms are deployed to detect bot accounts on social media platforms such as Twitter, helping to curb the spread of fake news. Furthermore, AI is applied in areas such as text analytics and image processing, both of which are effective in distinguishing real from fake information. For instance, AI-assisted analysis helped establish that Nancy Pelosi's slurred-speech video was false and misleading, and other cases of disinformation have been uncovered using AI-powered tools and algorithms. All of these point to the potential of AI for curbing disinformation and the spread of fake news.
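As a simplified illustration of how bot detection can work, the Python sketch below trains a small classifier on account-level features such as posting rate and follower ratio. All feature values and labels here are fabricated for demonstration; production systems rely on far richer behavioural, network, and content signals.

```python
# Toy sketch of bot-account detection. Feature values and labels are
# fabricated for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per account: [tweets_per_day, followers/following ratio,
# account_age_days]
X = np.array([
    [450.0, 0.01, 12],    # high-volume, few followers, new -> bot-like
    [300.0, 0.05, 30],    # bot-like
    [5.0,   1.20, 2400],  # human-like
    [12.0,  0.90, 1800],  # human-like
    [600.0, 0.02, 5],     # bot-like
    [8.0,   2.50, 3000],  # human-like
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new, unseen account.
new_account = np.array([[500.0, 0.03, 8]])
print("P(bot) =", clf.predict_proba(new_account)[0][1])
```

The same supervised-learning pattern, with features drawn from posting behaviour and network structure, underlies the bot-detection systems that platforms deploy at scale.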

Conclusion
Ethical issues in AI are gaining attention simply because the technology is permeating our day-to-day lives. Moreover, AI is seeing more practical implementations that directly affect people's lives, so individuals and governments are becoming increasingly concerned about its ethical implications. The lack of centralized regulation on the use of AI is one of the most significant obstacles to curbing its misuse.
Disinformation is becoming more prominent, and its impact cannot be overemphasized. A notable example is the 2016 U.S. presidential election, in which, according to U.S. congressional reports, there was interference by Russia and Cambridge Analytica. Both adopted technologies such as AI, bots, and social media to influence the election process and sway the opinion of voters to suit their agendas. If AI can be used for disinformation to disrupt one of the world's most established democracies, the damage it can wreak elsewhere cannot be accurately estimated.
References

Bontridder, N. and Poullet, Y., 2021. The role of artificial intelligence in disinformation. Data & Policy, 3.

Cassauwers, T., 2019. Can artificial intelligence help end fake news? [online] Horizon Magazine. Available at: <https://ec.europa.eu/research-and-innovation/en/horizon-magazine/can-artificial-intelligence-help-end-fake-news> [Accessed 27 December 2021].

Greene, C., Nash, R. and Murphy, G., 2021. Misremembering Brexit: partisan bias and
individual predictors of false memories for fake news stories among Brexit
voters. Memory, 29(5), pp.587-604.

Harwell, D., 2019. Faked Pelosi videos, slowed to make her appear drunk, spread across
social media. [online] Available at:
<https://www.washingtonpost.com/technology/2019/05/23/faked-pelosi-videos-slowed-
make-her-appear-drunk-spread-across-social-media/> [Accessed 27 December 2021].

Hunt, S., 2021. AI Ethics: Ethical Dilemmas of Artificial Intelligence | Datamation. [online]
Datamation. Available at: <https://www.datamation.com/artificial-intelligence/the-
ethics-of-artificial-intelligence-ai/> [Accessed 27 December 2021].

Nast, C., 2021. AI Can Write Disinformation Now—and Dupe Human Readers. [online]
Wired. Available at: <https://www.wired.com/story/ai-write-disinformation-dupe-
human-readers/> [Accessed 27 December 2021].

Taddeo, M., McCutcheon, T. and Floridi, L., 2019. Trusting Artificial Intelligence in Cybersecurity is a Double-edged Sword. SSRN Electronic Journal.

Walch, K., 2019. Ethical Concerns of AI. [online] Forbes. Available at: <https://www.forbes.com/sites/cognitiveworld/2020/12/29/ethical-concerns-of-ai/> [Accessed 27 December 2021].

Zulli, D., 2018. The Changing Norms of Gendered News Coverage: Hillary Clinton in the New York Times, 1969-2016. Politics & Gender, 15(3), pp.599-621.
