Bias, Journalistic Endeavours, and The Risks of Artificial Intelligence
1. INTRODUCTION
1 R Girasa, 'Artificial Intelligence as a Disruptive Technology' (Springer Science and Business Media LLC, 2020); see also R Sil, A Roy, B Bhushan, A K Mazumdar, 'Artificial Intelligence and Machine Learning based Legal Application: The State-of-the-Art and Future Research Trends' (2019) International Conference on Computing, Communication, and Intelligent Systems.
2 For a survey of AI use across European newsrooms, see A Fanta, 'Putting Europe's Robots on the Map: Automated Journalism in News Agencies' (2017) Reuters Institute Fellowship Paper, 9.
M. R. Leiser - 9781839109973
Downloaded from https://www.elgaronline.com/ at 02/26/2024 11:04:36PM
via communal account
3 'Can Artificial Intelligence Like IBM's Watson Do Investigative Journalism?', Fast Company, 12 November 2013, available at https://www.fastcompany.com/3021545/can-artificial-intelligence-like-ibms-watson-do-investigative-journalism, accessed 25 February 2021.
4 J Burrell, 'How the Machine "thinks": Understanding Opacity in Machine Learning Algorithms' (2016) 3 Big Data & Society http://bds.sagepub.com/lookup/doi/10.1177/2053951715622512 accessed 25 February 2021; M Butterworth, 'The ICO and Artificial Intelligence: The Role of Fairness in the GDPR Framework' (2018) 34 Computer Law & Security Review 257 https://www.sciencedirect.com/science/article/pii/S026736491830044X accessed 25 February 2021; A Datta, M C Tschantz and A Datta, 'Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination' (2015) Proceedings on Privacy Enhancing Technologies 92; N Diakopoulos, 'Algorithmic Accountability: On the Investigation of Black Boxes' (New York: Tow Center for Digital Journalism, Columbia University, 2014) http://towcenter.org/research/algorithmic-accountability-on-the-investigation-of-black-boxes-2/ accessed 25 February 2021; L Diver and B Schafer, 'Opening the Black Box: Petri Nets and Privacy by Design' (2017) 31 International Review of Law, Computers & Technology 68 https://doi.org/10.1080/13600869.2017.1275123 accessed 25 February 2021; F Doshi-Velez et al., 'Accountability of AI Under the Law: The Role of Explanation' [2017] arXiv:1711.01134 http://arxiv.org/abs/1711.01134 accessed 25 February 2021; T Gillespie, 'The Relevance of Algorithms' (2014) 167 Media Technologies: Essays on Communication, Materiality, and Society; G Kendall and G Wickham, Using Foucault's Methods (London; Thousand Oaks, Calif: Sage Publications, 1999); J A Kroll et al., 'Accountable Algorithms' (Rochester, NY: Social Science Research Network, 2016) SSRN Scholarly Paper ID 2765268 http://papers.ssrn.com/abstract=2765268 accessed 25 February 2021; F Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge: Harvard University Press, 2015); A D Selbst and J Powles, 'Meaningful Information and the Right to Explanation' (2017) 7 International Data Privacy Law 233; S Wachter, B Mittelstadt and L Floridi, 'Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation' (2017) 7 International Data Privacy Law 76.
5 See, e.g., M Broussard, N Diakopoulos, A L Guzman, R Abebe, M Dupagne, and C-H Chuan, 'Artificial Intelligence and Journalism' (2019) 96(3) Journalism & Mass Communication Quarterly 673–95; N Diakopoulos, Automating the News: How Algorithms are Rewriting the Media (Harvard University Press, 2019); A McStay, Emotional AI: The Rise of Empathic Media (Sage, 2018); M Hansen, M Roca-Sales, J M Keegan, and G King, Artificial Intelligence: Practice and Implications for Journalism (Academic Commons, Columbia, 2017); on a wide range of aspects of journalistic endeavours, see F Marconi, Newsmakers: Artificial Intelligence and the Future of Journalism (Columbia University Press, 2020).
software (and possibly also hardware) systems …, that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal.6
6 High-Level Expert Group on Artificial Intelligence, 'A Definition of AI: Main Capabilities and Scientific Disciplines', available at https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines, at 6, accessed 16 February 2021.
7 For an example of this, see 'A robot wrote this entire article. Are you scared yet, human?', The Guardian, 8 September 2020, available at https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3, accessed 16 February 2021.
8 For an introductory video of 'Dreamwriter', see https://v.qq.com/x/page/z071387ge88.html, accessed 16 February 2021.
9 For Prisma, see V Savov, 'Prisma will make you fall in love with photo filters all over again', The Verge, 19 July 2016, available at https://www.theverge.com/2016/7/19/12222112/prisma-art-photo-app accessed 25 February 2021. For ZAO, see Z Doffman, 'Chinese Deepfake App ZAO goes viral, privacy of millions "at risk"', Forbes, available at https://www.forbes.com/sites/zakdoffman/2019/09/02/chinese-best-ever-deepfake-app-zao-sparks-huge-faceapp-like-privacy-storm/, accessed 16 February 2021.
10 Deep learning is a subset of machine-learning. It uses deep neural networks, deep belief networks, recurrent neural networks and/or convolutional neural networks for machine-learning processes: it uses these architectures to model its predictive computational statistics. See G Ras, M Gerven, and W Haselager, 'Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges', arXiv:1803.07517 (2018).
11 M Burgess, 'Google's AI has written some amazingly mournful poetry', WIRED, 16 May 2016, available at https://www.wired.co.uk/article/google-artificial-intelligence-poetry, accessed 16 February 2021.
12 T Karras et al., 'Analyzing and Improving the Image Quality of StyleGAN', arXiv preprint arXiv:1912.04958 (2019), demonstrations available at https://thispersondoesnotexist.com accessed 25 February 2021. For Grover, see also rowanz, 'Code for Defending Against Neural Fake News', available at https://github.com/rowanz/grover, demonstrations available at https://thisarticledoesnotexist.com accessed 25 February 2021.
13 M R Leiser and F Dechesne, 'Governing Machine-learning Models: Challenging the Personal Data Presumption' (2020) 10(3) International Data Privacy Law 187–200.
14 'Black box(es)' is a semi-colloquial term for opaque machine-learning models, which are traditionally, though need not be, based on deep learning; see F Pasquale, The Black Box Society (Harvard University Press, 2015).
15 Department of Culture, Media and Sport, 'Disinformation and "Fake News": Final Report', Eighth Report of Session 2017–19, 18 February 2019, at para 48, available at https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/179105.htm#_idTextAnchor005, accessed 17 February 2021. See also S Wachter and B Mittelstadt, 'A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI' (2019) Colum. Bus. L. Rev. 494; L Edwards and M Veale, 'Slave to the Algorithm: Why a Right to an Explanation is Probably not the Remedy you are Looking for' (2017) 16 Duke L. & Tech. Rev. 18; and M E Kaminski, 'The Right to Explanation, Explained' (2019) 34 Berkeley Tech. LJ 18.
Data Protection Regulators have also issued guidelines about the obligation to provide meaningful information about the logic involved in automated decisions.16 Amid concerns that machines are dehumanizing decision-making, both profiling and general automated decision-making about humans can only take place when robust legal protections are in place, the principles of data protection are adhered to, and data-subject rights can be upheld.17 These issues manifest themselves in the general belief that AI challenges the set of legal guarantees put in place in Europe to combat discrimination and ensure equal treatment.18
These forms of machine-learning models also play an important role in modern data-driven journalism. AI systems trained for the purpose of news creation can search for independent input with zero or limited human intervention. These systems can also operate without processing any personal data19 – the trigger that activates the European Union's data protection regime.20 An AI system that analyses crime data for hotspots, for example, would not fall under the remit of the GDPR unless the data subject is identifiable.21 Understandably, much of the work in this area has focused on historical biases that are embedded in the very training data that machine-learning systems are built on.22 For example, 'predictive policing' is sold to financially challenged law enforcement agencies (LEAs) as a 'neutral' method to counteract unconscious biases, yet it increasingly deploys data-mining techniques to predict, prevent, and investigate crime.23 However, research indicates that predictive policing can adversely impact minority and vulnerable communities. For example, using historical data to assist in deployment can lead to more arrests for nuisance crimes in neighbourhoods primarily populated by people of colour. Algorithms employed to help determine criminal sentences in the USA have inadvertently discriminated against African Americans.24 Nor is historical bias the only risk: poor data integrity also produces discriminatory effects. Dörr and Hollnbuchner posit that missing items can lead to bias in content generation.25 These effects are an artefact of the specific technology and will take place regardless of any measures implemented to mitigate the machine's bias.26
16 Information Commissioner's Office and the Alan Turing Institute, 'Explaining Decisions Made with AI', available at https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-artificial-intelligence-1-0.pdf, accessed 17 February 2021; see also Article 29 Working Party Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, available at https://ec.europa.eu/newsroom/article29/document.cfm?action=display&doc_id=49826, accessed 17 February 2021.
17 General Data Protection Regulation, Arts 13–21.
18 Algorithmic discrimination in Europe: challenges and opportunities for gender equality and non-discrimination law, available at https://op.europa.eu/en/publication-detail/-/publication/082f1dbc-821d-11eb-9ac9-01aa75ed71a1, accessed 11 June 2021.
19 GDPR, Art 4(1).
20 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance); Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA; Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications).
21 This is not without considerable controversy. Some have argued that all information could theoretically relate to an individual – see N Purtova, 'The Law of Everything. Broad Concept of Personal Data and Future of EU Data Protection Law' (2018) 10(1) Law, Innovation and Technology 40–81. However, the very broad concept of personal data could make the entire data protection regime unmanageable – see B J Koops, 'The Trouble with European Data Protection Law' (2014) 4(4) International Data Privacy Law 250–261.
22 N Mehrabi, F Morstatter, N Saxena, K Lerman and A Galstyan, 'A Survey on Bias and Fairness in Machine Learning' (2019) arXiv preprint arXiv:1908.09635.
23 The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm was used to predict the risk ratings of offenders on a scale from 1–10, with 10 being the highest risk. If the algorithm predicted a lower score, this helped judges decide whether offenders could go on parole or probation: see F Zuiderveen Borgesius, 'Discrimination, Artificial Intelligence, and Algorithmic Decision-making' (Strasbourg: Council of Europe, Directorate General of Democracy, 2018), at 15, and J Angwin et al., 'Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And it's Biased Against Blacks', ProPublica, 23 May 2016, available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed 23 February 2021.
24 Angwin et al., ibid.
25 K N Dörr and K Hollnbuchner, 'Ethical Challenges of Algorithmic Journalism' (2017) 5(4) Digital Journalism 404–419, p 9.
26 A D Selbst, 'Disparate Impact in Big Data Policing' (February 25, 2017) 52 Georgia Law Review 109.
3. AI IN THE NEWSROOM
Early examples of the use of NLG technology to automate journalism are mostly
confined to relatively short texts in limited domains but are nonetheless impressive
in terms of both quality and quantity. The text produced is generally indistin-
guishable from a text written by human writers and the number of text documents
generated substantially exceeds what is possible from manual editorial processes.31
27 Washington Post PR Blog, 'The Washington Post experiments with automated storytelling to help power 2016 Rio Olympics coverage', available at https://www.washingtonpost.com/pr/wp/2016/08/05/the-washington-post-experiments-with-automated-storytelling-to-help-power-2016-rio-olympics-coverage/, accessed 23 March 2021.
28 N Martin, 'Did a Robot Write This? How AI Is Impacting Journalism', Forbes (Feb 8, 2019), available at https://www.forbes.com/sites/nicolemartin1/2019/02/08/did-a-robot-write-this-how-ai-is-impacting-journalism/?sh=563c1e779575, accessed 23 March 2021; see J Keohane, 'What news-writing bots mean for the future of journalism', WIRED, February 2017, https://www.wired.com/2017/02/robots-wrote-this-story/ accessed 25 February 2021.
29 L Moses, 'The Washington Post's robot reporter has published 850 articles in the past year', DigiDay (17 Sept 2017), available at https://digiday.com/media/washington-posts-robot-reporter-published-500-articles-last-year/, accessed 23 March 2021.
30 A Graefe, Guide to Automated Journalism (Columbia University Academic Commons, 2016).
31 D Caswell and K Dörr, 'Automated Journalism 2.0: Event-driven Narratives' (2017) Journalism Practice 2.
Of course, crime is a favourite subject of the news media, with crime stories
estimated to make up between 12.5 and 40 per cent of local news.32 Chermak’s
analysis of six print and three broadcast media organizations revealed that
‘print media present nine crime stories a day, on average, and electronic
media four crime stories per day’.33 More recently, Curiel et al.’s social media
analysis revealed that an astounding 15 out of every 1,000 tweets were about
crime or fear of crime.34 Social media suffers from a strong bias towards violent or sexual crimes, and little correlation exists between social media messages and actual crime. Social media is therefore not useful for detecting trends in crime, but it does offer insight into levels of fear about crime.
Given the cost effectiveness and processing capacity of modern computing, machine-learning is being rapidly deployed across newsrooms.35 Yet AI systems used in newsrooms are often trained on crime reports. In AI discourse, and especially in machine-learning, 'models' are arrived at in stages. The first step is inputting (relevant) data into the machine; the data inputted largely depends on what the machine will ultimately be used for. Secondly, the machine identifies the relevant patterns, differences, and, especially, similarities in the data inputted to it. The third stage is model creation: based on steps 1 and 2, the machine develops a model that can be applied to a task whenever data similar to that of step 1 is inputted. Machine-learning's predictive analytics is used to analyse historical and 'real-time' data to make predictive decisions, not only in news reporting but also in fact-checking the authenticity of an otherwise unverifiable news story. As discussed in the previous section, the accuracy of these predictive decisions increases with the amount of data processed, including the training data upon which the AI system is modelled.
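The three stages described above can be sketched in miniature. The following is a purely illustrative toy classifier, not any newsroom's actual system; all report snippets, labels, and the word-overlap scoring rule are invented for the example:

```python
from collections import Counter

# Step 1: input (relevant) data -- a handful of invented report snippets,
# each labelled according to the task the model will ultimately perform.
reports = [
    ("burglary reported on main street overnight", "crime"),
    ("city council approves new park budget", "other"),
    ("armed robbery suspect arrested downtown", "crime"),
    ("local school wins regional science fair", "other"),
]

# Step 2: the machine identifies patterns -- here, simply which words
# co-occur with which label in the inputted data.
word_counts = {"crime": Counter(), "other": Counter()}
for text, label in reports:
    word_counts[label].update(text.split())

# Step 3: model creation -- the 'model' is the learned counts, applied
# whenever data similar to step 1 is inputted.
def classify(text):
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("robbery reported downtown"))  # -> crime
```

A real newsroom pipeline would use far richer features and models, but the dependence is the same: whatever biases or gaps exist in the step 1 data are baked directly into the step 3 model.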
AI is already used to facilitate automated reporting of murders and other
forms of violent crime. For example, an AI system retrieves homicide data
directly from a coroner’s office, which in turn generates leads for reporters
32 J Grosholz and C Kubrin, 'Crime in the News: How Crimes, Offenders and Victims are Portrayed in the Media' (2007) 14 Journal of Criminal Justice and Popular Culture 59–83.
33 S Chermak, 'Crime in the News Media: A Refined Understanding of How Crimes Become News' in G Barak (ed), Media, Process, and the Social Construction of Crime: Studies in Newsmaking Criminology (New York: Garland Publishing, 1994), 95–129.
34 R P Curiel, S Cresci, C I Muntean, and S Bishop, 'Crime and its Fear in Social Media' (2020) Palgrave Commun 6, 57 https://doi.org/10.1057/s41599-020-0430-7 accessed 25 February 2021.
35 Automated Journalism – AI Applications at New York Times, Reuters, and Other Media Giants, available at https://emerj.com/ai-sector-overviews/automated-journalism-applications/, accessed 23 February 2021.
to expand with details about the victim’s life and family.36 AI could be used
to match details about the deceased’s life with details from social media and
public registries. Incredibly, this has been touted as an example of automated journalism operating without bias,37 ignoring the numerous ways the data could contain errors, the bias in the decision-making of the journalist who chose what to report from the available data, and the fact that any errors in reporting could appear in the training data of other AI systems designed to search for patterns – for example, crime trends.
Any machine-learning system is trained on a data set for the purpose of creating, on its own and with zero or limited human intervention, new outputs that correlate with its inputs. Automated journalism operates either by independently writing and publishing news articles without input from a journalist or by 'cooperating' with a journalist, who can be deputized to supervise the process or provide input to improve the article. All methods depend upon access to, and the availability of, the structured data needed to generate news articles. Thus, any simple error in the coroner's reporting could theoretically infect the entire cycle of news reporting, undermining the integrity of the system. Worryingly, the use of predictive policing by LEAs could facilitate further unconscious bias in news reporting, which in turn would affect real-world policing, which in turn would affect the outcomes of predictive policing. The general advantages of this method are the speed with which data can be collected and articles written, fewer errors in output, and cost savings. Yet the quality of automated journalism depends on the training data. Not only is perfect training data never possible, but human error, prejudice, and misjudgement can also enter the journalistic lifecycle at multiple points. Consequently, biases can be introduced at any point in the news delivery process, from the preliminary stages of data extraction, collection, and pre-processing to the critical phases of news formulation, model building, and reporting (Figure 1.1).
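The dependence on structured data can be illustrated with a minimal, hypothetical template-based generator of the kind used in automated reporting. All field names and values below are invented; the point is that any error in the source record propagates straight into the generated copy, and a missing field halts generation entirely:

```python
# Hypothetical sketch: coroner-style structured fields (invented here)
# slotted into a fixed narrative template.
TEMPLATE = ("Police are investigating the death of {name}, {age}, "
            "found in {location} on {date}.")

REQUIRED = {"name", "age", "location", "date"}

def generate_lead(record):
    # An automated pipeline depends entirely on these fields being
    # present; refuse to 'publish' when the record is incomplete.
    missing = REQUIRED - record.keys()
    if missing:
        raise ValueError(f"cannot generate story; missing fields: {sorted(missing)}")
    return TEMPLATE.format(**record)

lead = generate_lead({"name": "J. Doe", "age": 34,
                      "location": "Riverside Park", "date": "3 May"})
print(lead)
# -> Police are investigating the death of J. Doe, 34, found in
#    Riverside Park on 3 May.
```

Note that nothing in this sketch can detect a *wrong* value, only an absent one: if the source record misstates the location, the article misstates it too, and any downstream system trained on such articles inherits the error.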
4. AI IN FACT-CHECKING
There are growing efforts by journalists, policy makers, and technology companies towards finding effective, scalable responses to online disinformation and false information. Whether by design or coincidence, false online content
36 N Lemelshtrich Latar, 'The Robot Journalist in the Age of Social Physics: The End of Human Journalism?' in G Einav (ed), The New World of Transitioned Media (New York, 2015), 74.
37 M Monti, 'Automated Journalism and Freedom of Information: Ethical and Juridical Problems Related to AI in the Press Field' (2019) Opinio Juris in Comparatione, 1.
Figure 1.1 Therein lies the challenge in preventing bias in the newsroom
38 J Paschen, 'Investigating the Emotional Appeal of Fake News Using Artificial Intelligence and Human Contributions' (2019) 29 Journal of Product & Brand Management 223–233.
39 Sky News, 'Coronavirus: 90 attacks on phone masts reported during UK's lockdown', available at https://news.sky.com/story/coronavirus-90-attacks-on-phone-masts-reported-during-uks-lockdown-11994401, accessed 16 February 2021.
40 'The QAnon conspiracy theory and a stew of misinformation fuelled the insurrection at the Capitol', available at https://www.insider.com/capitol-riots-qanon-protest-conspiracy-theory-washington-dc-protests-2021-1, accessed 17 February 2021.
41 X Zhou and R Zafarani, 'A Survey of Fake News: Fundamental Theories, Detection Methods, and Opportunities' (2020) 53(5) ACM Computing Surveys (CSUR) 1–40.
42 Available at https://medium.com/@edloginova/attention-in-nlp-734c6fa9d983, accessed 23 February 2021.
43 Facebook said it 'disabled' 1.2 billion fake accounts in the last three months of 2018 and 2.19 billion in the first quarter of 2019; see 'Fake Facebook Accounts: The Never-ending Battle against Bots', Phys Org, available at https://phys.org/news/2019-05-fake-facebook-accounts-never-ending-bots.html, accessed 23 February 2021.
44 R Lever, 'Fake Facebook accounts: the never-ending battle against bots', Phys Org (24 May 2019), available at https://phys.org/news/2019-05-fake-facebook-accounts-never-ending-bots.html#:~:text=Facebook%20says%20its%20artificial%20intelligence,before%20they%20can%20post%20misinformation, accessed 23 March 2021.
45 'Facebook is using AI to Remove Fake News', available at https://www.clickatell.com/articles/digital-marketing/facebook-using-ai-remove-fake-news/, accessed 23 February 2021.
46 https://www.brookings.edu/research/how-to-deal-with-ai-enabled-disinformation/, accessed 16 February 2021.
47 P Bernal, 'Facebook: Why Facebook Makes the Fake News Problem Inevitable' (2018) 69 N Ir Legal Q 513.
content is debatable: the user would never appreciate the corrective effect, or
the social shaming associated with the marketplace of ideas.
The legitimacy of collaborative AI to counter disinformation will largely
depend on the perceived impartiality and credibility of participating news
organizations.
We have not yet realized the true potential of artificial intelligence in combating fake news. The future needs more sophisticated tools that can harness the power of AI, big data, and machine learning to stop fake news making ripples in the user world. Technically speaking, the main problems associated with automated journalism, in terms of narrative and critical considerations, concern its low quality. Yet the effects of AI and AI systems will not only refashion human relationships but also redistribute labour and creativity. Accordingly, examination is required of these transformative effects on journalism and news production.
48 J Black and A D Murray, 'Regulating AI and Machine Learning: Setting the Regulatory Agenda' (2019) 10(3) European Journal of Law and Technology.
49 Available at https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/, accessed 12 March 2021.
50 AI HLEG, Ethics Guidelines for Trustworthy AI, available at https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai, accessed 27 March 2021.
51 Ibid., at 5.
52 R Roscher, B Bohn, M F Duarte, J Garcke et al., 'Explainable Machine Learning for Scientific Insights and Discoveries', doi: 10.1109/access.2020.2976199, accessed 27 March 2021; but compare Y Bathaee, 'The Artificial Intelligence Black Box and the Failure of Intent and Causation' (Spring 2018) 31(2) Harvard Journal of Law & Technology 906–919, 929.
53 Recital 58 GDPR.
54 R Roscher et al. (n 52), quoting G Montavon, W Samek and K R Müller, 'Methods for Interpreting and Understanding Deep Neural Networks' (2018) 73 Digital Signal Processing 1–15; see also Leiser and Dechesne (n 13).
55 Montavon et al., ibid.
56 Arts 13(1)(f), 14(1)(g) and 15(1)(h) GDPR.
57 Similar to that of the 'average consumer' who is reasonably well informed, and reasonably observant and circumspect; see CJEU in Severi, C-446/07, ECLI:EU:C:2009:530, para 61 and the case-law cited.
58 See https://www.twobirds.com/~/media/pdfs/gdpr-pdfs/bird--bird--guide-to-the-general-data-protection-regulation.pdf?la=en at 23; https://ico.org.uk/media/for-organisations/guide-to-the-general-data-protection-regulation-gdpr-1-0.pdf at 103, both accessed 15 October 2021.
59 Rec 1 GDPR; see also Art 21(1) EU Charter: 'Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.'
60 Rec 71 GDPR.
61 Unfair Commercial Practices Directive 2005/29/EC, Art 5(3).
62 https://edps.europa.eu/data-protection/our-work/ethics_en, accessed 23 March 2021.
63 https://fra.europa.eu/sites/default/files/fra_uploads/fra-2020-artificial-intelligence_en.pdf, accessed 23 March 2021.
64 https://rm.coe.int/prems-107320-gbr-2018-compli-cahai-couv-texte-a4-bat-web/1680a0c17a, accessed 23 March 2021.
65 https://www.pr.com/press-release/735528, accessed 25 February 2021.
66 European Commission, Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics (2020) https://ec.europa.eu/info/publications/commission-report-safety-and-liability-implications-ai-internet-things-and-robotics-0_en; European Commission, White Paper on Artificial Intelligence – A European approach to excellence and trust (2020) https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf; European Commission, Liability for Artificial Intelligence and other emerging digital technologies (2019) https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1/language-en; European Parliament resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics (2019) https://www.europarl.europa.eu/doceo/document/TA-8-2019-0081_EN.html; AI HLEG, Ethics Guidelines for Trustworthy AI (2019) https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai; AI HLEG, A definition of AI: Main capabilities and scientific disciplines (2019) https://digital-strategy.ec.europa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines; AI HLEG, Policy and Investment recommendations for Trustworthy AI (2019) https://futurium.ec.europa.eu/en/european-ai-alliance/open-library/policy-and-investment-recommendations-trustworthy-artificial-intelligence; Council of Europe, Guidelines on Artificial Intelligence and Data Protection (2019) https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8; Council of Europe, Guidelines on the protection of individuals with regard to the processing of personal data in a world of big data (2019) https://rm.coe.int/t-pd-2017-1-%20bigdataguidelines-en/16806f06d0; Council of Europe, Report on Artificial Intelligence: Artificial Intelligence and Data Protection: Challenges and Possible Remedies (2018) https://rm.coe.int/report-on-artificial-%20intelligence-artificial-intelligence-and-data-pro/16808e6012; EDPB Guidelines 3/2019 on processing of personal data through video devices (2020) https://edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-32019-processing-personal-data-through-video_en; EDPS Opinion 3/2018 on online manipulation and personal data (2018) https://edps.europa.eu/sites/edp/files/publication/18-03-19_online_manipulation_en.pdf; ICO, Guidance on the AI auditing framework: Draft guidance for consultation (2020) https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-consultation-on-the-draft-ai-auditing-framework-guidance-for-organisations/; EU Science Hub, 'Artificial Intelligence: A European Perspective', https://publications.jrc.ec.europa.eu/repository/handle/JRC113826, all accessed 16 February 2021.
67 https://ec.europa.eu/jrc/communities/en/node/1286/document/eu-declaration-cooperation-artificial-intelligence, accessed 16 February 2021.
68 European Commission, 'Communication from the Commission: Artificial Intelligence for Europe', Brussels, 25.4.2018, COM (2018) 237 final {SWD (2018) 137 final}.
69 S Lewandowsky, L Smillie, D Garcia, R Hertwig, J Weatherall, S Egidy and M Leiser, 'Technology and Democracy: Understanding the Influence of Online Technologies on Political Behaviour and Decision-making' (Publications Office of the European Union, 2020), available at https://publications.jrc.ec.europa.eu/repository/handle/JRC122023, accessed 11 June 2021.
70 Ibid., 45.
71 Proposal for a Regulation laying down harmonised rules on artificial intelligence, https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence, accessed 05 June 2021; Communication on Fostering a European approach to Artificial Intelligence, https://digital-strategy.ec.europa.eu/en/library/communication-fostering-european-approach-artificial-intelligence, accessed 05 June 2021.
72 Art 69: no mandatory obligations, but possible voluntary codes of conduct for AI with specific transparency requirements.
73 Art 52: notify humans that they are interacting with an AI system unless this is evident; notify humans that emotion recognition or biometric categorization systems are applied to them.
74 Title III, Ch 2.
75 Title III, Annexes II and III; CE marking and Process (Title III, Ch 4, Art 49).
76 Title II, Art 5.
the quantitative data available. This undermines and devalues the complex contextualization that human reasoning necessarily applies. With trust in legacy media a rather fluid dynamic, public ambivalence about the credibility of social media as a replacement for traditional news outlets, and both legal and political fallout from the use of automated decision-making, there is a general attitude among Europeans that the use of AI should be transparent and discernible. A recent Eurobarometer study focusing on AI found that 80 per cent of a representative sample of the EU population think that they should be informed when a digital service or mobile application uses AI.77 A representative survey of the German public probed attitudes towards the use of online AI and of machine learning to exploit personal data for the personalization of services.78 Attitudes towards personalization were found to be domain-dependent: most people find the personalization of political advertising and news sources unacceptable. The degree of moral outrage elicited by reports of immoral acts online has also been found to be considerably greater than for encounters in person or in conventional media.79
The divergence between a designer's intention and the actual behaviour of an AI system creates a 'responsibility gap' that is difficult to bridge with traditional notions of responsibility80 and is the subject of ongoing debate (e.g., the EU's recent statement on artificial intelligence by the Group on Ethics in Science and New Technologies).81 Traditionally, responsibility for any journalistic error would attach to the newsroom through a variety of ethical obligations, regulatory frameworks and, most importantly, tort (defamation/libel) law. However, autonomous learning machines are fed data sources, learn without supervision, and produce outputs that cannot be predicted. In a normative sense, responsibility means being able to explain actions that you were able to control: someone is responsible to the extent that they know the circumstances and facts surrounding the decisions they take. Responsibility is thus anchored in a principle of 'control'. AI systems and
77 EU Barometer Public Opinion, available at https://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/Survey/getSurveyDetail/%20instruments/STANDARD/surveyKy/2255, accessed 16 February 2021.
78 A Kozyreva, S Herzog, P Lorenz-Spreen, R Hertwig and S Lewandowsky, 'Artificial Intelligence in Online Environments: Representative Survey of Public Attitudes in Germany' (2020).
79 M J Crockett, 'Moral Outrage in the Digital Age' (2017) 1(11) Nature Human Behaviour 769–771.
80 Lewandowsky et al. (n 69), 45.
81 Ibid., referring to the EU's recent statement on artificial intelligence by the Group on Ethics in Science and New Technologies, available at https://op.europa.eu/en/publication-detail/-/publication/dfebe62e-4ce9-11e8-%09be1d-%2001aa75ed71a1, accessed 23 March 2021.
machine-learning models turn that principle on its head. There are already AI systems in newsrooms that can decide on a course of action and act without human intervention. The rules on which they act are not fixed, but are changed during the operation of the AI system, by the system itself. The machine learns and produces a series of actions for which traditional ways of attributing responsibility are incompatible with the control principle: no one has enough control over the machine's actions to be able to assume responsibility for them. This is the 'responsibility gap'.
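The point that the operative 'rules' are produced by the data rather than fixed by the designer can be sketched with a toy online learner. Everything here (the perceptron-style model, the data stream, the learning rate) is an invented illustration, not a description of any deployed newsroom system:

```python
# Minimal sketch of an online-learning system whose decision rule drifts:
# the weights -- the operative "rules" -- are rewritten by the data stream
# itself, not fixed in advance by the designer.

def predict(weights, features):
    """Current decision rule: sign of the weighted sum."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score >= 0 else 0

def update(weights, features, label, lr=0.5):
    """Perceptron-style online update: the rule changes after every example."""
    error = label - predict(weights, features)
    return [w + lr * error * x for w, x in zip(weights, features)]

weights = [0.0, 0.0]  # the designer ships a blank rule
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 0)]

before = predict(weights, [1.0, 1.0])
for features, label in stream:
    weights = update(weights, features, label)
after = predict(weights, [1.0, 1.0])

# The same input is classified differently before and after deployment:
# the rule in force was produced by the data, not by the designer.
print(before, after)  # 1 0
```

The designer can answer for the update procedure, but not for any particular classification the drifted weights eventually produce, which is the control problem in miniature.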
7. EXPLAINABILITY
The second concept, explainability, refers to the obligation to make the internal logic of AI systems discernible to human beings.82 This does not require disclosing the inner workings of that logic, nor does it equate to algorithmic transparency.83 The ethos behind explainability lies in distinguishing which input produced which undesired effect, in order to allocate responsibility for that effect justly. Because it may be asked to explain, any media or news organization using AI should be prepared to account for the system's workings and rationale, and should understand the reasons behind its output. When personal data is processed inside an AI system, or appears in the training set, the legal requirements for explainability derive from a variety of hard and soft law measures.
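For a simple linear scoring model, the input-level account just described can be produced without publishing the model itself. The following sketch uses invented feature names and weights purely for illustration; real newsroom systems are rarely this simple:

```python
# Hypothetical linear scorer: an "explanation" here is the per-input
# contribution to one output, enough to identify which input drove an
# effect without disclosing anything beyond the factors and their
# influence on this single decision.

weights = {"prior_arrests": 0.8, "neighbourhood_score": 0.5, "age": -0.01}

def explain(record):
    """Return the total score and each input's contribution, largest first."""
    contributions = {k: weights[k] * v for k, v in record.items()}
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return sum(contributions.values()), ranked

score, ranked = explain({"prior_arrests": 2, "neighbourhood_score": 3, "age": 40})
top_factor = ranked[0][0]  # the input with the greatest influence
print(top_factor)  # prior_arrests
```

For non-linear models this kind of attribution requires approximation techniques, which is one reason explainability is weaker than, and distinct from, full algorithmic transparency.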
Under Article 5(1)(a) GDPR,84 a media organization that uses an AI system must ensure that any personal data is processed in a 'lawful', 'fair' and 'transparent' way. The latter requires that disclosed information be discernible by the data subject whose data is subjected to processing.85 Analysis of personal information by AI systems could amount to both (a) 'processing of personal data' and (b) 'profiling'.86 Although these provisions keep data subjects abreast of the 'generalities' of data processing, Articles 13 and 14 provide the legal basis for data subjects to be provided with 'meaningful information about the logic involved' where relevant.87 The subject of an AI-generated news report could exercise their rights against the media organization if acting
82 N Gill, P Hall and N Schmidt, 'Proposed Guidelines for the Responsible Use of Explainable Machine Learning' (2020).
83 For a detailed account of why this is not feasible, see Leiser and Dechesne (n 13).
84 Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) 2016.
85 The 'Transparency Principle'; see also GDPR, Rec 64.
86 GDPR, Art 4(2) and (4).
87 GDPR, Rec 39, 58 and 60.
88 The extent of the 'right to an explanation' is hotly contested. For various takes on the extent of the right, see Edwards and Veale (n 15), 16, 18; S Wachter, B Mittelstadt and C Russell, 'Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR' (2018) 31(2) Harvard Journal of Law & Technology 841–887; F Doshi-Velez, M Kortz, R Budish, C Bavitz, S Gershman, D O'Brien et al., 'Accountability of AI Under the Law: The Role of Explanation' (2019) Working Draft 1–21; A Selbst and J Powles, 'Meaningful Information and the Right to Explanation' (2017) 7(4) International Data Privacy Law 233–243; Wachter, Mittelstadt and Floridi (n 4) 76–99.
89 The Information Commissioner's Office (ICO), Guidance on the AI Auditing Framework: Draft Guidance for Consultation (2020); The ICO, Explaining Decisions Made with AI: Draft Guidance for Consultation, Parts 1, 2 and 3 (2019).
90 Ibid.
91 The Information Commissioner's Office, Explaining Decisions Made with AI: Draft Guidance for Consultation, Part 2 (2019), 4.
92 H Kissinger, 'How the Enlightenment Ends: Philosophically, Intellectually—in Every Way—Human Society Is Unprepared for the Rise of Artificial Intelligence' (2018) The Atlantic, available at https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/, accessed 26 March 2021.
93 M Brkan, 'Do Algorithms Rule the World? Algorithmic Decision-making and Data Protection in the Framework of the GDPR and Beyond' (2019) 27(2) International Journal of Law and Information Technology 91–121.
94 H Bloch-Wehba, 'Access to Algorithms' (2019) 88 Fordham L. Rev. 1265.
95 H Felzmann, E Fosch-Villaronga, C Lutz and A Tamò-Larrieux, 'Towards Transparency by Design for Artificial Intelligence' (2020) Science and Engineering Ethics 1–29.
96 A Koene, R Richardson, Y Hatada, H Webb, M Petel, D Reisman, C Machado, J L Violette and C Clifton, 'A Governance Framework for Algorithmic Accountability and Transparency' (2018) EPRS/2018/STOA/SER/18/002.
If a DPIA discovers that such a risk exists and cannot be mitigated, the AI operator is obliged to consult the data protection regulator.98 The most severe regulatory action against an operator who cannot demonstrate the ability to comply is a 'temporary or definitive limitation including a ban on processing',99 though this is subject to certain time limits.100 The EU Commission has proposed introducing a requirement to undertake an overall AI impact assessment, to ensure that any regulatory intervention is proportionate and to distinguish 'high risk' applications from the remainder. Two cumulative criteria clarify when AI should be classified as high risk: (1) it is deployed in a sector where significant risks can be expected to occur; and (2) it is used in that sector in a manner likely to give rise to significant risks.101 In addition to this general category, some types of activity are considered as always bearing high risk, e.g., applications for recruitment and other situations that can have an impact on the rights of workers, and use for the purposes of remote biometric identification.102
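The interplay between the two cumulative criteria and the always-high-risk categories can be sketched as a simple decision rule. The sector and category lists below are abbreviated stand-ins for illustration only, not the proposal's actual Annexes:

```python
# Illustrative-only sketch of the proposed classification logic:
# an application is "high risk" when it falls in a high-risk sector
# AND is used in a manner likely to create significant risk, unless
# it belongs to a category treated as high risk in every case.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy"}          # stand-in list
ALWAYS_HIGH_RISK_USES = {"recruitment", "remote_biometric_identification"}

def is_high_risk(sector, use, significant_risk_likely):
    if use in ALWAYS_HIGH_RISK_USES:  # exceptional categories: always high risk
        return True
    # otherwise both cumulative criteria must hold
    return sector in HIGH_RISK_SECTORS and significant_risk_likely

print(is_high_risk("media", "recruitment", False))         # True
print(is_high_risk("healthcare", "triage", True))          # True
print(is_high_risk("media", "headline_generation", True))  # False
```

On this logic, most newsroom applications of AI would fall outside the high-risk category unless they touch an always-high-risk use such as recruitment.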
9. CONCLUSION
The world is abuzz with the prospects and promises of AI and its ability to process large datasets accurately in order to derive predictive outcomes. Four factors have ensured the success AI enjoys today in almost every sector: 'exponential increased computer processor capabilities; emergence of global digital networks; advances in distributed computing (hardware and software); and especially the emergence of Big Data'.103 The indescribable
amounts of personal data available for fuelling AI, increased processing power, and access to cheaper and greater storage capacity have ensured advances in the creation of ML models.104 AI holds great promise and utility for news media and fact-checkers. However, as is widely acknowledged, many machine-learning applications function as 'black boxes' that have been built using vast amounts of 'historical data'.105 Predictive profiling offers a distinctive approach to threat mitigation: it begins from the point of view of the aggressor or adversary and is based on an actual adversary's methods of operation, their modus operandi. This method is applicable to securing virtually any environment and to meeting any set of security requirements. The post-crime orientation of criminal justice is increasingly overshadowed by the pre-crime logic of security: frameworks for preventing crime are concerned less with gathering evidence, prosecution, conviction and subsequent punishment than with targeting and managing, through disruption, restriction and incapacitation, those individuals and groups considered to be a risk. Using unexplainable, unaccountable, irresponsible AI will end that system of checks and balances. Worryingly, few insights can be derived about the internal logic of AI systems.106 The inability to understand the logic behind machine learning is grave, not only from a journalistic-integrity perspective (given the necessity of warranting algorithmic transparency) but also from a broader societal perspective.107
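How 'historical data' bakes past bias into future outputs can be shown with a deliberately crude frequency model. The dataset and group names are entirely synthetic, invented for this sketch:

```python
# Toy illustration of bias propagation: a frequency-based model trained
# on skewed historical labels simply reproduces that skew when applied
# to new subjects, regardless of anything about the individual.

from collections import Counter

# Hypothetical historical records: (neighbourhood, labelled_suspect)
history = ([("north", True)] * 80 + [("north", False)] * 20
           + [("south", True)] * 20 + [("south", False)] * 80)

def train(rows):
    """The 'model' is just each group's positive-label rate in the past data."""
    totals, positives = Counter(), Counter()
    for group, label in rows:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

model = train(history)

# A new resident of 'north' inherits the historical 80% suspicion rate.
print(model["north"], model["south"])  # 0.8 0.2
```

Real machine-learning models are far more elaborate, but the mechanism is the same: whatever skew the training labels contain becomes the model's view of the world.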
The consequences of using biased training data in journalistic endeavours that rely on machine learning are a prime example. The subjects of a news article could face the social stigmatization of being labelled a 'suspect' or even a 'criminal'.108 Under the present system of checks and balances, the burden of
104 '[I]t is data, in many cases personal data, that fuels these systems, enabling them to learn and become intelligent' (see https://iapp.org/media/pdf/resource_center/Datatilsynet_AI%20and%20Privacy_Report.pdf, at 5, accessed 15 October 2021).
105 Wired, Machine Learning and Cognitive Systems: The Next Evolution of Enterprise Intelligence (Part I) (2020), available at https://www.wired.com/insights/2014/07/machine-learning-cognitive-systems-next-evolution-enterprise-intelligence-part/, accessed 16 February 2021; Wired, Location Intelligence Gives Businesses a Leg Up Thanks to Real-Time AI (2020), available at https://www.wired.com/wiredinsider/2019/06/location-intelligence-gives-businesses-leg-thanks-real-time-ai/, accessed 16 February 2021.
106 J Brownlee, 'What is Deep Learning?' (2019), available at https://machinelearningmastery.com/what-is-deep-learning/, accessed 16 February 2021; see also N Gill, P Hall and N Schmidt, Proposed Guidelines for the Responsible Use of Explainable Machine Learning (2020).
107 G Ras, M Gerven and W Haselager, 'Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges' ArXiv 1803.07517 (2018).
108 G Sinha, 'To Suspect or Not to Suspect: Analysing the Pressure on Banks to be "Policemen"' (2014) 15(1) Journal of Banking Regulation 75–86; SAS Institute, What is Next-generation AML? The Fight against Financial Crime Fortified with Robotics,
proof is on the person discriminated against to show that the harm was the result of (a) bad data, (b) an algorithm, and/or (c) an automated decision. This would require the person subjected to a decision to have access to the model used by the newsroom, the training data, and the raw data from, for example, the coroner's office. Automated journalism and the AI systems used in newsrooms therefore constitute an unchecked power with indeterminable consequences for society: one that can lead to further discrimination, catastrophic economic and social losses, loss of reputation and, in some cases, infringement of civil liberties. It is important that everyone affiliated with media production and consumption, including readers, has some understanding of what artificial intelligence actually is and how it operates. This fundamental understanding will not only shape how we use AI but enable us to use it in a way that actually serves society, rather than merely the technology.