
International Journal of Law and Information Technology, 2019, 0, 1–31

doi: 10.1093/ijlit/eay017
Article



Do algorithms rule the world? Algorithmic
decision-making and data protection in the
framework of the GDPR and beyond
Maja Brkan *

ABSTRACT
The purpose of this article is to analyse the rules of the General Data Protection
Regulation (GDPR) and the Directive on Data Protection in Criminal Matters on
automated decision-making and to explore how to ensure transparency of such deci-
sions, in particular those taken with the help of algorithms. Both legal acts impose
limitations on automated individual decision-making, including profiling. While these
limitations of automated decisions might come across as a forceful fortress strongly
protecting individuals and potentially even hampering the future development of
Artificial Intelligence in decision-making, the relevant provisions nevertheless contain
numerous exceptions allowing for such decisions. While the Directive on Data
Protection in Criminal Matters worryingly does not seem to give the data subject the
possibility to familiarize herself with the reasons for such a decision, the GDPR obliges
the controller to provide the data subject with ‘meaningful information about the logic
involved’ (Articles 13(2)(f), 14(2)(g) and 15(1)(h)), thus raising the much-debated
question whether the data subject should be granted a ‘right to explanation’ of the
automated decision. This article seeks to go beyond the semantic question of whether
this right should be designated as the ‘right to explanation’ and argues that the GDPR
obliges the controller to inform the data subject of the reasons why an automated deci-
sion was taken. While such a right would in principle fit well within the broader frame-
work of the GDPR’s quest for a high level of transparency, it also raises several queries:
What exactly needs to be revealed to the data subject? How can an algorithm-based de-
cision be explained? The article aims to explore these questions and to identify chal-
lenges for further research regarding explainability of automated decisions.
KEYWORDS: General Data Protection Regulation, automated decision-making, right
to explanation, algorithmic transparency, Article 22 GDPR, Article 11 Directive 2016/
680

INTRODUCTION
‘This is beautiful,’ exclaimed Fan Hui, the multiple European Go champion, when he saw Google’s AlphaGo making an unconventional move during a Go

* Associate Professor, Faculty of Law, Maastricht University, The Netherlands. E-mail: maja.brkan@maastrichtuniversity.nl. The author is grateful to the anonymous reviewers for their valuable comments on an earlier version of this article.

© The Author(s) (2019). Published by Oxford University Press. All rights reserved.
For permissions, please email: [email protected].

game.1 AlphaGo eventually won the ancient game, much more complex than chess,
and proved that machines could outperform humans . . . again.2 With the rise of intelligent machines, the input of Big Data and more or less complex algorithms, au-
tonomous artificially intelligent agents are becoming more and more powerful and
are enabling numerous decisions to be taken entirely automatically. The rapid in-
crease of the use of algorithms, processing large amounts of data, in financial, bank-
ing and insurance services, in medical services, in public administration and on the
stock markets, offers infinite possibilities for inventing new machine-learning algorithms or configuring existing ones, such as decision trees or neural networks.
The use of these algorithms boosts companies’ speed and arguably also the accuracy
of decision-making; at the same time, algorithmic decision-making can lead to biased
decisions, in particular when sensitive data such as race, ethnic origin or sexual orien-
tation are at stake.
Artificial Intelligence (AI) is expected to be the major trigger for the ‘fourth in-
dustrial revolution’ that is predicted to change the way our society functions and
how humans relate to each other, to alter the job market and job demands as well as entire industries, which will take the path of digitalization. The European and glo-
bal society is witnessing an exponential technological advancement in the field of Big
Data and AI. In recent years, technical developments in the field of robotics and appertaining software have seen undreamed-of progress, ranging from humanoid,
autonomous and care robots, autonomous vehicles, robot nannies and toys, to AI
agents used for predictive policing or medical diagnosis and more. Other examples
of deployment of AI are numerous, such as AI-supported voice-generating features in smartphones such as Siri; personal assistants such as Alexa; and voice, facial and pattern recognition or automated profiling, which enables companies to send targeted advertising to their customers. Robots help paralysed people to walk and, in 2016, the first autonomous robotic surgery took place.3 The latest trends in robotic innovation and industry have enormous business potential as well as ethical and legal limitations.
Big Data and AI are at the top of the EU’s agenda for digitalizing the European economy through the Digital Single Market4 and the EU institutions are pioneering the establishment of clear legal and ethical guidelines for AI.5

1 See Cade Metz, ‘The Sadness and Beauty of Watching Google’s AI Play Go’ <https://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/> accessed 21 November 2018.
2 For example, an AI outperformed a human also in playing chess (AI Deep Blue) and Jeopardy (AI
Watson).
3 KG Orphanides, ‘Robot Carries out First Autonomous Soft Tissue Surgery’ <http://www.wired.co.uk/article/autonomous-robot-surgeon> accessed 25 July 2018.
4 Communication from the Commission to the European Parliament, the Council, the European Economic
and Social Committee and the Committee of the Regions. A Digital Single Market Strategy for Europe
(COM/2015/192 final).
5 See for example, the European Parliament resolution of 16 February 2017 with recommendations to the
Commission on Civil Law Rules on Robotics, 2015/2103(INL); Communication from the Commission
to the European Parliament, the European Council, the Council, the European Economic and Social
Committee and the Committee of the Regions: Artificial Intelligence for Europe, COM(2018) 237 final;
Declaration of cooperation on Artificial Intelligence, signed by EU Member States <https://ec.europa.eu/digital-single-market/en/news/eu-member-states-sign-cooperate-artificial-intelligence> accessed 26 June 2018; Draft AI Ethics Guidelines ‘Artificial intelligence, real benefits’ <https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai> accessed 18 December 2018.

However, the crucial role that the EU plays in this field raises many unanswered
questions. While European companies exponentially use Big Data and AI in their business models, this approach not only needs to be embedded into a clear and con-
cise EU legal framework providing for privacy,6 transparency7 and accountability,8
but also has to respect ethical rules.9 Transformation into a true ‘Big Data and AI so-
ciety’ with broader use of Big Data mining and AI agents is only possible if users trust these agents. This links back to legal, technological, economic and ethical queries: if technology is designed in a way that generates trust and enables compliance with legal requirements of transparency and accountability, while respecting high ethical demands, this leads to its increased use by businesses and higher competitiveness on the EU Digital Single Market.
Against this backdrop, the purpose of this article is to analyse the rules of the
General Data Protection Regulation (GDPR)10 on automated decision-making in
the age of Big Data and to explore how to ensure transparency of such decisions, in
particular those taken with the help of algorithms. The article thus analyses the rules
of the GDPR and the Directive on Data Protection in Criminal Matters11 on auto-
mated individual decision-making and examines their consequences for data subjects.
The article further elaborates on the necessity of safeguards in automated decision-
making, such as providing data subjects with an explanation of an automated
decision.

AUTOMATED DECISION-MAKING
Automated decision-making could be defined as taking a decision without human
intervention; according to the GDPR, ‘automated individual decision-making’ is ‘a
decision based solely on automated processing’.12 The human can of course feed the
system with data—although even this can be an automatic procedure—and interpret
the decision once it is taken. If the automated decision-making does not have any
binding effect on data subjects and does not deprive them of their legitimate rights, such decision-making is of low impact. However, when a decision is binding for

6 Ryan Calo, ‘Peeping HALs: Making Sense of Artificial Intelligence and Privacy’ (2010) 2 European Journal
of Legal Studies 3, 168–192.
7 Bryce Goodman and Seth Flaxman, ‘European Union regulations on Algorithmic Decision-making and a
“right to explanation”’ (2016) ICML Workshop on Human Interpretability in Machine Learning, New York
<https://arxiv.org/pdf/1606.08813v3.pdf> accessed 18 July 2017.
8 Andrea Bertolini, ‘Robots as Products: The Case for a Realistic Analysis of Robotic Applications and
Liability Rules’ (2013) 5 LIT 2, 214–47.
9 Brent Daniel Mittelstadt and others, ‘The Ethics of Algorithms: Mapping the Debate’ (2016) 3 Big Data &
Society 1, 1–21.
10 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the pro-
tection of natural persons with regard to the processing of personal data and on the free movement of
such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016,
1.
11 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protec-
tion of natural persons with regard to the processing of personal data by competent authorities for the
purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution
of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision
2008/977/JHA, L 119, 4.5.2016, p 89.
12 art 22(1) of the General Data Protection Regulation.

individuals and affects their rights, by deciding for example whether a client should be awarded credit, a tax return or be employed, the law has to provide sufficient safeguards to protect this individual.13
Automated decision-making seems to encompass a multitude of decision types,
ranging from displaying search results, profiling,14 high-frequency trading,15 decisions
on granting of a loan by a bank, administrative decisions16 and to a certain extent
even judicial decisions.17 The notion of automated decision-making is not a unitary
concept, comprising only a particular type of decisions. Rather, it is broad, multifacet-
ed and prone to be divided into several sub-categories. Before analysing the provi-
sions of the GDPR and the Directive on Data Protection in Criminal Matters, it is
important to distinguish between procedural and substantive automated decision-
making; algorithmic and non-algorithmic automated decision-making; and rule-
based as opposed to law-based decisions.
Procedural/substantive. The procedural/substantive divide does not refer to taking
procedural or substantive decisions; it rather means that automated decisions will
have to be adopted in a way that guarantees procedural and substantive fairness and accuracy. Procedural fairness requires that all decisions relating to the same or comparable facts are taken according to the same automated pro-
cedure.18 However, decisions also have to be substantively fair, meaning that they
should not be discriminatory in any way, especially not decisions taken with the help
of algorithms.19
Algorithmic/non-algorithmic. Algorithmic decision-making is automated decision-
making with the support of algorithms. There is no common definition of the notion
of algorithm across literature. However, it has to be specified that, in automated
decision-making, we are dealing with computer algorithms that can be defined as ‘a
set of steps to accomplish a task that is described precisely enough that a computer
can run it’.20 Many—if not most—automated decisions nowadays are taken with the

13 See further on efficiency and fairness in automated decision-making Tal Zarsky, ‘The Trouble with
Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and
Opaque Decision Making’ (2016) 41 Science, Technology, & Human Values 118–32.
14 More on profiling see Mireille Hildebrandt and Serge Gutwirth (eds), Profiling the European Citizen.
Cross-Disciplinary Perspectives (Springer 2008).
15 Frank Pasquale, The Black Box Society. The Secret Algorithms That Control Money and Information
(Harvard University Press 2015); Jacob Loveless and others, ‘Online Algorithms in High-frequency
Trading. The challenges faced by competing HFT algorithms’ (2013) 11 acmqueue 1.
16 Melissa Perry, ‘iDecide: Administrative Decision-Making in the Digital World’ (2017) Australian Law
Journal 29–34.
17 See Trevor Bench-Capon and Thomas F Gordon, ‘Tools for Rapid Prototyping of Legal Case-Based
Reasoning’ (2015) ULCS-15-005, University of Liverpool, UK; Giovanni Sartor and Luther Branting
(eds), Judicial Applications of Artificial Intelligence (Springer 1998); Angèle Christin, Alex Rosenblat and
Danah Boyd, ‘Courts and Predictive Algorithms’ (2015) Data & Civil Rights: A New Era of Policing
and Justice <http://www.law.nyu.edu/sites/default/files/upload_documents/Angele%20Christin.pdf>
accessed 16 January 2018.
18 On the concept of procedural regularity in automated decision-making, see Joshua A Kroll and others,
‘Accountable Algorithms’ (2017) 165 University of Pennsylvania Law Review 637, 638, 641, 656ff.
19 On algorithmic discrimination see for example Bryce Goodman, ‘Discrimination, Data Sanitisation and
Auditing in the European Union’s General Data Protection Regulation’ (2016) 2 European Data
Protection Law Review 4, 493–506.
20 Thomas H Cormen, Algorithms Unlocked (MIT Press 2013) 1.

support of algorithms. With the increasing use of Big Data and more and more com-
plex decision-making, algorithmic intervention has become almost indispensable.

Rule-based/law-based automated decisions. Both ‘rule-based’ and ‘law-based’ deci-
sions are taken on the basis of rules, but the source of the rule for both types of deci-
sions is different. For rule-based decisions the rule is mostly an outcome of a
business decision, for example profiling for the purposes of targeted advertising (eg a
company sending an advertisement about vacation in Bali to all people searching for
vacation in Asia). Law-based decisions are based on a legal rule that is binding on everyone. An example is a rule prescribing that everyone who exceeds the speed limit gets a fine. Unless the law-based rule is very clear and precise, automated decisions based on law face the challenge of law’s open texture and of notions requiring interpretation. Autonomous decision-making presupposes that the rules to be applied are not prone to interpretation and do not leave the decision-maker much, or any, discretion in taking the decision.
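To make the distinction concrete, the following minimal sketch (in Python, and not taken from the article) contrasts a hypothetical rule-based decision, in which the rule stems from a business choice, with a hypothetical law-based one, in which the rule transposes a legal norm such as the speed-limit fine mentioned above; all function names, fields and thresholds are assumptions invented for the illustration.

# Illustrative sketch only: a hypothetical rule-based and a hypothetical law-based
# automated decision. All names and values are invented for this example.

def rule_based_ad_decision(search_history):
    # Business rule: advertise a Bali vacation to anyone who searched for vacation in Asia.
    return any('asia' in query.lower() for query in search_history)

def law_based_fine_decision(measured_speed_kmh, speed_limit_kmh):
    # Legal rule: everyone who exceeds the speed limit gets a fine.
    return measured_speed_kmh > speed_limit_kmh

print(rule_based_ad_decision(['cheap flights asia', 'hotels bali']))       # True: send the advertisement
print(law_based_fine_decision(measured_speed_kmh=62, speed_limit_kmh=50))  # True: issue the fine

Both rules are deliberately unambiguous; as noted above, law-based rules that are open-textured or require interpretation resist this kind of fully automated application.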

AUTOMATED INDIVIDUAL DECISION-MAKING IN THE EU DATA PROTECTION LEGISLATION
This section analyses the provisions of the GDPR and the Directive on Data
Protection in Criminal Matters on automated decision-making and explains when
such decision-making is possible and under which conditions. Given that automated decision-making is used much more regularly in a commercial setting than for
the purposes of law enforcement, the focus of this section—and of this article more
generally—is on the analysis of the GDPR provisions rather than those of the said
directive. The GDPR regulates automated individual decision-making, including
profiling, in its Article 22. The first paragraph of this provision stipulates: ‘[t]he data
subject shall have the right not to be subject to a decision based solely on automated
processing, including profiling, which produces legal effects concerning him or her or
similarly significantly affects him or her.’ Article 11 of the Directive on Data
Protection in Criminal Matters takes a comparable stance towards automated indi-
vidual decision-making by imposing an obligation on Member States to prohibit ‘a
decision based solely on automated processing, including profiling, which produces
an adverse legal effect concerning the data subject or significantly affects him or her’.
The provision of the Directive on Data Protection in Criminal Matters builds
upon an equivalent provision of the predecessor of this legal act, that is, the Council
Framework Decision 2008/977/JHA.21 Article 22 of the GDPR continues the legacy
of the 1995 Data Protection Directive,22 more precisely its Article 15, according to
which the data subject equally had the right not to be subject to a decision producing
legal effects or significantly affecting her and which was based solely on automated
processing of data. While the wording of the provision did not undergo substantial

21 See art 7 of the Council Framework Decision 2008/977/JHA of 27 November 2008 on the protection of
personal data processed in the framework of police and judicial cooperation in criminal matters, OJ L
350, 30.12.2008, 60.
22 Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protec-
tion of individuals with regard to the processing of personal data and on the free movement of such data,
OJ L 281, 23.11.1995, 31.

changes with the adoption of the GDPR,23 the practical importance of the provision
increased with the augmented use of automated decision-making. The provision of the Data Protection Directive also enumerated some examples of automated individual deci-
sions, namely decisions ‘to evaluate certain personal aspects relating to [data subject]
such as his performance at work, creditworthiness, reliability, conduct, etc’. These
examples demonstrate that the provision of the Data Protection Directive seemed to
focus mostly on instances of profiling based on automated processing of data, not
including other types of automated decision-making. The GDPR puts some flesh on
the bones of the provision by providing examples of automated decision-making,
that is, ‘automatic refusal of an online credit application or e-recruiting practices
without any human intervention’ (Recital 71).
Against this backdrop, it is interesting to observe how Article 22 GDPR devel-
oped throughout the legislative procedure. The provision evolved from focusing spe-
cifically on profiling to a more general formulation using a broader notion of
automated individual decision-making.24 In the initial Commission proposal, unlike in the final GDPR, this provision, titled ‘Measures based on profiling’,25 regulated only profiling based on automated processing. The scope of application of the initial provision thus seems to have been more limited, since it applied only to profiling.
Moreover, the initial provision contained a separate paragraph on the obligation to
inform the data subject about the existence of automated processing and ‘the envis-
aged effects of such processing on the data subject’.26 In the final version, however,
the obligation to inform the data subject about such processing was moved to
Articles 13 and 14 obliging the controller to provide certain information to the data
subject. While in the first reading in the European Parliament, the provision kept the
focus on profiling, the paragraph on informing the data subject about the envisaged
effects of profiling was deleted.27 In the first reading in the Council, the provision
then took the shape of the current Article 22 GDPR, no longer restricting the scope of the article merely to profiling, but rather including profiling in a more general category
of individual automated decision-making.28 Nevertheless, as mentioned in the GDPR

23 Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated
Decision-Making does not Exist in the General Data Protection Regulation’ (2017) 7 International Data
Privacy Law 2, 81 and fn 66, point out that the text of art 22 GDPR has not been changed much com-
pared to art 15 of the Data Protection Directive.
24 Compare Wachter, Mittelstadt and Floridi (n 23) 81, who equally analyse the legislative developments of
art 22 GDPR, but rather from the perspective of how the (non-)inclusion of the right to explanation into
the text of the Regulation developed in the course of the legislative procedure.
25 art 20 of GDPR proposal, COM(2012) 11 final.
26 ibid, art 20(4).
27 art 20 (Profiling), European Parliament legislative resolution of 12 March 2014 on the proposal for a
regulation of the European Parliament and of the Council on the protection of individuals with regard to
the processing of personal data and on the free movement of such data (General Data Protection
Regulation), COM(2012)0011—C7-0025/2012—2012/0011(COD) (Ordinary legislative procedure:
first reading).
28 Position of the Council at first reading with a view to the adoption of a Regulation of the European
Parliament and of the Council on the protection of natural persons with regard to the processing of per-
sonal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data
Protection Regulation)—Adopted by the Council on 8 April 2016.

proposal,29 this provision still takes into consideration the Recommendation on profiling issued by the Council of Europe.30

Regardless of the broader formulation of the GDPR, it is questionable to what extent
the scope of application of Article 22 GDPR covers decision-making that is indeed
broader than decisions based on profiling.31 The data subject has a right not to be sub-
ject to a decision based exclusively on ‘automated processing’, and profiling is a type of
processing mostly leading to such decisions. According to the GDPR, profiling signifies
processing of personal data in a way to use it to ‘evaluate certain personal aspects relat-
ing to a natural person’, such as for example ‘to analyse or predict aspects concerning
that natural person’s performance at work, economic situation, health, personal prefer-
ences, interests, reliability, behaviour, location or movements’ (Article 4). It is difficult
to imagine examples where person’s personal data not leading to profiling would lead to
an automated decision. A potential example would be automated application of tax rules
in order to determine how much tax return a tax resident would get. However, would
that decision again not be based on her personal tax profile? There are automated deci-
sions and predictions that do not amount to profiling, such as high-frequency trading or
predictions of outcomes of judicial decisions, but they do not involve processing of per-
sonal data and would thus not fall within the ambit of Article 22 GDPR.
Article 22 GDPR reflects, on the one hand, European scepticism towards biases
and potentially false decisions that can be taken by automated means if they are not
verified by humans. On the other hand, this provision, by giving certain guarantees
to the data subject, notably the right to human intervention, addresses concerns
around the lack of ability of data subjects to influence decisions which are to an
increasing extent taken by automated means.32 At first glance, the general negative
stance towards automated decisions comes across as a forceful fortress for strongly
protecting individuals and potentially even hampering the future development of AI
in decision-making. However, on a more comprehensive level of evaluation, it can be
argued that this provision, containing numerous limitations and exceptions, looks ra-
ther like a Swiss cheese with giant holes in it.

Automated decisions barred by the EU data protection legislation


Article 22 GDPR is placed within the chapter of this regulation containing rights of
the data subject.33 Quite differently, Article 11 of the Directive on Data Protection in

29 Proposal for a Regulation of the European Parliament and of the Council on the protection of individuals
with regard to the processing of personal data and on the free movement of such data (General Data
Protection Regulation), COM(2012) 11 final, 9.
30 Recommendation CM/Rec(2010)13 of the Committee of Ministers to member states on the protection
of individuals with regard to automatic processing of personal data in the context of profiling (Adopted
by the Committee of Ministers on 23 November 2010 at the 1099th meeting of the Ministers’ Deputies).
31 Isak Mendoza and Lee A Bygrave, ‘The Right not to be Subject to Automated Decisions based on Profiling’
University of Oslo Faculty of Law Legal Studies Research Paper Series No 20/2017 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2964855> accessed 11 July 2018, 7, rightly point out that the legislative
process leading to the adoption of the GDPR leads to the result that it does not give the right to the data
subject to object to all profiling, but only to certain types of decisions arising from profiling.
32 ibid.
33 See Chapter III, ‘Rights of the data subject’, s 4, ‘Right to object and automated individual decision-mak-
ing’, of the GDPR.

Criminal Matters34 appears in the chapter on principles, preceding the one on rights
of the data subject. The latter provision is also phrased rather differently. Apart from the fact that it is addressed to the EU Member States—which is logical since it
appears in a directive—the latter article also contains a clear prohibition of auto-
mated decision-making rather than a right that can be invoked against such a deci-
sion. Already this discrepancy between the two sets of legal rules could potentially
indicate that it might be difficult to understand the right not to be subject to auto-
mated decision making as an entitlement of the data subject which she needs to
invoke. It is therefore crucial to analyse the nature of the ‘right’ of the data subject
not to be subjected to automated decision-making from Article 22 GDPR. This right
can be understood either as a right that the data subject has to actively exercise or as
a ‘passive’ right that the controllers taking an automated decision have to observe
themselves without an active claim from the data subject.35
If the ‘right’ from Article 22(1) GDPR is constructed in the former way, the exercise of the right would depend on the data subject’s free will and her choice. If the data sub-
ject chooses to exercise this right and requires the decision not to be based solely on
automated processing, the data controller would have three possibilities: first, it could take the decision with human intervention, that is, a decision which is
the final decision with automated means if no exceptions from Article 22(2) apply;
or third, it would still be able to take an automated decision in case of applicability of
one of the exceptions from Article 22(2).36 In order for a decision not to be based
exclusively on automated processing (first option), the controller would need to
make use of human intervention having capacity to alter the automated decision.
The WP29 interprets this notion as a meaningful oversight of a human over the deci-
sion that is exercised by ‘someone who has the authority and competence to change
the decision’.37 Mere rubberstamping of a decision taken by an algorithm would not
qualify as such human involvement.38
However, not choosing to exercise this right would lead to the result that
automated decisions having the characteristics described in Article 22(1) could be
lawfully taken.39 If such a decision were necessary to enter into or perform a
contract (22(2)(a)) or be based on data subject’s explicit consent (22(2)(c)), the
controller would be obliged to provide the data subject with safeguards from Article
22(3). However, if none of the exceptions from Article 22(2) applied and the data
subject did not exercise her ‘right’ to oppose automated decision-making, it would be
possible to take a fully automated decision having legal consequences for the data

34 See Chapter II, ‘Principles’, of the Directive on Data Protection in Criminal Matters.
35 Wachter, Mittelstadt and Floridi (n 23) 94, point out that Article 22 can be interpreted either as a prohib-
ition or as a right to object and that the two interpretations ‘are differentiated by whether action is
required by the data subject to restrict automated decision-making’.
36 See in this sense Wachter, Mittelstadt and Floridi (n 23) 95.
37 WP29, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation
2016/679, adopted on 3 October 2017; as last revised and adopted on 6 February 2018, 17/EN,
WP251rev.01, 21.
38 ibid., WP29 also stresses that the requirements of Article 22 cannot be avoided by merely ‘fabricating
human involvement’.
39 Compare Wachter, Mittelstadt and Floridi (n 23) 95.

subject40 without providing her with necessary safeguards from paragraph 3 of that
provision. This is because safeguards need to be provided only when one of the exceptions applies: in the case of contract or consent, the safeguards from Article 22(3), and, in the case of an automated decision on the basis of Union or Member State law, the safeguards provided for by that law. Legal or other consequences of such a decision, taken as a
result of a failure of the data subject to exercise her right, could be extremely un-
favourable for the data subject.
Interpreting Article 22(1) as giving the data subject a right that she has to actively exercise could consequently lead to detrimental effects for her and run contrary to the
purpose of this provision which aims to protect data subjects against risks related to
automated decision-making. A systematic interpretation of Article 22 implies that only
automated decisions fulfilling the requirements of paragraph 2 and allowing for safe-
guards from paragraph 3 of this provision are authorized by the GDPR. Therefore, as
pointed out by Mendoza and Bygrave, it is more appropriate to construct the data sub-
jects’ ‘right’ as a prohibition of fully automated decision-making that the data controllers
have to comply with.41 This position is confirmed by the guidelines of Article 29
Working Party which state that Article 22(1) ‘establishes a general prohibition’ of auto-
mated decision-making,42 meaning ‘that individuals are automatically protected from
the potential effects this type of processing may have’.43 Such interpretation of Article
22(1) furthermore aligns this provision to the wording of Article 11 of the Directive on
Data Protection in Criminal Matters which imposes on the Member States a clear obli-
gation to prohibit automated decisions having certain characteristics.44 Interpreting the
two sets of rules in a relatively coherent manner would contribute to ensuring coher-
ence of a broader system of data protection within the EU.
Constructing the data subjects’ ‘right’ as a general prohibition of certain types of automated decisions also sheds a different light on the conditions in both Article 22(1)
GDPR and Article 11 of the Directive. On the basis of this reading, a decision having
the following characteristics is prohibited by this provision: (i) the decision has to be
individual,45 (ii) it needs to be based solely on automated processing and (iii) it
needs to have legal or significant effects on the data subject (the Directive contains
an additional requirement of ‘adverse’ legal effects).
From that perspective, the first condition has to be understood as prohibiting in-
dividual automated decisions, that is, decisions relating only to a particular natural46

40 Compare ibid 95.


41 Mendoza and Bygrave (n 31) 9.
42 WP29, Guidelines on Automated individual decision-making, 19.
43 ibid 20.
44 Curiously, Recital 38 of the Directive on Data Protection in Criminal Matters phrases this provision as a right
rather than a prohibition by stating that the ‘data subject should have the right not to be subject to a decision
. . . based solely on automated processing’. While the reasons for using the terminology of a ‘right’ rather
than ‘prohibition’ in the preamble of this directive remain unclear, the same considerations in favour of read-
ing this provision as a prohibition, relevant regarding the GDPR, apply also for the said Directive.
45 While this condition is not specifically mentioned in the body of either art 22 GDPR or art 11 of the
Directive, it can be inferred from the title of both provisions as well as from the circumstance that both
provisions refer to decision concerning data subject (in singular and not in plural).
46 According to art 1(1) of both the GDPR and the Directive, these legal instruments protect only data of
natural and not legal persons.

person, a single data subject. Individual decisions can be binding on an individual (such as a decision on loan application, credit card application, welfare and financial decisions, granting a visa, choosing a taxpayer for audit) or non-binding (such as
profiling, eg sending targeted advertisements to an air traveller on the basis of her
profile). In line with the general scope of application ratione personae of both the
GDPR and the Directive which, according to Article 1(1), regulate only the protec-
tion of individuals and not groups, the textual interpretation of Article 22 GDPR and
Article 11 of the directive seem to exclude ‘collective’ decisions affecting several nat-
ural persons or a group of those linked together either by virtue of their common
characteristics, their belonging to a group or their living in a particular area.47 Taking
a collective decision could have a particularly important impact on data subjects in
criminal matters, such as a machine-based decision taken by the police to increase po-
lice monitoring in a certain geographical area, affecting all data subjects residing in
this area. A collective decision in non-criminal matters would, for instance, be a deci-
sion on dynamic pricing, selling a certain product for a certain price to a category of
data subjects belonging to a certain income bracket.
In their current wording, neither Article 11 of the Directive nor Article 22 GDPR,
read together with their respective Articles 1, would ratione personae cover such a col-
lective decision, leading to the result that such decisions would be allowed. Indeed,
both the GDPR and the Directive seem to follow the logic that the underlying ra-
tionale for data protection of groups of data subjects differs from the rationale for
data protection of an individual data subject. An argument that is sometimes put for-
ward in this regard is that a collective decision is not necessarily linked to personal
data of a particular individual, but can be easily based on anonymized data which
would render EU data protection legislation inapplicable.48 However, anonymization
of data is not sufficient as long as the data subject remains identifiable.49 With an
increasing importance and use of Big Data, re-identification of an individual apper-
taining to a certain group is significantly facilitated.50 Classifying data subjects into a
specific category (man/woman, low/high income) enables collective decisions per-
taining to this group. Moreover, Big Data greatly enables group-related automated
decision-making which might repeat or even strengthen the bias favouring a particu-
lar non-objective outcome.51 Excluding collective automated decisions from the

47 For a collective data protection aspect in the age of Big Data analytics see Alessandro Mantelero,
‘Personal Data for Decisional Purposes in the Age of Analytics: From an Individual to a Collective
Dimension of Data Protection’ (2016) 32 Computer Law & Security Review 2, 238–55.
48 According to Recital 26 GDPR and Recital 21 of the Directive on Data Protection in Criminal Matters,
the ‘principles of data protection should . . . not apply to anonymous information’.
49 Personal data are defined in art 4(1) GDPR and 3(1) of the Directive as ‘any information relating to an iden-
tified or identifiable natural person’ (emphasis added). On identifiability, see Worku Gedefa Urgessa, ‘The
Protective Capacity of the Criterion of “Identifiability” under EU Data Protection Law’ (2016) 2 European
Data Protection Law Review 4, 521–31. Furthermore, Moerel and van der Wolk point out that, in the Big
Data environment, it is difficult to fully anonymise the data and ‘retain the functionality and usability of the
underlying information’; see Lokke Moerel and Alex van der Wolk, ‘Big Data Analytics Under the EU
General Data Protection Regulation’ <https://ssrn.com/abstract=3006570> accessed 15 June 2018, 31.
50 Further on tension between Big Data and art 22 GDPR, see Tal Zarsky, ‘Incompatible: The GDPR in the
Age of Big Data’ (2017) 47 Seton Hall Law Review 4, 1017ff.
51 Dennis Broeders and others, ‘Big Data and security policies: Towards a Framework for Regulating the
Phases of Analytics and Use of Big Data’ (2017) 33 Computer Law & Security Review 3, 314, point out

scope of application of the GDPR would not only create an enormous imbalance in how individual and collective automated decisions are treated, but could also open the door to circumventing the prohibition of individual automated decisions by adopting collective decisions whenever possible.52 Therefore, in the light of the high level of protection of the data subject, it is submitted that collective automated decisions should
be covered by the scope of application of Article 22 GDPR and Article 11 of the
Directive on Data Protection in Criminal Matters. A possible way to include such
decisions into the scope of application of these two legal instruments is to consider
the decision regarding a group as a bundle of individual decisions. A purposeful
(teleological) interpretation of Article 22 GDPR and Article 11 of the Directive on
Data Protection in Criminal Matters, coupled with the need to guarantee the data
subject a high level of data protection, should lead the Court of Justice of the EU
(CJEU) to adopt this interpretative stance.
Second, the GDPR and the Directive on Data Protection in Criminal Matters do
not allow for a decision to be based ‘solely’ on automated processing. Whether a de-
cision is fully automated or not depends, in the first place, on whether human inter-
vention can be embedded in the process of automated decision-making. For
example, if the price of a product sold online is determined on the basis of a data
subject’s income and the price is shown automatically on the website, it could poten-
tially be very cumbersome to require the controller to manually verify every price
determined in that way. Such a decision would hence be based solely on automated
processing, but it would be prohibited by Article 22 GDPR only if it also fulfilled
other requirements from this provision.53 However, if the process allows for human
intervention, it is to be verified whether a ‘minimal’ involvement of a human who has the power to change the automated decision automatically renders this decision not to be based solely on automated processing.54
As clarified in the WP 29 Guidelines on automated decision-making, the answer
to this question should be in the negative.55 Indeed, a formalistic interpretation,
involving the human only as a necessary part of procedure but ultimately leaving the
decision power to the machine, would not ensure a sufficiently high level of data pro-
tection of the data subject. In order for the decision not to be based solely on auto-
mated processing, the human should assess56 the substance of the decision and not
be involved merely as another (empty) procedural step. In other words, in order to
escape the prohibition from Article 22 GDPR or Article 11 of the Directive on Data
that ‘Big Data analyses may reinforce social stratification by re-producing and reinforcing the bias that is
present in every dataset’.
52 Given that a cluster of multiple data subjects does not necessarily constitute a group of data subject with
the same or similar characteristics, this might not always be possible.
53 In particular, it is doubtful whether such a decision would have legal or similar significant effects which
could potentially bring it outside of the scope of art 22 GDPR. The existence of legal or significant effect
would of course need to be determined on a case by case basis.
54 Wachter, Mittelstadt and Floridi (n 23) 92, seem to be of that opinion by pointing out that ‘the phrase
“solely” suggests even some nominal human involvement may be sufficient’ and that the legislative pro-
cess seems to show that ‘the strict reading of “solely” was intended’.
55 WP29, Guidelines on Automated individual decision-making, 20–21.
56 Mendoza and Bygrave (n 31) 10, point out that the human has to ‘actively assess the result of the proc-
essing prior to its formalisation as a decision’.

Protection in Criminal Matters, the human has to use the machine only as a decision
support. If the human intervention fulfils these criteria and renders the decision not fully automated, this decision is no longer within the scope of Article 22 GDPR.57
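As a purely illustrative sketch (not drawn from the article or from the WP29 guidelines), the distinction between a machine used merely as decision support and a decision ‘based solely on automated processing’ can be pictured in code as the presence of a human reviewer who genuinely assesses the substance of the recommendation and has the authority to overturn it; all function names, fields and thresholds below are hypothetical assumptions.

# Illustrative sketch only: human involvement as genuine decision support.
# The scoring logic, field names and thresholds are invented for this example.

def algorithmic_recommendation(applicant):
    # A toy scoring step standing in for the automated processing.
    score = 0.6 * applicant['income'] / 1000 + 0.4 * applicant['years_employed']
    return {'approve': score > 30, 'score': score}

def final_decision(applicant, human_review):
    # The decision is arguably not 'solely' automated only if human_review involves a
    # meaningful, substantive assessment with authority to change the outcome;
    # rubber-stamping the recommendation would not suffice.
    recommendation = algorithmic_recommendation(applicant)
    return human_review(applicant, recommendation)

# Example: a reviewer who follows the recommendation only for clearly positive scores
# and rejects (for manual handling) everything else.
decision = final_decision({'income': 28000, 'years_employed': 2},
                          human_review=lambda person, rec: rec['approve'] and rec['score'] > 35)

This is only a schematic picture; whether a given review process amounts to meaningful human involvement remains a legal assessment to be made on the facts of each case.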
Third, the GDPR and the Directive on Data Protection in Criminal Matters prohibit only decision-making, including profiling, which produces legal effects (in the case of the Directive, ‘adverse’ legal effects) for the individual or significantly affects the individual (in the case of the GDPR, ‘similarly’ significantly affects). Even though neither of the
two legal instruments defines the notion of legal effects, it can be assumed that a de-
cision having legal effects is a binding decision that impacts a legal position or legal
interests of a data subject.58 For example, a decision of a tax authority on tax return
of a data subject, calculated on the basis of her income, is a decision having legal
effects relating to this data subject within the meaning of the GDPR. A decision
taken by the police to arrest a data subject or to seize her mobile device, taken on the
basis of her personal data, is a decision having adverse legal effects on that person
within the meaning of the said Directive. ‘Legal effects’ are an objective benchmark
which is applicable in the same way for all data subjects.59
While it seems relatively straightforward to determine which decision would have
legal effects on an individual, it is less clear what kind of decision-making or profiling
‘(similarly) significantly affects’ such an individual. This is because the same automated decision can have a very different impact on different data subjects. For ex-
ample, automatic determination of monthly salary on the basis of productivity of a
worker might have a much bigger impact on a blue-collar worker with a minimum
wage whose productivity dropped due to health issues compared to a white-collar
worker who was less productive because she was searching to buy a new house dur-
ing working hours. An analysis of the existence of significant effect will therefore ne-
cessitate a careful analysis of the subjective circumstances of the data subject.60 The GDPR gives examples of a refusal of an online credit application or the use of automated
decision-making in e-recruiting practices. These are instances where the data subject
acts as an applicant for a credit card, an insurance contract with a certain premium or
a job position. It is important to note that, as clarified by the WP29, a ‘similar’ signifi-
cant effect does not mean that this effect needs to have any legal implications for the
data subject; rather, ‘similar’ refers to the significance and not the nature of the ef-
fect.61 WP29 gives further examples of such significant effect: automated decisions
affecting data subject’s financial circumstances, access to health services or
education.62

57 Zarsky (n 50) 1016.


58 In view of WP29, a decision having a legal effect is a decision affecting data subject’s legal rights, legal sta-
tus or her rights under a contract; see WP29, Guidelines on Automated individual decision-making, 21.
Compare also Wachter, Mittelstadt and Floridi (n 23) 93.
59 Compare Wachter, Mittelstadt and Floridi (n 23) 93, who point out that legal effect ‘can be determined
according to the letter of the law’.
60 Wachter, Mittelstadt and Floridi (n 23) 93, claim that significance depends ‘on the perception of the data
subject’. This author believes that, while data subject’s perception might be the first indication of the deci-
sion having a significant impact on her, determination of significant effect requires a much more in-depth
analysis of data subject’s circumstances.
61 WP29, Guidelines on Automated individual decision-making, 21.
62 ibid.

In some cases, however, the existence of significant effect seems less straightfor-
ward. In the GDPR framework, when does sending advertisements by Google and Facebook significantly affect an individual?63 Given the different potential impacts that
such targeted advertising can have on a data subject, it is close to impossible to clear-
ly answer this question in the affirmative or negative. For example, if the data subject
ignores such targeted advertising and does not follow up on it, it is rather difficult to
argue that the advertising significantly affects this data subject. By contrast, if a
person systematically shapes her purchasing decisions on the basis of such targeted
advertising, the significant effect would be more easily established. In view of WP29,
online advertising in principle does not have significant effect on data subjects, ex-
cept for example in cases where profiling on which advertisement is based is particu-
larly intrusive, targets vulnerabilities of data subjects or bars data subjects from
acquiring goods and services due to prohibitively high prices.64 This raises the ques-
tion whether, for the criterion of significant effect to be fulfilled, a causal link be-
tween the profiling and the action of the data subject needs to be established.
The requirement of such a causal link would, on the one hand, ensure that only
limited instances of targeted advertising would have significant effect on the data sub-
ject. On the other hand, the requirement of such a causal link would render the ana-
lysis of significant effect extremely complex. The question of necessity of a causal
link further raises the question of the burden of proof: requiring the data subject to prove
such a causal link might be very cumbersome for her, especially if the threshold for
causality is rather high. Similarly, reversing the burden of proof and requiring the
controller to demonstrate the absence of a causal link might place the controller in a
difficult position, as the reasons for data subject’s decision to follow a particular ad-
vertisement might be rather non-transparent and subjective. An alternative test to es-
tablish significant effect in such cases would be to take as a benchmark an average
consumer rather than the actual consumer at whom the advertising was targeted.
However, such a solution might not take into account particular vulnerabilities of a
data subject, such as sicknesses, addictions, anxieties or traumatic past experiences.
Moreover, both Article 22 GDPR and Article 11 of the Directive require that the de-
cision significantly affects a particular data subject (‘him or her’) and not an average
one. In any event, even though neither of those provisions explicitly requires the existence of a causal link, it will still need to be proven whether a certain decision significantly affected the data subject, which might, in practice, be just as cumbersome as
establishing a causal link.65

Automated decisions authorized by the EU data protection legislation


The GDPR in Article 22(2) and the Directive on Data Protection in Criminal
Matters in Article 11 expressly authorise certain types of automated decisions.

63 Targeted advertisement will play an insignificant or even no role with regard to the Directive on Data
Protection in Criminal Matters as this Directive relates to data protection ‘by competent authorities for
the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execu-
tion of criminal penalties’ (Article 1(1)).
64 WP29, Guidelines on Automated individual decision-making, 22.
65 Wachter, Mittelstadt and Floridi (n 23) 93, correctly stress that ‘it may cause a burden for the data subject
to prove that processing affects them significantly’.

According to the GDPR, the prohibition from paragraph 1 of that provision does
‘not apply if the decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or
Member State law to which the controller is subject . . .; or (c) is based on the data
subject’s explicit consent.’ The Directive authorises only decisions ‘authorised by
Union or Member State law to which the controller is subject’. As the Directive relates to data protection in a law enforcement setting, it is understandable that this
can be the only legal ground for automated decision-making.
The first category of automated decisions authorized by the GDPR comprises those that are ‘necessary’ to enter into or perform a contract between a data subject and a data
controller. Depending on what the criterion of necessity refers to, this provision can
have different interpretations. If the meaning of this provision is to be constructed
on the basis of a very strict textual interpretation, it is questionable whether it would
ever open the door for automated decisions. In other words, it is doubtful whether
such decision-making can ever be necessary for entering into or performing a con-
tract, given that the same decision can (almost) always be taken manually. For ex-
ample, it can be argued that the conclusion of an insurance contract or a loan
contract certainly necessitates an assessment of risk—but does this risk necessarily
need to be assessed by automated means? Prices of flights are often determined
through dynamic pricing, taking into account the profile of the potential buyer—but
is such an automated determination of price really necessary for the conclusion or
performance of this purchase contract? The only logical answer to these questions is
in the negative. Therefore, it is submitted that the ‘necessity’ requirement will have
to be understood more as an ‘enabling’ requirement for the conclusion of a contract.
For example, if an automated assessment of credit risk enables the conclusion of a contract on the basis of which a data subject receives a credit card, such a situation
would be covered by the exception from Article 22(2)(a). Sometimes these contracts
are termed ‘algorithmic contracts’66 and are ever more frequent in online trading, Amazon being the most frequently cited example.
The WP29 opted for an even more minimalist interpretation of the necessity requirement, which is fulfilled already if an automated decision is necessary ‘for contractual purposes’ or even ‘with the intention of entering into a contract with a data sub-
ject’.67 The example given is reducing a number of job applications to relevant ones
with the help of an algorithm.68 However, in the view of this author, such an understanding unnecessarily broadens the scope of this exception. In addition, the given example contravenes the purpose of the GDPR, which expressly warns against automated e-recruiting practices.69 Shortlisting of candidates by automated means is, in fact, part of e-recruiting. A mere potential of entering into a contract with a data subject, should she be chosen for a job, should not be sufficient for triggering the exemption
from Article 22(2)(a) GDPR. Following the logic of ‘intention’ of concluding a

66 More on this type of contracts see Lauren Henry Scholz, ‘Algorithmic Contracts’ (2017) Stanford
Technology Law Review <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=274770> accessed 10
November 2018.
67 WP29, Guidelines on Automated individual decision-making, 23.
68 ibid.
69 Recital 71 GDPR.
contract with the data subject, targeted advertising, which is carried out with a direct
and unambiguous intention of concluding a contract, could potentially also be brought under this exception.

Therefore, the 'necessity' requirement cannot be reduced to a merely remote
possibility of concluding a contract. Typical examples of contracts where this
exception could play a role are electronic contracts for sale and contracts for the supply
of digital goods, services or automatically personalised software.
Second, fully automated decisions and profiling are allowed if they are authorized
by Union or Member State law that provides for sufficient safeguards to protect the data
subject's rights, freedoms and legitimate interests. This author is not aware of any
EU legislation to date that would expressly allow for fully automated decision-
making within the meaning of Article 22(2)(b) GDPR or Article 11 of the Directive
on Data Protection in Criminal Matters. On the contrary, other EU legislation, such as
the PNR Directive,70 permits automated decisions only as decision support.
While the PNR Directive in principle prohibits an automated decision 'that produces an
adverse legal effect on a person or significantly affects a person’ (Article 7(6)), it pro-
vides for the possibility of automated matching or identification of persons who
should be further examined by the competent authorities in view of potential in-
volvement in terrorism, but only if such matching is individually reviewed by non-
automated means.71 It seems that, at least theoretically, a legal possibility of fully
automated decisions is still a matter of the future.72
Furthermore, an example of a Member State law regulating automated decision-
making is the German law implementing the GDPR,73 which expressly allows for
automated decisions in the field of insurance. On the one hand, an automated decision
is allowed if it is taken in the framework of the performance of an insurance contract
and the request of the person in question was approved. As clarified by the
explanations accompanying this German law, this provision allows for such an automated
decision in a tortious relationship between the insurance company of the person who
caused the damage and the person who suffered the damage, under the condition that the
latter's claim is successful.74 On the other hand, German law also allows for an automated
decision about the insurance services of a private health insurance when the decision
is based on binding rules on remuneration for medical treatment.75 Finally, German
administrative law also allows for the automated adoption of administrative
acts in the framework of a fully automated administrative procedure.76 By way of

70 Directive (EU) 2016/681 of the European Parliament and of the Council of 27 April 2016 on the use of
passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terror-
ist offences and serious crime, OJ L 119, 4.5.2016, 132.
71 art 6(5) of the Directive (EU) 2016/681 of the European Parliament and of the Council of 27 April 2016
on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecu-
tion of terrorist offences and serious crime, OJ L 119, 4.5.2016, 132; compare also para 2 of this provi-
sion; and Mendoza and Bygrave (n 31) 6.
72 In practice, however, many decisions are already taken without human intervention.
73 See § 37 (Automatisierte Entscheidungen im Einzelfall einschließlich Profiling) of the German Gesetz zur
Anpassung des Datenschutzrechts an die Verordnung (EU) 2016/679 und zur Umsetzung der Richtlinie
(EU) 2016/680 (Datenschutz-Anpassungs- und -Umsetzungsgesetz EU – DSAnpUG-EU).
74 See ibid 106: explanations to § 37.
75 ibid.
76 See § 35a of the German Verwaltungsverfahrensgesetz (VwVfG) and explanation to § 37 of DSAnpUG-
EU above.
comparison, the Dutch law contains a more general implementation clause of Article
22(1) GDPR by allowing automated decision-making if this is necessary to comply

with a statutory obligation imposed on the controller or necessary for the performance
of a task of general interest.77 Despite its general formulation, this provision of
Dutch law still goes beyond the exceptions provided for in Article 22(2) GDPR and
creates an additional legal basis for lawful automated decisions.
Third, the GDPR also allows for automated decision-making if such a decision is
based on the explicit consent of the data subject. The GDPR only clarifies conditions
for consent,78 but not for ‘explicit’ consent, a notion further explained by the WP29
in its Guidelines on consent.79 According to these Guidelines, explicit consent
‘means that the data subject must give an express statement of consent’, such as a
signed written statement, filling out an e-form, or using electronic signature.80 In cer-
tain cases, notably with regard to decisions based on profiling, where the data subject
has to give her consent online, it can be questionable whether the consent obtained
online was indeed explicit or not. Online profiling is often done without the data subject
even knowing about it,81 and if the data subject did not give explicit consent for
profiling, she also did not consent to a decision taken on the basis of such profiling.
For example, explicit consent to cookies, without further explanation that the
use of cookies could lead to profiling and without a button allowing the data subject
to accept or reject such processing, should not necessarily mean consent to an auto-
mated decision based on such profiling. While the GDPR allows for profiling under
certain requirements,82 the decisions based on profiling should conform to certain
safeguards.83 Finally, it is understandable that the Directive on Data Protection in
Criminal Matters does not contain the exception based on consent, as requiring consent
from a criminal suspect could undermine the effectiveness of crime prevention.

77 See art 40(1) of the Dutch Regels ter uitvoering van Verordening (EU) 2016/679 van het Europees
Parlement en de Raad van 27 April 2016 betreffende de bescherming van natuurlijke personen in verband
met de verwerking van persoonsgegevens en betreffende het vrije verkeer van die gegevens en tot intrek-
king van Richtlijn 95/46/EG (Uitvoeringswet Algemene verordening gegevensbescherming).
78 art 7 GDPR.
79 See art 29 Working Party, Guidelines on consent under Regulation 2016/679, adopted on 28 November
2017; as last revised and adopted on 10 April 2018.
80 ibid 18.
81 art 29 Working Party, ‘Advice paper on essential elements of a definition and a provision on profiling
within the EU General Data Protection Regulation’, adopted on 13 May 2013, <http://ec.europa.eu/just
ice/data-protection/article-29/documentation/other-document/files/2013/20130513_advice-paper-on-
profiling_en.pdf> accessed 11 November 2018.
82 See Recital 72 GDPR according to which ‘[p]rofiling is subject to the rules of this Regulation governing
the processing of personal data, such as the legal grounds for processing or data protection principles’.
83 Prior to the applicability of the GDPR, countries that specifically allowed for profiling mostly required
additional safeguards in this regard. Italy can be used as an example of a country which specifically
allowed for profiling, but the data subject had to be notified prior to processing of data aimed at profiling:
See Guidelines on online profiling issued by Garante per la protezione dei dati personali; for a summary see
<http://blogs.dlapiper.com/iptitaly/?p=56970> accessed 26 May 2018. Moreover, some countries,
such as the Netherlands, even allowed for ethnic profiling, which may be problematic both from data pro-
tection and non-discrimination perspective: For more on this issue see Simone Vromen, ‘Ethnic Profiling
in the Netherlands and England and Wales: Compliance with International and European Standards’,
Public Interest Litigation Project (PILP-NJCM) / Utrecht University, <https://pilpnjcm.nl/wp-content/
uploads/2015/06/Research-project-B-FINAL-version.pdf> accessed 16 May 2018.
Should the EU Member States want to make use of fully automated systems of crime
prevention or determination of criminal sanctions, akin to systems such as COMPAS84 or

VALCRI,85 they remain free to adopt laws that would allow the use of such systems.

Safeguards in automated decision-making: From revealing the logic behind the decision to an explanation of the decision
Safeguards in automated decision-making
Whenever an automated decision is allowed, the data subject has to be provided with
appropriate safeguards. The goal of such safeguards is to prevent a wrong or discriminatory
decision or a decision that does not respect the data subject's rights and interests.
According to the GDPR, whenever an automated decision is authorised on the basis
of Union or Member State law, this law has to provide for 'suitable measures to safeguard
the data subject's rights and freedoms and legitimate interests' (Article 22(2)(b)).
In the other two examples—conclusion of a contract and explicit consent—the GDPR
equally requires such safeguards, but clarifies which minimum measures should be
provided for: the data subject shall have at least (1) the right to obtain human
intervention on the part of the controller, (2) to express his or her point of view
and (3) to contest the decision. The Directive on Data Protection in Criminal
Matters mandatorily requires merely that the Member States provide for the right to
human intervention, leaving them the freedom to grant the data subject any other
safeguards that they deem appropriate. This conclusion can be discerned from the text
of Article 11 of the Directive, according to which the data subject has to have 'at least
the right to obtain human intervention on the part of the controller'.86
The data subject therefore always has the right to obtain human intervention,
meaning that she can request that the fully automated decision becomes non-
automated through human intervention. For example, if an insurance risk is assessed
by automated means, the data subject can ask for a human assessment of the results
of such a decision. Comparably, in the case of the Directive, if a traveller is arrested at
the border because an algorithm analysing different police databases flags her as a suspect
in a recent terrorist attack, the suspect has the right to request that the decision
be reviewed by a human. The right to human intervention might pose practical difficulties
both for the data subject exercising her right and for the human reviewing the

84 COMPAS stands for ‘Correctional Offender Management Profiling for Alternative Sanctions’; see for ex-
ample Fact Sheet ‘COMPAS Assessment Tool Launched – Evidence-based rehabilitation for offender
success’ <https://www.cdcr.ca.gov/rehabilitation/docs/FS_COMPAS_Final_4-15-09.pdf> accessed 29
June 2018. The system was used, for example, for the determination of a six-year prison sentence for Eric
Loomis of Wisconsin, who unsuccessfully challenged such determination of his sentence before the
Wisconsin Supreme Court. See ‘State v. Loomis: Wisconsin Supreme Court Requires Warning Before
Use of Algorithmic Risk Assessments in Sentencing. Recent Case: 881 N.W.2d 749 (Wis. 2016)’ (2017)
130 Harvard L Rev, 1530. The US Supreme Court declined to hear the case; see Michelle Liu, ‘Supreme
Court Refuses to Hear Wisconsin Predictive Crime Assessment Case’, Milwaukee Journal Sentinel, 26
June 2017 <https://eu.jsonline.com/story/news/crime/2017/06/26/supreme-court-refuses-hear-wiscon
sin-predictive-crime-assessment-case/428240001/> accessed 29 June 2018.
85 VALCRI stands for ‘Visual Analytics for Sense-Making in Criminal Intelligence Analysis’; <http://valcri.
org/about-valcri/> accessed 29 June 2018.
86 Emphasis added.
decision. For example, if the data subject concludes an online contract with dynamic
pricing, it might be impossible to request a human intervention if the website does

not provide for that possibility. Moreover, according to the WP29, the human needs
to have both ‘authority and capability to change the decision’ and has to make ‘a
thorough assessment of all the relevant data’.87 While this approach is legally appro-
priate and societally desirable, it might present enormous difficulties in practice. In
particular, it remains unclear how a human with limited capacities of data analysis
will be able to justify that the final decision needs to be different from an algorithmic
one, given that the automated system might have taken into account not only the
data relating to the data subject affected by the decision, but also a multitude of other
complex datasets. If the automated decision was a simple sum of data appertaining to
a particular data subject, an in-depth human review of the automated decision would be
much more feasible. If, however, the decision is based on complex relations between
data in a Big Data environment, the human will have a much more difficult task in
reviewing such a decision. Finally, the willingness of a human to reappraise and mod-
ify the decision will greatly depend on her accountability for the final decision.
Increased accountability on the part of a human for a lack of intervention and even
her general accountability for the correctness of a decision might be an incentive for a
more rigorous review. Nevertheless, the accountability should always take into ac-
count the reasons why a decision was incorrect: if the algorithm was deliberately
designed so as to discriminate against a particular ethnic group, the accountability
should be ascribed to the developer of the algorithm and not to the human reviewing
the decision.
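The practical difficulty of meaningful human intervention described above can be pictured with a minimal, purely hypothetical human-in-the-loop sketch in Python; the field names, function names and workflow are invented and merely illustrate the idea that the reviewer must be able to see the inputs, override the outcome and record a justification.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str      # e.g. "reject"
    inputs: dict      # the data the system relied on

@dataclass
class HumanReview:
    reviewer: str
    revised_outcome: Optional[str]   # None means the automated outcome is upheld
    justification: str               # recorded so that accountability can be traced

def final_outcome(decision: AutomatedDecision, review: HumanReview) -> str:
    # The reviewer must have the authority to change the outcome; the justification
    # documents why intervention was (or was not) exercised.
    return review.revised_outcome if review.revised_outcome is not None else decision.outcome

decision = AutomatedDecision("ds-42", "reject", {"risk_score": 0.81})
review = HumanReview("case_officer_7", "accept", "risk score driven by outdated debt data")
print(final_outcome(decision, review))   # prints: accept

The sketch assumes that the reviewer can actually inspect the inputs and change the result; as argued above, where the decision rests on complex relations in large datasets, providing that capability in a meaningful way is far from trivial.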
Furthermore, on the basis of the GDPR, the data subject also has the right to express
her point of view, although it is not entirely clear what the legal consequence
should be if such an opinion is expressed. The most appropriate interpretation of
this right is that the controller should take the opinion of the data subject into account
when assessing the automated decision and should have an obligation to respond to
the data subject's point of view. Naturally, it is expected that this right will be exercised
together with the right to human intervention and most probably also with the
right to contest the decision. The possibility for the controller to ignore the opinion
of the data subject by not replying to it would render this right ineffective in practice.
However, this right should not give the data subject the possibility to endlessly
delay the adoption of the decision. Hence, the right balance has to be struck between
this right and the necessity to adopt the decision.
Finally, the data subject has the right to contest the decision, a right that goes
hand in hand with her right to express her point of view. In practice, that means that
the decision-making procedure becomes adversarial, which raises the question as to
who should decide about such a challenge of an automated decision. If, for example, a
data subject gave her explicit consent to automated assessment of her credit rating
and then contests such a decision, would this objection need to be dealt with by the
bank official handling the file, by another employee within this organisation or by an
independent body? The GDPR does not give any guidance in this regard, which leads

87 WP29, Guidelines on Automated individual decision-making, 27.


to a reasonable assumption that it is the controller itself that needs to take this deci-
sion. Who exactly is in charge of handling such a challenge should be left to the con-

troller itself, depending on its organisational and decision-making structure. In any
event, if, after the controller's assessment of the contested decision, the data subject
considers that her rights under the GDPR have been infringed, she remains free to lodge
a complaint with a supervisory authority88 or to institute judicial proceedings against
the controller.89
It is both curious and worrying that the Directive on Data Protection in Criminal
Matters does not give the data subject an express right to contest an automated deci-
sion, even though it leaves this possibility open to the Member States. Therefore, un-
less the Member State law implementing the Directive90 provides for such a right,
the data subject who wants to challenge the automated decision will need to address
herself directly to the supervisory authority or the court.91 In practice, the data sub-
ject might indeed have the possibility to contest such a decision when requesting
human intervention, but the competent authority will not have a distinct obligation
to address or reply to such a challenge. The legislative process reveals that the draft
Directive did not foresee any other obligatory safeguards than that of human
intervention, which was added in the first reading in the Council.92 The initial pro-
posal for the Directive contained an even more general formulation according to
which the Member States had to provide for ‘measures to safeguard the data subject’s
legitimate interests’,93 a phrase that remained unchanged in the first reading in the
European Parliament.94 The Member States, however, might depart from this narrow
formulation. The UK, for example, obliges the controller taking an automated deci-
sion for law enforcement purposes to first notify the data subject that the decision
has been taken exclusively by automated means, in reaction to which the data subject
has the right to ask the controller to ‘reconsider the decision’ or ‘take a new decision
that is not based solely on automated processing’.95

88 art 77 GDPR.
89 art 79 GDPR; art 82 GDPR provides for the right to compensation of damage incurred through infringe-
ment of the GDPR.
90 Or another law not specifically adopted for the implementation of the Directive.
91 arts 52 and 54 of the Directive.
92 art 11 of the Position of the Council at first reading with a view to the adoption of a Directive of the
European Parliament and of the Council on the protection of natural persons with regard to the process-
ing of personal data by competent authorities for the purposes of the prevention, investigation, detection
or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of
such data, and repealing Council Framework Decision 2008/977/JHA—Adopted by the Council on 8
April 2016, ST 5418 2016 REV 1 - 2012/010.
93 art 9(1) of the Proposal for a Directive of the European Parliament and of the Council on the protection
of individuals with regard to the processing of personal data by competent authorities for the purposes of
prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penal-
ties, and the free movement of such data, COM(2012) 10 final.
94 art 9(1) of the Report on the proposal for a directive of the European Parliament and of the Council on
the protection of individuals with regard to the processing of personal data by competent authorities for
the purposes of prevention, investigation, detection or prosecution of criminal offences or the execution
of criminal penalties, and the free movement of such data, COM(2012)0010 – C7-0024/2012 – 2012/
0010(COD).
95 art 50(2) of the UK Data Protection Act 2018.
The existence of the right to explanation?

There is a vigorous discussion in the academic literature as to whether a data subject
should be given a right to explanation of automated decision-making within the
GDPR framework. Goodman and Flaxman ignited the debate by inferring such a
right from the requirement to give the data subject 'meaningful information about
the logic involved' (Articles 13 and 14).96 This claim was echoed in the media,97 in
other online sources98 and blogs,99 and in the doctrine,100 as well as by the Article 29 Working
Party.101 Wachter et al. responded to this claim by arguing that the GDPR only
requires an ex ante explanation of how the system functions and not an ex post ex-
planation of the reasons behind the decision.102 Differently, Edwards and Veale ac-
cept the possibility of the right to explanation, but point out practical difficulties
regarding its exercise from the perspective of machine learning algorithms.103
Mendoza and Bygrave equally put forward arguments in favour of the right to ex-
planation, in particular the circumstance that the text of the GDPR provision on the
right to access does not ‘necessarily exclude the possibility’ of an ex post explanation
and that such a right could be ‘implicit in the right “to contest” a decision’ from
Article 22(3) GDPR.104 Selbst and Powles argue, in response in particular to
Wachter et al., that the right of explanation should be ‘derived from Articles 13-15’
GDPR.105 Casey, Farhangi and Vogl restate the debate among core academic writ-
ings on the right to explanation and claim that the GDPR introduces ‘an unambigu-

96 Bryce Goodman and Seth Flaxman (n 7).


97 See for example Bill Brenner, ‘GDPR’s Right to Explanation: the pros and the cons’, SophosNews, 22
May 2017, <https://news.sophos.com/en-us/2017/05/22/gdprs-right-to-explanation-the-pros-and-the-
cons/>; Dan Ming and William Turton, 'What is GDPR? How the EU's New Data Privacy Law gives
People more Power Online’ Vice News (25 May 2018) <https://news.vice.com/en_ca/article/8xexeg/
what-is-gdpr-how-the-eus-new-data-privacy-law-gives-people-more-power-online>; Andrew Burt and
others, ‘Can People Trust the Automated Decisions Made by Algorithms?’ InfoQ (21 June 2018)
<https://www.infoq.com/articles/Can-People-Trust-Algorithm-Decisions> documents accessed 6 July
2018.
98 See for example Wikipedia under notion ‘Right to explanation’ <https://en.wikipedia.org/wiki/Right_
to_explanation#cite_note-:2-8> accessed 6 July 2018.
99 See for example Andrew Burt, ‘Is there a “right to explanation” for Machine Learning in the GDPR?’
iapp: Privacy Tech <https://iapp.org/news/a/is-there-a-right-to-explanation-for-machine-learning-in-
the-gdpr/>; JM Porup, ‘What Does the GDPR and the “right to explanation” Mean for AI?’ <https://
www.csoonline.com/article/3254130/compliance/what-does-the-gdpr-and-the-right-to-explanation-
mean-for-ai.html>; Bryan Ware, ‘Is the “right to explanation” in Europe’s GDPR a Game-changer for
Security Analytics?’ <https://www.csoonline.com/article/3251727/regulation/is-the-gdpr-s-right-to-ex
planation-a-game-changer-for-security-analytics.html>; Eve Rajca, ‘Right to Explanation: a Right that
Never Was (in GDPR)’ <http://datawanderings.com/2018/03/01/right-to-explanation/>; documents
accessed 6 July 2018.
100 See the doctrinal debate a little further in this section.
101 WP29, Guidelines on Automated individual decision-making.
102 Wachter, Mittelstadt and Floridi (n 23) 76–99.
103 Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “right to an explanation” is Probably
not the Remedy you are Looking For’ (2017) 16 Duke Law & Technology Review 1, 44ff.
104 Mendoza and Bygrave (n 31) 16.
105 Andrew D. Selbst and Julia Powles, ‘Meaningful information and the right to explanation’ (2017) 7
International Data Privacy Law 4, 237.
ous “right to explanation”’.106 For reasons elaborated on below, this author considers
that the GDPR provisions should be interpreted in a way so as to provide the data

subject with a right to explanation. This, however, raises several questions: what
exactly needs to be revealed to the data subject? What exactly does this right entail
and how detailed does the explanation have to be?
It is to be noted that a right explicitly named a ‘right to explanation’ is not men-
tioned either in Article 22 or in Articles 13–15 GDPR on notification duties and ac-
cess. In case of automated decisions involving personal data of the data subject, the
GDPR obliges the controller to provide the data subject with ‘meaningful informa-
tion about the logic involved, as well as the significance and the envisaged conse-
quences of’ automated decision-making for the data subject, regardless of whether
personal data is collected from the data subject107 or from other source.108
Moreover, within the framework of the right to access, the GDPR provides for a
similar right of the data subject to receive not only information on the existence of
automated decision-making, but also ‘meaningful information about the logic
involved, as well as the significance and the envisaged consequences of such process-
ing for the data subject'.109 The only instance where the right to explanation is
explicitly mentioned in the GDPR is its Recital 71, according to which processing under
Article 22 ‘should be subject to suitable safeguards, which should include . . . the
right to obtain human intervention, to express his or her point of view, ‘to obtain an
explanation of the decision reached after such assessment’ and to challenge the
decision.’110
Given both the absence of a blackletter right to explanation and yet multiple pos-
sibilities to link it to the text and purpose of the abovementioned GDPR provisions,
the basic dilemma that the overview of the literature reveals is the question whether the
so-called 'right to explanation' would be a right that is read into another existing
GDPR right, such as the right to information or access, or whether such a ‘right to
explanation’ could potentially be created in addition to other existing rights from the
binding provisions of the GDPR. In the first case, the so-called 'right to explanation'
would be a product of interpretation of Articles 13–15, a claim supported by Selbst
and Powles,111 whereas in the second case, such a right would be the outcome of a
judicial creation, an option put forward by Wachter et al.112
We would like to offer an alternative account to these two approaches, that is, a
methodological approach that allows for the interpretation of several GDPR provisions
together. This alternative approach has two advantages compared to other interpretative
approaches. First, it takes into account not only the wording of the provisions,
but also the broader purpose of the

106 Bryan Casey, Ashkon Farhangi and Roland Vogl, ‘Rethinking Explainable Machines: The GDPR’s
“Right to Explanation” Debate and the Rise of Algorithmic Audits in Enterprise’ Berkeley Technology
Law Journal, forthcoming <https://ssrn.com/abstract=3143325> accessed 7 July 2018.
107 art 13(2)(f) GDPR.
108 art 14(2)(g) GDPR.
109 art 15(1)(h) GDPR.
110 Emphasis added.
111 Selbst and Powles (n 105) 235–37.
112 Wachter, Mittelstadt and Floridi (n 23) 91.
abovementioned provisions. Especially Wachter et al. limit themselves, in a very nar-


row fashion, only to the textual interpretation of the relevant provisions, leaving no

space for either purposeful or systematic interpretative methods regularly relied on
by the CJEU. Consideration of the text is just the first step in the interpretative
endeavours and it is precisely the latter two types of interpretation that bear the
most weight in the Court’s jurisprudence.113 The second advantage of this approach
is linked to the first: the joint interpretation of several provisions follows the method
used by the CJEU. Such methodological grouping of different data protection provi-
sions in order to interpret a certain right of the data subject is not unusual in the case
law of the Court. For example, in Google Spain, the Court relied on the combination
of the right of access and the right to object from Directive 95/46114 to create the
right to erasure (popularly described as ‘the right to be forgotten’).115 In the light of
that, we submit that the provisions of the GDPR, more precisely, Articles 13(2)(f),
14(2)(g) and 15(1)(h) GDPR, in combination with Article 22, read in the light of
Recital 71, should be interpreted in a way that they give the data subject the right to
be informed about the reasons for the automated decision having a legal or significant
effect on her.
It needs to be clarified that we do not believe it important—and consider it even
somewhat unsuitable—to designate this right specifically and exclusively as a ‘right
to explanation’. Selbst and Powles correctly point out that the name of this right ‘is
less important than the substance of the right itself’.116 In the academic literature,
the designation of the right is often used as an abstract notion and it is usually not
fleshed out what exactly is meant by an ‘explanation’. Such an explanation can have
very different scopes which might depend on the level of detail of the data provided,
the type of decision taken and its binding nature and the type of algorithm used to
reach a decision. In an ideal world where all algorithms are white boxes, the informa-
tion given to the data subject would ideally comprise: (a) information about the data
that served as the input for automated decision, (b) information about the list of fac-
tors that influenced the decision, (c) information on the relative importance of fac-
tors that influenced the decision, and (d) a reasonable explanation about why a
certain decision was taken.117 In reality and given technical obstacles for (b) and (c),
such a right to explanation would probably encompass ‘only’ information explaining
crucial reasons for decisions. In the framework of automated decision-making, it is
crucial that the data subject understands why a particular decision has been taken, re-
gardless of how the right to obtain such information is called. Therefore, rather than
designating this right as the ‘right to explanation’, it seems more appropriate to flesh

113 Case C-304/02 Commission v France ECLI:EU:C:2005:444, can be brought forward as an extreme ex-
ample where the purposeful interpretation led the Court to interpret the word ‘or’ as having a cumula-
tive sense (‘and’); see para 83 of the judgment.
114 Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protec-
tion of individuals with regard to the processing of personal data and on the free movement of such
data, OJ L 281, 23.11.1995, 31.
115 More precisely, the CJEU relied on art 12(b) and subparagraph (a) of the first paragraph of art 14 of
Directive 95/46; Case C-131/12 Google Spain and Google ECLI:EU:C:2014:317.
116 Selbst and Powles (n 105) 242.
117 Compare WP29, Guidelines on Automated individual decision-making, 27, in particular on the ‘weight’
of factors ‘on an aggregate level’.
out its content and call it the right of the data subject to be informed about the rea-
sons for the automated decision. In order for this right to be of relevance to the data

subject, it is not enough to ensure merely what Kroll et al. term ‘procedural regular-
ity’.118 Such procedural regularity ensures only that the decisions are based on the
same decision policy, that the policy was determined before knowing the inputs and
that the outcomes can be reproduced.119 It therefore addresses only aggregate pro-
cedural regularity of all cases, safeguarding that cases are decided upon the same
rules. However, the concept of procedural regularity does not answer the question of
why an algorithm reached a certain decision with a certain dataset as an input. The
right required by the GDPR is of a different kind: the data subject has to understand
the reasons behind the decision. Obviously, the interpretative approach and the out-
come suggested above cannot be sustained without a detailed analysis of the provi-
sions on which the suggested result relies.
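Before turning to that analysis, items (a) to (d) above can be made somewhat more concrete with a minimal Python sketch of a toy 'white box' scoring model; all names, weights and thresholds are invented, and the sketch says nothing about how feasible such an account is for complex, machine-learned models. It merely illustrates the kind of decision-level reasons at stake, as opposed to mere procedural regularity.

# Invented weights for a toy, fully transparent ('white box') scoring model.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "missed_payments": -0.3}
THRESHOLD = 0.0

def reasons_for_decision(inputs: dict) -> dict:
    contributions = {name: WEIGHTS[name] * inputs[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "rejected"
    # Rank the factors by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)
    main_factor, main_contribution = ranked[0]
    return {
        "input_data": inputs,                   # (a) data used as input
        "factors": list(WEIGHTS),               # (b) factors that influenced the decision
        "relative_importance": contributions,   # (c) contribution of each factor
        "decision": decision,
        "reason": f"{decision}: the decisive factor was '{main_factor}' "
                  f"(weighted contribution {main_contribution:+.2f})",   # (d)
    }

print(reasons_for_decision({"income": 1.2, "debt_ratio": 2.0, "missed_payments": 1}))

For opaque models, producing the equivalent of the 'reason' field is precisely where the technical difficulties discussed later in this article arise.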
First, the analysis of Articles 13(2)(f), 14(2)(g) and 15(1)(h) GDPR is crucial to
establish whether and when the data subject should be informed about such reasons.
As mentioned above, these three provisions require the controller to provide the data
subject with 'meaningful information about the logic involved, as well as the signifi-
cance and the envisaged consequences of such processing for the data subject’. It is
submitted that both textual as well as teleological (purpose-based) interpretation of
this phrase support the view that the data controller needs to inform the data subject
about the reasons for the decision taken. The logic involved in the automated deci-
sion is ‘meaningful’ only if the data subject can understand the factors and considera-
tions on which the decision was based. An abstract understanding of the system or
the functioning of an algorithm will not be of much use to the data subject, especially
if the decision rejects her request, for example, for a loan or a credit card.
With regard to the timing of providing this information, Wachter et al. point out
concerning Articles 13(2)(f) and 14(2)(g) that the notification in these cases ‘occurs
before a decision is made’.120 This argument seems to be based only on the text of
the introductory sentence (chapeau) of Article 13(2), which requires that the information
is provided to the data subject 'at the time when personal data are obtained'.
However, a similar phrase cannot be found in Article 14(2), which makes this argument
flawed for cases where the data is not obtained from the data subject.
Moreover, both Articles 13(2)(f) and 14(2)(g) refer to the ‘existence of automated
decision-making’121 which seems to imply that the decision-making is already taking
place. If the legislator had intended, especially in the case of Article 14(2)(g), to
unambiguously convey the message that the information has to be provided to the data subject
prior to taking an automated decision, it would have used the wording 'intended
automated decision-making’ or a similar phrase. The interpretation of at least this lat-
ter provision therefore does not preclude the possibility of providing the reasons for
the automated decision to the data subject. This interpretation is supported by the
opinion of the WP29 which points out that the information given to the data subject

118 Joshua A Kroll and others (n 18) 637, 638, 641, 656ff.
119 ibid 657.
120 Wachter, Mittelstadt and Floridi (n 23) 82, emphasis omitted.
121 Emphasis added.
should ‘be sufficiently comprehensive for [her] to understand the reasons for the
decision’.122

Equally important, Article 15 does not specify the timing when the information
has to be conveyed to the data subject. As Wachter et al. admit themselves, the ac-
cess to the data and the information need to be transmitted to the data subject upon
her request, which can be made at any time, even (and we could add, especially) after
the decision was taken.123 The fact that they reach this initial conclusion and then insist
on the claim that this provision gives the data subject only the right to ex ante
information124 makes their reasoning not only contradictory, but also inconsistent with the
entire system of data subjects' rights. Given its wording and purpose, Article
15(1)(h) has to be interpreted from the perspective of the possibility of the data subject
requesting access to data and information after the automated decision was taken.
Within this context, this provision would be devoid of purpose if the information
requested by the data subject did not contain the reasons for the decision, a view
fully confirmed by the opinion of the WP29.125 The fact that Article 15(1)(h) talks
about ‘envisaged’ consequences does not change this conclusion; in fact, it even rein-
forces it as the data controller, even after taking the decision, might not be fully
aware of the panoply of consequences that the automated decision will cause to the
data subject and can only give her information on the consequences that it can envis-
age at the moment of transmitting the information. Data controller might be aware
of the legal consequences for the data subject, such as a decision to impose tax, but
less so of other significant consequences. For example, an automated decision refus-
ing university admission might not have significant consequences for a candidate
who was admitted to other universities, but will certainly have such consequences for
a disabled candidate whose only choice is to study at that particular university which
is the only one in the vicinity offering facilities for disabled students. Since the con-
troller cannot be fully aware of the data subject's circumstances, it is only possible for it
to give the data subject information about 'envisaged' consequences.
Second, the information to the data subject on the reasons behind the decision
needs to enable her to express her point of view and to contest the automated deci-
sion, that is, to effectively exercise her safeguards from Article 22(3) GDPR.126 In
the absence of the data subject’s understanding of reasons, her right to contest the
decision taken by automated means would be entirely ineffective.127 If the data sub-
ject wants to substantively contest such a decision, she needs to obtain information
at least about the data that was used as an input for the automated decision and a
reasonable explanation of the grounds for the decision. The right to contest a decision is intim-
ately linked to the substance of the decision and it would be an empty shell if the

122 WP29, Guidelines on Automated individual decision-making, 25.


123 Wachter, Mittelstadt and Floridi (n 23) 83; further confirmed by Edwards and Veale (n 103) 52.
124 Wachter, Mittelstadt and Floridi, ibid 83, 84.
125 See WP29, Guidelines on Automated individual decision-making, 27, where the WP29 opines that, on
the basis of art 15(1)(h) GDPR, the controller needs to inform the data subject about ‘factors taken
into account for the decision-making process, and on their respective “weight” on an aggregate level’.
126 Selbst and Powles (n 105) 236.
127 See also Mendoza and Bygrave (n 31) 16.
data subject was faced merely with a final decision without any reasons relating to
why such a decision was taken.

Third, much of the debate around the absence of the ‘right to explanation’ is
focused on the argument that this right does not expressly appear in the text of the
GDPR and that Recital 71 GDPR containing that right is not legally binding.128
While the legislative procedure shows that the reference to the right to explanation
was indeed omitted from the text of Article 22,129 the circumstance that this right
still appears in the recital demonstrates that the legislator did not want to do away
with it entirely. Putting the right to explanation into the recital seems to be a com-
promise solution, born from a disagreement on whether this right should be
enshrined in the legislative part of the GDPR or not.130 Drafting legislation in such a
way also demonstrates that the legislator seemed to intentionally leave the final deci-
sion on the existence of this right to the CJEU, which is, as it has been repeatedly
demonstrated in the recent case law, rather purposeful and activist when interpreting
data protection legislation. Dismissing the possibility of the existence of the right of
data subject to obtain reasons for her decision altogether because recitals are not le-
gally binding is too formalistic, in particular in the light of the Court’s case law which
regularly uses recitals as an interpretative aid.131 A closer look into the case law of
this court reveals that, indeed, ‘the preamble to a European Union act has no binding
legal force’ and cannot be used to derogate from the actual provisions or ‘for inter-
preting those provisions in a manner clearly contrary to their wording’.132 It thus
stems from the case law that recitals cannot be used for a contra legem interpretation
of EU law provisions. However, relying on an approach of interpreting various
GDPR provisions together as suggested above and the use of Recital 71 to strength-
en the interpretation supporting the existence of the right of the data subject to obtain
reasons for the automated decision would not lead to a contra legem interpretation.
Rather, it would serve as a means to resolve ambiguity resulting from a joint reading
of several relevant GDPR provisions.133 This reasoning certainly does not imply that
the said recital would become legally binding, but rather that it would support and
strengthen the interpretation of other provisions.
Fourth, it is also important to note the subtleties of the wording of relevant provi-
sions. A careful reading of Article 22(3) reveals that the safeguards from this provi-
sion (human intervention, expression of point of view, contesting) are not
necessarily the only possible safeguards. This provision indeed requires that the data

128 Wachter, Mittelstadt and Floridi (n 23) 80.


129 ibid 81.
130 Compare Edwards and Veale (n 103) 33.
131 See, for example, Case C-283/16 M.S. ECLI:EU:C:2017:104, paras 34–35; Case C-436/16 Leventis and
Vafias ECLI:EU:C:2017:497, para 33; Case C-578/16 PPU C. K., H. F., A. S. ECLI:EU:C:2017:127,
para 43; Case C-111/17 PPU OL ECLI:EU:C:2017:436, para 40.
132 Case C-308/97 Manfredi ECLI:EU:C:1998:566, para 30; Case C-136/04 Deutsches Milch-Kontor
ECLI:EU:C:2005:716, para 32; Case C-134/08 Tyson Parketthandel ECLI:EU:C:2009:229, para 16;
Case C-7/11 Caronna ECLI:EU:C:2012:396, para 40; Case C-345/13 Karen Millen Fashions
ECLI:EU:C:2014:2013, para 31.
133 The role of recitals is to resolve ambiguity in legislative provisions; see Tadas Klimas and Jurate
Vaiciukaitè, ‘The Law of Recitals in European Community Legislation’ (2008) 15 ILSA Journal of
International & Comparative Law 26.
subject is guaranteed ‘at least’ those safeguards. Nothing in the wording of Article
22(3) would oppose adding an additional safeguard that stems from other provisions

of the GDPR; to the contrary, such an additional safeguard would ensure a high level
of protection of data subjects and thus serve the overall objective of the GDPR.134
Moreover, it has also been claimed that Articles 13(2)(f), 14(2)(g) and 15(1)(h)
GDPR do not refer directly to Article 22(3) GDPR on safeguards, but only make ref-
erence to Article 22(1) and (4).135 The gist of the argument is that the former provi-
sions cannot guarantee additional safeguards for the purposes of Article 22(3). This
argument, resting upon a rigid and exclusively text-based interpretation of the former
provisions, is flawed for two reasons. On the one hand, the wording of the former pro-
visions requires that the data subject is provided with meaningful information about
the logic involved in an automated decision 'at least' in the cases of Article 22(1) and (4).136
It seems that the wording was deliberately left open so as not to preclude the possibility of giving
the data subject further safeguards in cases other than those explicitly mentioned. On
the other hand, the teleological and systematic interpretations need to be taken into ac-
count due to the intimate link between paragraph 1 of Article 22 and paragraphs 2 and
3 of this provision. Given that paragraph 1 regulates only prohibited types of automated
decisions, informing the data subject about the ‘logic involved’ and ‘envisaged conse-
quences’ of automated decisions that are not allowed to be taken is somewhat illogic-
al.137 For that reason, paragraph 1 of this provision should be read in combination
with its paragraphs 2 and 3, which regulate automated decisions that are permitted
under the GDPR. Following this reasoning, the rights from Articles 13(2)(f), 14(2)(g)
and 15(1)(h) GDPR should be guaranteed also in the case of decisions taken on the basis
of Article 22(2) GDPR.
Fifth, a more general argument that is often sidelined in the discussions on the right
to explanation is the argument of the GDPR's quest for a high level of transparency,
which requires that the processing of personal data should be transparent to natural
persons whose personal data are ‘collected, used, consulted or otherwise processed’.138
Even though the right to explanation cannot be based exclusively on the transparency
requirement, all the other provisions on which the right to explanation can be based
should be interpreted in the light of high demands of transparency.139 The principle of

134 Guaranteeing this additional safeguard is of course independent from data controllers’ possibility to vol-
untarily offer additional safeguards, as advocated by Wachter, Mittelstadt and Floridi (n 23) 91.
135 ibid 82.
136 Edwards and Veale (n 103) 53.
137 It seems that arts 13(2)(f), 14(2)(g) and 15(1)(h) GDPR would apply to a decision that would not ful-
fil the criteria from art 22(1) and hence be allowed, for example a decision not having a legal or signifi-
cant effect on the data subject. If the provisions are read in that way, it seems even more unreasonable
not to guarantee these rights for automated decisions taken on the basis of art 22(2).
138 Recital 39 GDPR.
139 On the importance of transparency in the GDPR, see for example Sofia Olhede and Russell Rodrigues,
‘Fairness and Transparency in the Age of the Algorithm’ (2017) Significance 8–9; Merle Temme,
‘Algorithms and Transparency in View of the New General Data Protection Regulation’ (2017) 3
European Data Protection Law Review 4, 473; Antoni Roig, ‘Safeguards for the Right not to be Subject
to a Decision Based Solely on Automated Processing (Article 22 GDPR)’ (2017) 8 European Journal of
Law and Technology 3, 1–17; Emre Bayamlıoglu, ‘Transparency of Automated Decisions in the GDPR:
An Attempt for Systemisation' <https://ssrn.com/abstract=3097653> accessed 10 July 2018.
Kaminski points out that the ‘right to explanation is far from the only transparency right’ in the GDPR;
transparency of data processing, epitomised in Article 5(1)(a) and further regulated in


Article 12 GDPR, requires not only that the information to the data subject is ‘concise,

easily accessible and easy to understand’,140 but also that the data subject is informed
‘of the existence of the processing operation and its purposes’.141 Given the circum-
stance that the transparency within the GDPR relates to the particular individual and
not to society at large, it can be understood as 'individual transparency', as it, in
principle, gives the data subject the rights of access and of understanding the reasons behind
a decision in the case of automated processing. The EDPS correctly points out that it is not up
to individuals to seek the disclosure of such logic, but that organisations have to
proactively ensure such transparency.142
As the technology advances and the use of algorithms for decision-making is
exponentially growing, both the legal regulation and academic work call for more
transparent algorithmic decision-making, often described with the buzzword ‘algo-
rithmic transparency’.143 The basic quest of proponents of algorithmic transparency
is to reveal the logic behind the algorithm that adopts a certain decision. The individ-
ual transparency of simple automated decisions will not pose particular problems
regarding the explanation of the logic behind the decision. For example, if a camera
detecting the speed of the driver communicates to the public authorities that the
speed limit was exceeded, the issuing of a speeding ticket follows automatically. The
logic behind the decision as well as the rule on which the decision is based can be
easily explained to the data subject: a speeding ticket is issued if the speed limit is
exceeded. Differently, automated decision-making based on complex algorithms faces
numerous complications when it comes to the explanation of the reasons underlying
a decision. While some commentators consider that it is nearly impossible to ex-
plain an algorithm because even its developers cannot exactly pinpoint the reasons
why a particular decision was taken,144 others take a more optimistic approach145
see Margot E Kaminski, ‘The Right to Explanation, Explained’ (2018) <https://doi.org/10.31228/osf.
io/rgeus> accessed 10 July 2018, 6.
140 Recital 58 GDPR.
141 Recital 60 GDPR.
142 European Data Protection Supervisor, ‘Opinion 7/2015. Meeting the challenges of big data. A call for
transparency, user control, data protection by design and accountability’ <https://secure.edps.europa.
eu/EDPSWEB/webdav/site/mySite/shared/Documents/Consultation/Opinions/2015/15-11-19_Big_
Data_EN.pdf> accessed 15 November 2018.
143 See for example Anupam Datta, Shayak Sen and Yair Zick, ‘Algorithmic Transparency via Quantitative
Input Influence: Theory and Experiments with Learning Systems’ 2016 IEEE Symposium on Security
and Privacy, <https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7546525> accessed 10
December 2017; Robert Brauneis and Ellen P Goodman, ‘Algorithmic Transparency for the Smart City’
(2018) 20 Yale Journal of Law & Technology 103–76; Nicholas Diakopoulos and Michael Koliska,
‘Algorithmic Transparency in the News Media’ (2017) 5 Digital Journalism 7, 809–28 <https://doi.
org/10.1080/21670811.2016.1208053> accessed 10 July 2018; Zachary C Lipton, ‘The Mythos of
Model Interpretability’ (2017) <arXiv:1606.03490v3> accessed 10 July 2018; Mike Ananny and Kate
Crawford, ‘Seeing without Knowing: Limitations of the Transparency Ideal and its Application to
Algorithmic Accountability’ (2018) 20 New Media & Society 3, 974, 977.
144 See for example Cade Metz, ‘Artificial Intelligence is setting up the Internet for a huge clash with Europe’
<https://www.wired.com/2016/07/artificial-intelligence-setting-internet-huge-clash-europe/> accessed 10
September 2018.
145 Kroll and others (n 18) 54ff, systematically analyse what types of explanation might be possible and div-
ide the types of explanation into ‘model-centric’ and ‘subject-centric’ explanations; the former focuses
on the model used for the decision and the general logic it uses to take the decision, whereas the latter
and even propose technical solutions146 that would lead to a higher algorithmic
transparency. Algorithmic transparency is deemed to cover different transparency

degrees, from revealing the source code to an explanation of its functioning. For the
purposes of this article, we believe that the algorithmic transparency, legally speaking,
should encompass transparency of the process of algorithmic decision-making to the
extent that this is necessary to ensure the respect of other rights under the GDPR,
notably the provision to the data subject of meaningful information about the logic
involved, enshrined in Articles 13–15.147
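That contrast can be reduced to a short, purely illustrative Python sketch: for a simple rule-based decision such as the speeding ticket mentioned above, the rule itself is the explanation, whereas for a complex, learned model the reasons would have to be reconstructed separately. The speed limit and message format are, of course, invented for the example.

SPEED_LIMIT_KMH = 120

def speeding_decision(measured_speed_kmh: float) -> dict:
    # For such a simple rule-based decision, the rule itself is the explanation.
    ticket = measured_speed_kmh > SPEED_LIMIT_KMH
    return {
        "ticket_issued": ticket,
        "reason": f"measured speed of {measured_speed_kmh} km/h "
                  f"{'exceeds' if ticket else 'does not exceed'} the limit of {SPEED_LIMIT_KMH} km/h",
    }

print(speeding_decision(134.0))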
Sixth, understanding the reasons behind a decision is a necessary tool for the
prevention of discrimination in algorithmic decision-making. The decisions taken by
algorithms are sometimes discriminatory, even without the algorithm being pro-
grammed to discriminate. For example, the decision can be discriminatory because
the data on which the decision is based is discriminatory in itself. This can arise, in
particular, when sensitive data such as race or ethnic origin are involved in decision-
making. In this regard, Zarsky points out that ‘a skewed and biased data set may
cause outcomes of the algorithm process that discriminate against protected
groups’.148 The decision may be biased also if the algorithm is trained on biased
data.149 However, other authors claim that, in order to avoid algorithmic discrimin-
ation, it is necessary to use sensitive personal data in the process of building
decision-making models.150 In any event, when the algorithm takes the decision, sen-
sitive personal data such as race should not be required as ‘input variables’ relevant
for decision-making.151 Apart from discrimination due to biased entry datasets, an
algorithmic decision can also be discriminatory due to biased programming of the
algorithm. Understanding the reasons behind biased decisions would therefore be the
first step towards preventing such algorithmic discrimination.
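As a purely illustrative sketch of that last point, a controller could check at decision time that no special categories of data appear among the input variables; the attribute names below are invented, and such a check alone does not remove bias introduced through proxies or biased training data.

# Simplified, illustrative list of special categories of data (cf Article 9(1) GDPR).
PROTECTED_ATTRIBUTES = {"race", "ethnic_origin", "religion", "health", "sexual_orientation"}

def protected_inputs_used(input_variables: list) -> list:
    # Flags special categories used as input variables for the decision. An empty result
    # does not guarantee a non-discriminatory outcome: proxies (such as a postcode) and
    # biased training data can still skew the decision.
    return [name for name in input_variables if name in PROTECTED_ATTRIBUTES]

print(protected_inputs_used(["income", "postcode", "ethnic_origin"]))   # ['ethnic_origin']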
Seventh, even if a general right to receive reasons for a decision is not recognized,
this right should exist at least where an automated decision is based on sensitive
data. In principle, automated decisions should not be based on special categories of
personal data (Article 22(4)), except if the data subject gives explicit consent to
processing for specified purposes or if such processing is necessary to safeguard an
important public interest (Article 9(2)(a) and (g)). Article 22(4) remains rather
vague when it comes to safeguards, stating that ‘suitable measures to safeguard the
data subject’s rights and freedoms’ should be in place. However, as mentioned above,
all the GDPR provisions requiring that the data subject should be familiarized with
puts into the focus the subject itself, together with the question of 'why' a particular prediction was
made (55).
146 Datta, Sen and Zick (n 143).
147 More precisely, arts 13(2)(f), 14(2)(g) and 15(1)(h).
148 Tal Zarsky, ‘The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and
Fairness in Automated and Opaque Decision Making’ (2016) 41 Science, Technology, & Human
Values 126.
149 Bryce Goodman, ‘Discrimination, Data Sanitisation and Auditing in the European Union’s General Data
Protection Regulation’ (2016) 2 European Data Protection Law Review 4, 498.
150 Indrė Žliobaitė and Bart Custers, 'Using Sensitive Personal Data may be Necessary for Avoiding
Discrimination in Data-driven Decision Models' (2016) 24 Artificial Intelligence and Law 183–201.
151 ibid.

the logic involved (Articles 13(2)(f), 14(2)(g) and 15(1)(h)) expressly require that this should be guaranteed where an automated decision is based on sensitive data.



What needs to be taken into account is not only the textual interpretation of these provisions, but equally the purpose of ensuring a high level of data protection when it comes to sensitive data. If the inclusion of sensitive data leads to a biased decision, the data subject should be able to understand the reasons behind such a decision. Her rights would not be sufficiently safeguarded if she received only general information about the functioning of the system.
Finally, turning to the Directive on Data Protection in Criminal Matters, it is both
surprising and worrying that it does not contain any similar safeguards or rights that
would enable the data subject to understand the reasons behind automated decision-
making. The only safeguard that the Directive prescribes is the data subject’s right to
obtain human intervention; all other potential additional rights are left to the
Member States, given that Article 11(1) does not preclude such additional rights by
requiring ‘at least’ the right to human intervention. This minimalistic approach of the
Directive is troublesome given the impact of the automated decisions in criminal
matters on individuals. A data subject can, for example, be refused to board a plane
because she is, according to the outcome of algorithmic decision-making, linked to
terrorist activities. If the Member State law allowing for such decisions gives the data
subject only the possibility of requesting that a human reviews such a refusal, the
data subject will never understand why such a decision was taken.
As mentioned above, algorithmic decisions might be biased, and the lack of appropriate safeguards can reinforce such biases with the regular use of automated decisions.
True, Article 11(2) of the Directive does prohibit automated decisions on the basis
of sensitive data such as race or ethnic origin, but in a Big Data environment different
types of data are usually mixed and used at the same time. Decisions on the basis of
sensitive data are only allowed if ‘suitable measures to safeguard data subject’s rights
and freedoms’ are provided, and it can be argued that this standard of protection
applies not only when a decision is based exclusively on sensitive data, but whenever
sensitive data is mixed with other data in the process of automated decision-making.
Nevertheless, the absence of concrete safeguards, in particular the right of the data subject to contest the decision and the related right to be informed about the reasons for the decision, is problematic and might even violate the fundamental rights standard of Article 47 of the EU Charter of Fundamental Rights,152 which provides for a
fundamental right to an effective remedy. If the data subject does not understand the
reasons behind the decision (for example a decision to arrest her), she is also not in
a position to bring an effective remedy against such a decision. Even if the decision is
taken with human intervention, the human would still need to provide the data
subject with reasons, giving her an opportunity to effectively challenge the decision.

CONCLUSION AND FURTHER CHALLENGES
This article analyses the rules of the GDPR and the Directive on Data Protection in Criminal Matters regarding automated decision-making. It establishes that, while these rules clearly give the data subject the right not to be subjected to a fully

152 Charter of Fundamental Rights of the European Union, OJ C 326, 26.10.2012, 391.

automated decision, including profiling, the exceptions to this right hollow it out to
the extent that the exceptions themselves become the rule. Globally, and especially for



non-EU businesses covered by the scope of application of the GDPR, the most relevant exception will probably be the one allowing for automated decisions in the context of the conclusion of a contract, as many EU citizens conclude online contracts with businesses in the US and other third countries. The Member States or the Union itself might provide for further exceptions to allow for a broader use of automated decision-making.
The article further argues that the data subjects should have the right to familiar-
ize themselves with the reasons why a particular decision was taken. The semantic
discussion of whether this right should be designated as the ‘right to explanation’ is
deliberately left aside as the analysis focuses rather on the content of the right and
the question of which information needs to be provided to the data subject. Despite the voices raised against what is popularly called the ‘right to explanation’,153 it is submitted that a purposive interpretation of the GDPR provisions, coupled with the broader quest for a high level of transparency of data processing, shows that the data subject should have the right to know the reasons behind the decision, especially if she is to exercise effectively the other safeguards provided by the GDPR. Regrettably, the Directive on Data Protection in Criminal Matters does not provide for such a right, which calls into question the compatibility of its provision on automated decision-making with the EU Charter of Fundamental Rights.
We believe it appropriate also to mention briefly the further challenges that the exercise of the right to obtain reasons for the decision might face in practice. The exploration and surmounting of these challenges requires further in-depth research that goes beyond the scope of this article. Even though we consider that we have convincingly argued that the data subject should have the right to be informed about the reasons for an automated decision, regardless of whether we name it the ‘right to explanation’ or not, it remains unclear whether and how this right can be effectively exercised in practice. Three main obstacles stand in the way of providing the data subject with a meaningful explanation of the logic behind algorithmic decisions: technical obstacles, intellectual property obstacles, as well as state secrets and other confidential information of state authorities.154
Among those, the most difficult to surmount are the technical obstacles. The number of technical obstacles standing in the way of explaining algorithm-based automated decisions depends on the complexity of the algorithm. For example, the reasons for a
decision on the basis of a simple decision tree could perhaps still be explained.155
However, if the algorithm used for decision-making is a neural network, prone to

153 See, in particular, Wachter, Mittelstadt and Floridi (n 23) 76–99.


154 Burrell distinguishes between three types of opacity of algorithms: corporate or state secrecy; technical il-
literacy; and opacity arising from characteristics of machine learning; see Jenna Burrell, ‘How the
Machine “thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) Big Data & Society
1–12.
155 Decision trees, which are a form of reasoning or decision-support that use a graph similar to a tree to
reach a conclusion, were the main decision-making tools until the 1980s and are still used nowadays. Stuart
J Russell and Peter Norvig (eds), Artificial Intelligence. A Modern Approach (3rd edn, Pearson 2010) 638.

very fast machine learning,156 it will be close to impossible to explain the reasons be-
hind its decision.157 It seems that, in order to achieve transparency of an algorithm,



further technology will have to be developed to clarify which factors were taken into
account and what was their weight.158
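By way of illustration only, the following short Python sketch contrasts the two situations described above. It assumes the availability of the scikit-learn library and uses an entirely hypothetical credit dataset; it is a sketch of the general technique rather than a description of any system discussed in this article. The rules of a shallow decision tree can be printed and read by a human, and the relative weight of each factor can be approximated through feature importances, whereas no comparably direct rendering of reasons exists for a trained neural network.

# Minimal sketch under the stated assumptions; feature names and data are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["income", "debt_ratio", "years_employed"]   # hypothetical input variables
X = [[30000, 0.40, 2], [55000, 0.10, 8], [42000, 0.55, 1], [70000, 0.20, 12]]
y = [0, 1, 0, 1]                                              # 1 = application approved

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules can be rendered as readable if/else conditions, which is one
# possible way of clarifying which factors were taken into account.
print(export_text(tree, feature_names=feature_names))

# Feature importances give a rough indication of the weight attached to each factor.
print(dict(zip(feature_names, tree.feature_importances_)))

For a deep neural network, by contrast, the trained parameters do not translate into such readable rules, which is why additional explanation techniques would have to be layered on top of the model before comparable reasons could be given to the data subject.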
Further research is needed both in the field of AI and in the field of law in order to reach an optimal solution for the understanding of automated decisions. While researchers in AI need to search for technical solutions that make such decisions easier to explain, legal researchers should seek the right balance between the different interests involved in automated decision-making. Finally, data controllers should be aware of their obligations under EU data protection legislation and of their accountability in case of failure to provide data subjects with the information required by this legislation.

156 Masnick claims that the faster the machine learns, the more difficult it is to understand the reasons be-
hind its decisions; Mike Masnick, ‘Activists Cheer On EU’s “Right To An Explanation” For Algorithmic
Decisions, But How Will It Work When There’s Nothing To Explain?’ <https://www.techdirt.com/
articles/20160708/11040034922/activists-cheer-eus-right-to-explanation-algorithmic-decisions-how-will
-it-work-when-theres-nothing-to-explain.shtml> accessed 10 January 2018.
157 Metz points out that ‘[d]eep neural nets depend on vast amounts of data, and they generate complex
algorithms that can be opaque even to those who put these systems in place.’ See Cade Metz (n 142).
Compare also Bryce Goodman and Seth Flaxman (n 7).
158 Compare ‘Artificial Intelligence, Robotics, Privacy and Data Protection’ Room document for the 38th
International Conference of Data Protection and Privacy Commissioners, October 2016.
