2021 - Law, Technology and Innovation v. II
experteditora.com.br
[email protected]
Dr. Eduardo Goulart Pimenta
Adjunct Professor at UFMG and PUC/MG
COORDINATOR’S NOTE............................................................. 17
Introduction.............................................................................. 28
Logic, Informatics, Artificial Intelligence And Technology In Law:
History And Challenges............................................................. 28
1. Modern A.I. and Neural Networks........................................... 28
2. The Legal Reasoning.............................................................. 31
3.1. The Logic Of Legal Reasoning.............................................. 32
3.2. The Legal Decision And Its Particular Problems (Judicial
Decision And Legislative Decision)............................................ 34
3.3. The Argumentation............................................................. 36
4. Innovative Technologies Regarding A.I. And Legal Reasoning... 39
5. New Projects: Research And Practice...................................... 43
6. Some Comments About The Fear That Artificial Intelligence,
Overcoming The Natural One, Dominates Our Actions................. 45
7. In Conclusion........................................................................ 47
Eduardo Magrani
New Perspectives On Ethics And The Laws Of Artificial Intelligence
Introduction.............................................................................. 49
1. Technology Is Not Neutral: Agency And Morality Things .......... 50
2. Technical Artifacts And Sociotechnical Systems: Entangled In
Intra-relation............................................................................ 52
3. Hello World: Creating Unpredictable Machines........................ 53
4. Application Of Norms: Mapping Legal Possibilities .................. 57
5. How To Deal With Autonomous Robots: Insufficient Norms And
The Problem Of ‘Distributed Irresponsibility’.............................. 60
6. ‘With a Little Help From My Friends’: Designing Ethical
Frameworks To Guide The Laws Of A.I........................................ 62
7. Robot Rights: Autonomy And E-personhood............................ 65
8. Governing Intra-action With Human Rights And Design........... 72
Conclusion................................................................................ 75
References................................................................................ 76
Introduction................................................................................80
2. Statistical Learning Theory.......................................................82
3. Discussion...............................................................................83
4. Conclusion..............................................................................86
References..................................................................................87
Law.......................................................................................... 89
Introduction.............................................................................. 90
2. General Considerations About AI And The Law........................ 90
3. Bayern’s Proposition On Using LLC To De Facto Personalizing AI.93
4. From The Creation Of Brazilian LLC To The Civil Code Of 2002: A
Contractual View....................................................... 98
5. EIRELI And Single-member Companies: A Path To An Institutionalist
View......................................................................... 102
6. A Dialogue With Bayern’s Proposition...................................... 107
Conclusion................................................................................ 112
References................................................................................ 114
Introduction.............................................................................. 217
1. Collaborative Commercial Contracts And Their Natural
Incompleteness......................................................................... 218
2. AI’s Impacts On Contractual Incompleteness Causes................ 228
2.1. Possible Reduction Of Transaction Costs............................... 228
2.2. The Decrease In Information Asymmetry............................. 233
3. Impacts On The Solutions To Problems Generated By
Incompleteness: Contractual Revision........................................ 237
Conclusion................................................................................ 241
References................................................................................ 242
Introduction............................................................................. 247
1. Technology And Innovation In The Financial System............... 248
1.1. First Era: From Analog To (traditional) Digital (1866-1967)...... 251
1.2. Second Era: Development Of Traditional Digital Services (1967-
2008)......................................................................................... 252
1.3. Third Era: Modernization/democratization Of Digital Services
(2008-present)........................................................................... 253
2. Credit Fintechs In Brazil: Business Models And Technology...... 254
2.1. The Importance Of Data For Credit Fintechs (and Their Artificial
Intelligence Algorithms) To Compete With Incumbent Banks....... 257
3. The Role Of Regulation In Stimulating Competition Through
Fintech..................................................................................... 261
3.1. Addressing The Preliminary Issue: Does Innovative Competition
Harm Financial Stability?........................................................... 265
3.2. Why The Central Bank Of Brazil Should (continue To) Promote
Competition.............................................................................. 268
4. How The Central Bank Of Brazil Should (continue To) Promote
Competition.............................................................................. 272
4.1. Fostering Competition Through Financial Data Sharing Between
Lenders: Fuel For The Artificial Intelligence Behind Credit Fintechs
................................................................................................ 277
Conclusions.............................................................................. 283
References................................................................................ 284
Wallace Almeida de Freitas
Artificial Intelligence And Solvable Civil Responsibility
Introduction.............................................................................. 291
Artificial Intelligence And Machine Learning: Brief Consideration
................................................................................................ 293
Unpredictability and Emerging Behavior.................................... 296
Civil Responsibility.................................................................... 299
General Aspects And Classification And Assumptions.................. 299
Liability And Artificial Intelligence: Damages In Machine Learning
................................................................................................ 302
Responsible Civil Resolution...................................................... 304
Conclusion................................................................................ 307
References................................................................................ 308
Introduction.............................................................................. 310
1. Artificial Intelligence In Medicine........................................... 312
2. Legal Issues Related To The Use Of Artificial Intelligence In
Medicine................................................................................... 316
2.1. Diagnostic Imaging With Artificial Intelligence..................... 316
2.2. Privacy Violations................................................................ 319
Conclusion................................................................................ 325
References................................................................................ 326
Rômulo Soares Valentini
Electronic Lawsuit Systems And Machine-made Judgments:
Developing Standards For a “Legal Turing Test”
Introduction.............................................................................. 328
1. Machine-made Judgments And Electronic Lawsuit System....... 329
1.1. Why Develop Machine-made Judgment Algorithms?.............. 329
1.2. Key Aspects Of Developing Machine-made Judgment Algorithms
................................................................................................ 333
1.3. The Electronic Lawsuit System PJe And The Brazilian Experience
................................................................................................ 336
2. The “Legal Turing Test”........................................... 339
Conclusions.............................................................................. 339
References................................................................................ 339
COORDINATOR’S NOTE
3 The real is the Brazilian fiat currency; 1 million reais corresponds to around 200 thousand
dollars.
public health issue aggravated by unemployment4 and economic
crisis5.
Since December 2020, when COVID-19 vaccines began to be
distributed around the world on a large scale, the new expectation is
that in 2021 the discussions about AI regulation in Brazil will resume,
hopefully evolving to a mature stage in which the first national law on
the subject is finally approved by Congress, or at least other regulatory
initiatives become operational. It is in this context of hope and
expectation that we present this book to the public.
In the following pages, the reader will find 13 texts about
many aspects of AI technology, not only in the legal field but also from
the perspective of other areas, such as ethics, philosophy, computer
sciences, medicine, civil law, business law, privacy and personal data
protection.
Antônio Martino opens the book with a historical introduction
about logic, the development of informatics and AI regulation. It is
followed by an article by Eduardo Magrani about the importance of
ethics in AI. Right after, Lourenço Araújo and Yuri Santos study the
concept of error in machine learning, from a computer science perspective.
Then comes the text by Henry Colombi and Natália Chaves, discussing
the pros and cons of attributing legal personhood to AI-based systems.
Next, we have two studies relating AI to privacy and personal data
protection, by Manuel Masseno and Bernardo Grossi. The former
focuses on the EU context, while the latter focuses on Brazil. Then comes
the article by David Hosni and Pedro Martins, dealing with the
Brazilian General Data Protection Law provisions about automated
decision-making, and Júlia Ribeiro’s study about possible impacts of EU
4 According to official data, the unemployment rate in Brazil rose 27.6% from May
to August 2020: (AGÊNCIA BRASIL, 2020).
5 The International Monetary Fund’s worst prediction was that the Brazilian
gross domestic product could shrink by up to 10% in 2020, the worst scenario in this
century: (GERBELLI, 2020).
regulation of AI on technological innovation. The two following articles
deal with civil and business law aspects, by Guilherme Martins,
Thomaz Penna, Alexandre Alves and Lucas Silva, addressing the use of
AI in commercial collaboration contracts and credit markets. The next
paper is an interesting, bold and, as far as I know, innovative analysis
of “solvable civil responsibility”, written by Wallace Freitas. Approaching
the end, Eduardo Tomasevicius writes about AI-based systems in
medicine, balancing their pros and cons, as well as the risks involved.
Finally, Rômulo Valentini draws a critical analysis of the Brazilian
electronic lawsuit systems and how they could lead to “machine-made
judgments”.
The history of AI development has been far from a straight
upward line. On the contrary, right after the first scholarly inquiries
about this subject, in 19436, and the first scientific use of the expression
“artificial intelligence”, in 19567, what followed was a period of rather
modest developments, which is why the following decades
have been called the “AI winter”8, in allusion to the fact that in some
countries the vegetation does not grow or grows too slowly during the
winter. Only in the first decades of the 21st century did the scenario
change, with an impressive and fast development of AI technologies9.
Although for different reasons, the COVID-19 pandemic
also slowed down AI development in some areas, especially outside
medical care, since government and industry priorities around the world
radically shifted to fighting the pandemic and its devastating social and
6 RUSSELL, Stuart J.; NORVIG, Peter. Artificial Intelligence: A Modern Approach. 3rd
ed. New Jersey: Prentice-Hall, 2010. p. 16.
7 KAPLAN, Jerry. Artificial Intelligence: What everyone needs to know. Oxford: Oxford
University Press, 2016. p. 13.
8 RUSSELL, Stuart J.; NORVIG, Peter. Op. cit. p. 24.
9 MAYER-SCHÖNBERGER, Viktor; CUKIER, Kenneth. Big Data. 2. ed. Boston/New
York: Eamon Dolan/Houghton Mifflin Harcourt, 2014. p. 09; CALO, Ryan. Artificial
Intelligence Policy: A Primer and Roadmap. University of Washington Research Paper. p.
01-28. August 2017. p. 04.
economic consequences. However, just as happened after the
AI winter, it is expected (and hoped) that in the upcoming years the
world will rapidly catch up with the pre-pandemic level of AI research
and investment, experiencing unprecedented development in
this area, as predicted by the US Government in a press conference
about the 2016 White House report called “Preparing for the Future of
Artificial Intelligence”, when it was said that in the near future AI could
“let a thousand flowers bloom”10.
Leonardo Parentoni
10 UNITED STATES OF AMERICA. Report from the National Science and Technology
Council Committee on Technology for the Executive Office of the President. Preparing
for the Future of Artificial Intelligence, October 12th, 2016.
Institutional Support
Leonardo Parentoni
Ph.D. in Corporate Law at USP/Brazil. Master of Laws at
UFMG/Brazil. LL.M. in Civil Procedural Law at UnB/Brazil. Member of
the Attorney General’s Office (Advocacia-Geral da União/AGU). Researcher
and Tenured Law Professor at UFMG and Full Professor at The
Brazilian Institute for Capital Markets – IBMEC/MG. Founder and
Coordinator of the research area on Law, Technology and Innovation
at UFMG Law School Ph.D. Program. Founder and Scientific Advisor
to the Research Center on Law, Technology and Innovation – DTIBR
(www.dtibr.com). Former Research Fellow at the University of Texas
at Austin and Visiting Professor at the Uruguay Data Protection
Authority. Key Technology Partner (KTP Program) at the University
of Technology Sydney. Team Mentor in Law Without Walls - LWOW.
Areas of Specialty: 1) Law, Technology and Innovation; 2) Corporate
Law; 3) Empirical Legal Studies – ELS.
Eduardo Magrani
Ph.D. and Vice President of the National Institute for Data
Protection in Brazil. Global Senior Fellow at the International
Cooperation Program of Konrad Adenauer Foundation (EIZ-Fellowship
für nachhaltige Entwicklung und internationale Zusammenarbeit
von Konrad-Adenauer-Stiftung/KAS). Associate Researcher at Latam
Digital Center in Mexico. Coordinator of the Institute for Internet and
Society of Rio de Janeiro (2017-2019). Professor of Law and Technology
and Intellectual Property at FGV Law School, IBMEC and PUC-Rio.
Senior Fellow at the Alexander von Humboldt Institute for Internet
and Society in Berlin (2017). Researcher and Project Leader at FGV in
the Center for Technology & Society (2010-2017). Author of the books
“The Internet of Things” (2018), “Among Data and Robots: Ethics and
Privacy in the Age of Artificial Intelligence” (2019), “Digital Rights: Latin
America and the Caribbean” (2017), “Present Horizon: Technology
and Society in Debate” (2018) and “Connected Democracy” (2014).
Associated Researcher at the Law Schools Global League and Member
of the Global Network of Internet & Society Research Centers. Ph.D. and
Master of Philosophy (M.Phil.) in Constitutional Law at PUC-Rio with
a thesis on Internet of Things and Artificial Intelligence through the
lenses of Privacy Protection and Ethics. Bachelor of Laws at PUC-Rio,
with academic exchange at the University of Coimbra and Université
Stendhal-Grenoble 3. Lawyer/Partner and Project Lead Researcher,
acting actively in the fields of Internet regulation, Digital Rights, Corporate
Law and Intellectual Property for more than ten years, Magrani was
also one of the developers of Brazil’s first comprehensive Internet
legislation: Brazil’s Internet Bill of Rights (“Marco Civil da Internet”).
Eduardo Tomasevicius Filho
Ph.D. in Civil Law at the University of São Paulo - USP, Brazil.
Associate Professor at the Department of Civil Law, USP, Brazil.
Lawyer.
Henry Colombi
LLM Student in Private Law at UFMG Law School. Lawyer.
Several lines of thought have shaped the use of artificial
intelligence in law; one of the most interesting was born with
Georg H. von Wright and passes through the Institute of Legal
Documentation of the Italian CNR. Four years have passed and many
things have changed and brought new challenges. Data and news are
already used electronically both in the legal profession and in the
judicial system, and there are very advanced programs applying
new technologies to law. Some doomsayers predict the dominion of
machines over men. They forget something constant that cannot
change: man can weigh and balance considerations; the machine cannot.
13 Quantum physics has seen great theoretical development but little practical appli-
cation, because quantum systems are unstable due to the heat they produce. I was lucky
enough to talk to Carlos Bunge, Mario’s son and a professor of quantum physics at
UNAM. If there are errors, they are due to my defective understanding. An article in
the magazine Nature, one of the most prestigious scientific publications, gave an ac-
count of what could be a milestone in the field of quantum computing. The data said
that Google’s quantum processor, called Sycamore, completed a calculation operation
in 200 seconds, a run that would take the world’s fastest conventional computer about
10,000 years to complete. The result achieved with the experiment allowed the com-
puter giant’s researchers to claim quantum supremacy, a concept coined by Ameri-
can physicist John Preskill in 2012. The notion holds that quantum supremacy will be
achieved when a quantum system performs a computational task that exceeds those
that can be performed by a classical computer. Curiously, what is now in danger is
blockchain technology, because it rested on the assurance that an enormous amount
of time would be needed to recalculate the steps that give certainty to its records. With
the quantum computer, goodbye certainty.
Heraclitus (around 500 BC) saw everything as the product of a
process from which the progress of history emerges: no situation
can continue indefinitely, and every situation contains elements that
conflict.
Change is continuous. The process of change is the
dialectic. In Marx, dialectics designates both the particular process
by which society develops throughout its own history and the way one
must think in order to adequately grasp that process.
Rhetoric is any orderly communicative process that has
persuasion as its goal. Rhetoric is the capacity to defend one’s own
opinion through public discourse, thus trying to influence the way of
thinking and acting of others, provoking an induced reflection in those
who listen to us, and thus building in the head of others the edifice we
want to carry out, so that they reach, in short, the conclusions we have
previously foreseen.
Rhetorical discourse must be elaborated following the best
rules of grammar, it can be on any subject, it must have a persuasive
character and its construction must be suasive. The close relationship
that rhetoric has with the law has been present since its birth and is
found in its very origins. It should never be forgotten that
ancient rhetoric was born of practical needs, especially those relating to
the resolution of conflicts closely connected with law and politics.
In other words, legal reasoning generally tends to convince,
that is to say, it makes use of rhetoric, which is easily forgotten when
artificial intelligence is applied to law.
A lot has happened in these ten years precisely because many
more people are engaged in research and experimentation in this field
and, as is fashionable, there is much more money available. Prepared
people and money are what can turn a subject into something of
enormous expectation. In this case, we must add that there is already
enough experience to be able to tackle new and important issues, both
from a theoretical and practical point of view.
In the theoretical part, regarding systems that aim to use A.I.
in legal reasoning, there is a lot on the table, but I want to point out the
essential fields and notable works in those different areas:14
a) Criminal liability for acts committed by artificial intelligence
systems: the context for the analysis of the common elements and
the differences between the criminal activities of human beings and
artificial systems. It is being handled by Giovanni Sartor, one of the
best researchers in the field.
b) Electronic processes: Justice will be radically different all
over the world. Electronic processes are already beginning to be used in
different judicial organizations around the world. It is enough that the
notifications become electronic, as well as the complaint and the answer,
with all the evidence offered in a single filmed hearing, which may last
several days if the trial is very big and there is a lot of evidence, followed
by the pleadings and the sentence. It also helps a lot that other parts of the
public administration, and even the relations between private people, are
already electronic; that is why we look very carefully at Estonia, which has
not only started to use this kind of system15 but has also promised an
“automatic judge”.
c) General legal reasoning software: notably, IBM Watson,
which uses natural language and learns, and which, according to the
American Bar Association, has already replaced the interns in the
large law firms16. Also, ROSS Intelligence can listen to human language,
14 J.E.E.P. stands for “just enough essential pieces”. If we could use that when we
write, talk or do things in life, we might be as famous as the vehicle that revolutionized
military transport in World War II and is still being sold.
15 In Italy, where the system has been in place for some years, cases that used to last
10 years now barely exceed 10 months. It is good to remember that we are human
beings and that no instrument (and the A.I. is an instrument) is going to solve all our
problems. The use of electronic systems in Justice creates other difficulties that must
also be faced.
16 To the expert systems programs that we developed in past years, extremely new
products have been added. Watson is an artificial intelligence computer system that is
capable of answering questions formulated in natural language. Watson uses all the
innovations in data analysis and management, either by connecting to databases
or encyclopedias stored on hard disks, or to the Internet, with the almost unlimited
sources that this implies. Watson’s function is, precisely for this reason, to access, se-
track over 10 thousand pages per second and formulate a response
much faster than any human lawyer.
Do not forget that technologies change very fast. In his book
“Six memos for the next millennium”, Italo Calvino develops the six
characteristics of this century: lightness, speed, accuracy, visibility,
multiplicity, concreteness17. Without forgetting the others, notice
today the speed of all change.
lect and process the most appropriate information for what the situation or interaction
requires. IBM has put on the market Watson, which is an A.I. system that works with
natural language and can learn. We tried to use it for SRL but it was a failure, until we
met Alain Colmerauer, one of the creators, at a congress. We told him of our discontent
and, laughing, he told us that it did not have a deduction rule, but those who market-
ed the product put it that way and it could not be changed now. As it has the capacity
to learn, it is possible to prepare Watson for any profession, for example as a lawyer
or a judge, and in fact several American firms have - according to data from the Bar
Association - already done so and have “electronic lawyers”, nothing brilliant but ones
who work 24 hours a day and don’t look at their cell phones.
17 Calvino, Italo, Six Memos for the Next Millennium, Harvard University Press,
Cambridge MA, 1988.
For example, Deep Blue’s exploits in beating Kasparov at chess
are well known, but now a new system, AlphaZero, has appeared, which
did not learn from any chess player but dedicated itself to learning the
rules and ways of playing on its own. The results show that a general-
purpose reinforcement-learning algorithm can learn from scratch and
achieve superhuman performance in several highly complex games.
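To make the idea of learning “from scratch” slightly more concrete, the sketch below shows self-play reinforcement learning on a toy subtraction game, written in plain Python. It is emphatically not AlphaZero’s method (which couples deep neural networks with tree search); the game, the tabular value estimates and the update rule are simplified stand-ins, chosen only to illustrate the principle of an agent improving by playing against itself, without any human examples.

import random
from collections import defaultdict

# Toy game: 21 sticks on the table, each player removes 1-3 sticks, and
# whoever takes the last stick wins.
Q = defaultdict(float)      # value estimate for each (sticks_left, move) pair
ACTIONS = (1, 2, 3)
EPSILON, ALPHA = 0.1, 0.5   # exploration rate and learning rate

def choose(sticks):
    # Epsilon-greedy move selection from the shared value table.
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(sticks, a)])

def self_play_episode(n_sticks=21):
    # Both "players" share and improve the same value table.
    history, sticks = [], n_sticks
    while sticks > 0:
        action = choose(sticks)
        history.append((sticks, action))
        sticks -= action
    reward = 1.0                        # +1 for the winner's moves, -1 for the loser's
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward                # switch perspective at every ply

for _ in range(20000):
    self_play_episode()

# After training, the greedy policy tends to leave the opponent a multiple of
# four sticks, which is the known optimal strategy for this toy game.
print({s: max([a for a in ACTIONS if a <= s], key=lambda a: Q[(s, a)]) for s in range(1, 9)})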
Some sectors have developed more than others; in this field,
and from a commercial point of view, those with the greatest uptake are
insurance and banking. Philips, the Dutch health colossus, is the first
example of a global company able to create a new A.I.-based ecosystem
business. Aon Benfield has taken these programs to the next level, and
its success derives from the ability to give meaning to complex issues.
Aon Benfield, the reinsurance company, has developed an
AI platform in England that takes advantage of cloud technology to
manage one of the most complex pension products with integrated
financial guarantees: variable annuities.
But at the center of the field of interest of A.I. applied to legal
reasoning is IBM Project Debater, the first artificial intelligence system
that can debate complex issues.
Great public debates have fired our imagination since the
days of ancient Greece. This intellectual tradition came to life at
the IBM Think conference in San Francisco, when IBM Research
and Intelligence Squared U.S. held a live public debate on Monday,
February 11, between a human being and an AI.
Project Debater is the first artificial intelligence system that
can discuss complex issues with humans, using a knowledge base
that consists of about 10 billion sentences, taken from newspapers
and magazines. Project Debater digests massive texts, constructs a
well-structured discourse on a given topic, delivers it with clarity
and purpose, and refutes its opponent. Eventually, Project Debater
will help people to reason by providing convincing, evidence-based
arguments and limiting the influence of emotion, bias or ambiguity.
To do this effectively, the system must gather relevant facts
and opinions, form them into structured arguments, and then use
precise language clearly and persuasively.
In development since 2012, Project Debater is IBM’s next
major AI milestone, following on from previous advances such as
Deep Blue (1996/1997) and Watson on Jeopardy!
Harish Natarajan, Project Debater’s opponent in Think 2019,
is a 2016 World Debating Championship Grand Finalist and 2012
European Debating Champion. Harish was declared the winner of a
debate on “We must subsidize preschool”. Both sides delivered a four-
minute opening statement, a four-minute rebuttal and a two-minute
summary.
The winner of the event was determined by each debater’s
ability to convince the audience of the persuasiveness of their arguments.
The results were tabulated through a real-time online survey. Before
the debate, 79 percent of the audience agreed that preschools should
be subsidized, while 13 percent disagreed (8 percent were undecided).
After the debate, 62 percent of the survey participants agreed that
preschools should be subsidized, while 30 percent disagreed, meaning
that Natarajan was declared the winner. Interestingly, 58 percent
said Project Debater enriched their knowledge of the subject matter,
compared to 20 percent for Harish.
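For clarity, the arithmetic behind that verdict can be written out explicitly. In the small Python sketch below, the numbers are the percentages reported above; the assignment of sides (Project Debater arguing in favour of subsidies, Natarajan arguing against) is inferred from the direction in which the audience moved.

# Pre- and post-debate audience polls, in percent, as reported in the text.
pre = {"agree": 79, "disagree": 13, "undecided": 8}
post = {"agree": 62, "disagree": 30, "undecided": 8}

swing_for_motion = post["agree"] - pre["agree"]            # Project Debater's side: -17 points
swing_against_motion = post["disagree"] - pre["disagree"]  # Natarajan's side: +17 points

winner = "Natarajan" if swing_against_motion > swing_for_motion else "Project Debater"
print(swing_for_motion, swing_against_motion, winner)      # -17 17 Natarajan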
In a live debate, Project Debater discusses a topic on which it
has never been trained, receiving only a very short sentence describing
the motion. The first step is to construct a keynote speech to defend or
oppose this motion. Project Debater looks for short pieces of text in
massive corpora that can serve this purpose. This requires a deep
understanding of human language and its infinite nuances, and very
precise position identification, something that is not always easy for
humans and is certainly very hard for computers.
This process can result in a few hundred relevant text
segments. To debate effectively, the system needs to build the strongest
and most diverse arguments to support its case. Project Debater
does this by eliminating redundant argumentative text, selecting the
strongest remaining claims and evidence, and organizing them by
topic, creating the basis of the narrative to support or challenge the
motion.
It also uses a knowledge graph that allows it to find arguments
to support the general human dilemmas that arise in the subject
matter of the debate, for example, when it is right for the government
to coerce its citizens by infringing on their freedom of choice.
Project Debater brings together all the selected arguments to
create a persuasive discourse that lasts approximately four minutes.
This process only takes a few minutes. Then it is ready to deliver
its keynote address.
The next step is to listen to its opponent’s response, digest
it and build its rebuttal. Generating a good rebuttal is the most
challenging part of the debate for both humans and machines. Project
Debater applies many techniques, including those for anticipating and
identifying the opponent’s arguments. It then aims to respond with
claims and evidence that counter these arguments.
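The stages described above - harvesting short text segments, filtering them by stance, discarding near-duplicates, grouping the survivors by theme and assembling them into a speech - can be caricatured in a few lines of Python. The sketch below is in no way IBM’s implementation: the tiny corpus, the keyword-based stance filter and the string-similarity deduplication are invented placeholders for the far more sophisticated natural language components the text describes.

from difflib import SequenceMatcher

MOTION = "We should subsidize preschool"

# Hypothetical stand-in for the "massive corpora": a handful of (sentence, theme) pairs.
CORPUS = [
    ("Early education improves long-term social outcomes.", "education"),
    ("Preschool access narrows achievement gaps between income groups.", "equality"),
    ("Preschool access narrows achievement gaps between income levels.", "equality"),
    ("Subsidies raise taxes without guaranteeing quality.", "economy"),
    ("Public preschool frees parents to rejoin the workforce.", "economy"),
]

def supports_motion(sentence):
    # Crude stance filter: discard sentences containing obvious counter-cues.
    return not any(cue in sentence.lower() for cue in ("without", "raise taxes"))

def is_redundant(sentence, kept):
    # Drop near-duplicates of claims that were already selected.
    return any(SequenceMatcher(None, sentence, other).ratio() > 0.8 for other in kept)

def build_speech(motion, corpus):
    selected, by_theme = [], {}
    for sentence, theme in corpus:
        if supports_motion(sentence) and not is_redundant(sentence, selected):
            selected.append(sentence)
            by_theme.setdefault(theme, []).append(sentence)
    paragraphs = ["On " + theme + ": " + " ".join(claims) for theme, claims in by_theme.items()]
    return "Motion: " + motion + ".\n" + "\n".join(paragraphs)

print(build_speech(MOTION, CORPUS))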
While the format and challenge of the debate have allowed us
to shape Project Debater’s capabilities, a future for the technology beyond
the podium is envisioned. It could be used, for example, to promote
more civil debates in online commentary forums or by a lawyer
preparing for a trial where he could review legal precedents and test
the strengths and weaknesses of a case using a mock legal debate. In the
financial services industry, Debater could identify financial facts that
support or undermine an investment strategy. Or it could be applied
as a voice interaction layer to various complex customer experiences,
or even to improve the critical thinking and critical writing skills of
young people.
Debate is all about language. Mastering human language is
one of the most ambitious goals of AI. Project Debater takes us one step
closer on this journey. In the grand scheme, it reflects IBM Research’s
mission to develop a broad AI that learns across different disciplines
to augment human intelligence and judgment. It absorbs large sets of information
and perspectives in pursuit of a simple goal: to help us make better
and more informed decisions.
There are many and very important ones but, for reasons of
space, I will only deal with two: MIREL, because it is the most ambitious
international project financed by the European Union, and Promethea,
because it is an Argentinean production, more modest, but destined
for massive use.
Not only are there products in operation, but research is
continuing at the highest level. Among the many experiences, here are two.
MIREL - MIning and REasoning with Legal texts - is a European
Union research project. The MIREL project will create an international and
intersectoral network to define a formal framework and develop tools
for MIning and REasoning with Legal texts, in order to translate these legal
texts into formal representations that can be used for consultation
of standards, verification of compliance and support for decision
making. The development of the MIREL framework and tools will
be guided by the needs of three industry partners and validated by
industry case studies.
MIREL promotes mobility and exchange of personnel between
SMEs and academia to create an intercontinental environment. It is an
interdisciplinary consortium in the areas of Law and Artificial
Intelligence, including Natural Language Processing, Computer
Ontologies, Argumentation, and Logic and Reasoning. It addresses
both conceptual challenges, such as the role of legal interpretation
in mining and reasoning, and computational challenges, such as
the handling of large legal data, and the complexity of regulatory
compliance. It bridges the gap between the community working on
legal ontologies and NLP analysts and the community working on
methods of reasoning and formal logic. It is also the first project
of its kind to involve industry partners in the future development
of innovative products and services in legal reasoning and market
deployment.
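To give a rough flavour of what “translating legal texts into formal representations” for compliance checking can mean in practice, the Python sketch below encodes two invented, data-protection-flavoured provisions as machine-checkable rules and queries them against the facts of a hypothetical case. It does not follow MIREL’s actual formalisms or deliverables; it only illustrates the general idea of norms becoming objects that a program can consult.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    identifier: str
    description: str
    applies_to: Callable[[dict], bool]    # is this norm relevant to the case?
    is_satisfied: Callable[[dict], bool]  # does the case comply with it?

# Two invented provisions, loosely inspired by data protection rules.
NORMS = [
    Norm("ART-13", "Data subjects must be informed before personal data is collected",
         applies_to=lambda case: case.get("collects_personal_data", False),
         is_satisfied=lambda case: case.get("notice_given", False)),
    Norm("ART-32", "Personal data must be stored encrypted",
         applies_to=lambda case: case.get("collects_personal_data", False),
         is_satisfied=lambda case: case.get("data_encrypted", False)),
]

def compliance_report(case):
    # Check every applicable norm against the facts of the case.
    report = []
    for norm in NORMS:
        if norm.applies_to(case):
            status = "compliant" if norm.is_satisfied(case) else "VIOLATION"
            report.append(norm.identifier + ": " + status + " - " + norm.description)
    return report

case = {"collects_personal_data": True, "notice_given": True, "data_encrypted": False}
print("\n".join(compliance_report(case)))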
Promethea comes from a multidisciplinary team at the Public
Prosecutor’s Office of the City of Buenos Aires (CABA), which believes that
artificial intelligence can help the justice system. To achieve this, together
with specialists in artificial intelligence, the team developed Promethea,
a system designed to predict the solution of simple legal cases.
The team that created Promethea is led by two Buenos
Aires justice officials: Juan Corvalán - deputy attorney general for
administrative and tax matters in the Public Prosecutor’s Office -
and Luis Cevasco - deputy attorney general in charge of the General
Prosecutor’s Office.
The system was tested with 161 files on topics considered
feasible for this kind of development - among them, procedural
issues, expiration, public employment and the right to housing - and
it showed 98% efficiency. The idea of this type of program is not
to replace judicial officials and lawyers. It is essential that
behind Promethea there is always a person of flesh and blood who,
with his or her natural, and not artificial, intelligence, defines whether
the system’s proposal is adequate or not.
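As a purely illustrative analogy (and not Promethea’s actual architecture, data or method), a suggestion system for simple, repetitive cases can be imagined as little more than matching a new case against previously decided ones and proposing the outcome of the most similar precedent, with a human official always validating the draft. The toy examples and the word-overlap similarity below are invented for the sketch.

from collections import Counter

# Invented past decisions: (short case description, outcome label).
PAST_CASES = [
    ("claim filed after the statutory deadline expired", "dismiss_as_time_barred"),
    ("appeal lodged outside the procedural time limit", "dismiss_as_time_barred"),
    ("public employee requests unpaid salary arrears", "order_payment"),
    ("worker claims overdue wages from the city government", "order_payment"),
    ("family without shelter requests emergency housing", "grant_housing_measure"),
]

def tokens(text):
    return Counter(text.lower().split())

def similarity(a, b):
    # Simple word-overlap score between two bags of words.
    return sum((a & b).values()) / max(1, sum((a | b).values()))

def suggest_outcome(new_case):
    new_bag = tokens(new_case)
    best = max(PAST_CASES, key=lambda c: similarity(new_bag, tokens(c[0])))
    return best[1]   # a draft suggestion; the human official reviews and signs

print(suggest_outcome("request for payment of overdue wages by a public employee"))
# -> order_payment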
6. Some Comments About The Fear That Artificial
Intelligence, Overcoming The Natural One, Dominates Our
Actions.
7. In Conclusion
The best way to catch the technology train is not to chase it,
but to be at the next station. In other words, we need to anticipate
and direct the ethical development of technological innovation. And
we can do this by looking at what is feasible; privileging, within this,
what is ambitious; then what is socially acceptable; and then, ideally,
choosing what is socially preferable and compatible with the sustainability
of the biosphere; without this last element, our current equation is incomplete.
We have shown that decision-making is increasingly aided
by digital programs and that they can often decide directly if there is
sufficient control before, during and after the decision.
We saw that law has always been close to formalized formulations,
from Roman epigraphs to the theory of argumentation and to the profuse
normalizing activity in law of the last forty years.
Logic was an extraordinary instrument in this legal passage
to the formal, and we believe we have demonstrated that logic is purely
syntactic, like computer programs, which explains the capacity and
speed of legal decision making.
We are living in an era of extraordinary growth in information
and its dissemination. The expression Big Data has a specific meaning
even if it is not transparent to everyone. And here appears the second
characteristic of our time: just as after Plato and much more after
Gutenberg, the great theme was to make the population literate, today
we have a similar problem: an important part of the population lacks
the knowledge needed to use the computer media through which
almost all jobs and social services are being transformed by
e-government.
Curiously enough, we are living in a hinge time in which the
three cultures coexist on Planet Earth: oral, written and cybernetic.
But the latter has a speed of development and a force of
expansion that does not allow for the long times of literacy. This is a
demanding world, now! And the law and many functions of the state
cannot wait for “reasonable” times of knowledge. Those who are left
behind will be the lumpen of the near future: not in 2100, but in 2050!
And, last but not least, this all comes together with ethical
problems that we can’t ignore.
Obviously, we have to deal with ethical problems, or they
will come to us later. Let’s stop arguing uselessly about whether machines
are going to govern man, a subject for unemployed philosophers, and let’s
deal with concrete and very close issues. Of course, any program,
especially if it can learn and has concrete directives about its purpose,
will tend to achieve it more and more. But whether it is software or, as the
popular imagination would have it, a robot, the issue is that in achieving its
purpose it has no limits other than those we set for it. Otherwise, since it
cannot have an ethic because it is not conscious, it will pursue its goal anyway.
In the long term, people (as users, consumers, citizens,
patients, etc.) are limited in what they can or cannot do by organizations,
for example, companies, which are limited by law, but the latter is
formed and limited by ethics, which is where people decide what kind
of society they want to live in.
New Perspectives On Ethics And The Laws Of Artificial
Intelligence
Eduardo Magrani
Ph.D. and Vice President of the National Institute for Data
Protection in Brazil. Global Senior Fellow at the International Cooperation
Program of Konrad Adenauer Foundation (EIZ-Fellowship für nachhaltige
Entwicklung und internationale Zusammenarbeit von Konrad-Adenauer-
Stiftung/KAS).
Associate Researcher at Latam Digital Center in Mexico.
Introduction18
21 The 2005 UN Robotics Report defines a robot as a semi or fully autonomous re-
programmable machine used for the well-being of human beings in manufacturing
operations or services.
public spheres, it becomes increasingly important to design a type of
functional morality that is sensitive to ethically relevant characteristics
and applicable to intended situations (Verbeek, 2011).
A good example is Microsoft’s robot Tay, which helps to
illustrate the effects that a non-human element can have on society.
In 2016, Microsoft launched an artificial intelligence program
named Tay. Endowed with a deep learning22 ability, the robot shaped
its worldview based on online interactions with other people, producing
authentic expressions based on them. The experience,
however, proved to be disastrous and the company had to deactivate
the tool in less than 24 hours due to the production of worrying results.
The goal was to get Tay to interact with human users on
Twitter, learning human patterns of conversation. It turns out that
in less than a day, the chatbot was generating utterly inappropriate
comments, including racist, sexist and anti-Semitic publications.
In 2015, a similar case occurred with “Google Photos”. This
was a program that also learned from users to tag photos automatically.
However, its results were also outright discriminatory, and it was
noticed, for example, that the bot was labeling Black people as
gorillas.
The implementation of programs capable of learning and
adapting to perform functions that relate to people creates new ethical
and regulatory challenges, since it increases the possibility of obtaining
results other than those intended, or even totally unexpected ones.
In addition, these results can cause harm to other actors, such as the
discriminatory offenses generated by Tay and Google Photos.
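Footnote 22 below describes deep learning as machine learning organized in consecutive layers, each building on the output of the previous one. The minimal sketch that follows only illustrates that layered structure by passing one random example through a tiny, untrained network; it says nothing about how Tay or Google Photos were actually built.

import numpy as np

rng = np.random.default_rng(0)

def dense_layer(inputs, weights, bias):
    # One layer: a linear map followed by a simple non-linearity (ReLU).
    return np.maximum(0.0, inputs @ weights + bias)

# Three stacked layers: 8 input features -> 16 -> 16 -> 2 output scores.
layer_shapes = [(8, 16), (16, 16), (16, 2)]
parameters = [(rng.normal(size=shape), np.zeros(shape[1])) for shape in layer_shapes]

def forward(features):
    # Each layer consumes the previous layer's output, as footnote 22 describes.
    activation = features
    for weights, bias in parameters:
        activation = dense_layer(activation, weights, bias)
    return activation

example_input = rng.normal(size=(1, 8))   # one example with 8 input features
print(forward(example_input))             # two output scores from the final layer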
Particularly, the use of artificial intelligence tools that interact
through social media requires reflection on the ethical requirements
22 “Deep learning is a subset of machine learning in which the tasks are broken down
and distributed onto machine learning algorithms that are organized in consecutive
layers. Each layer builds up on the output from the previous layer. Together the layers
constitute an artificial neural network that mimics the distributed approach to prob-
lem-solving carried out by neurons in a human brain.” Available at: http://webfounda-
tion.org/docs/2017/07/AI_Report_WF.pdf.
that must accompany the development of this type of technology. This
is because, as previously argued, these mechanisms also act as agents
in society, and end up influencing the environment around them, even
though they are non-human elements. It is not, therefore, a matter of
thinking only about the “use” and “repair” of new technologies, but
mainly about the proper ethical orientation for their development
(Wolf et al., 2017).
Microsoft argued that Tay’s malfunctioning was the result
of an attack by users who exploited a vulnerability in its program.
However, for Wolf et al., this does not exempt them from the
responsibility of considering the occurrence of possible harmful
consequences with the use of this type of software. For the authors,
the fact that the creators did not expect this outcome is part of the very
unpredictable nature of this type of system (Wolf et al., 2017).
The attempt to make artificial intelligence systems
increasingly adaptable and capable of acting in a human-like manner
makes them present less predictable behaviors. Thus, they begin to act
not only as tools that perform pre-established functions in the various
fields in which they are employed, but also to develop a proper way
of acting. They impact the world in a way that is less determinable or
controllable by human agents. It is worth emphasizing that algorithms
can adjust to give rise to new algorithms and new ways to accomplish
their tasks (Domingos, 2015), so that the way the result was achieved
would be difficult to explain even to the programmers who created the
algorithm (Doneda and Almeida, 2016).
Also, the more adaptable the artificial intelligence programs
become, the more unpredictable are their actions, bringing new risks.
This makes it necessary for developers of this type of program to be
more aware of the ethical and legal responsibilities involved in this
activity.
The Code of Ethics of the Association for Computing
Machinery (Wolf et al., 2017) indicates that professionals in the field,
regardless of prior legal regulation, should develop “comprehensive
and thorough assessments of computer systems and their impacts,
including the analysis of possible risks”.
In addition, there is a need for dedicated monitoring to verify
the actions taken by such a program, especially in the early stages of
its implementation. In the Tay case, for instance, developers should
have monitored the behavior of the bot intensely within the first 24
hours of its launch, which is not known to have occurred (Wolf et al.,
2017). The logic should be to prevent possible damages and to monitor
in advance, rather than to remediate losses, especially when
they may be unforeseeable.
To limit the possibilities of negative consequences,
software developers must recognize those potentially dangerous
and unpredictable programs and restrict their possibilities of
interaction with the public until they are intensively tested in a controlled
environment. After this stage, consumers should be informed about
the vulnerabilities of a program that is essentially unpredictable, and
the possible consequences of unexpected behavior (Wolf et al., 2017).
The use of technology, with an emphasis on artificial
intelligence, can cause unpredictable and uncontrollable
consequences, so that often the only solution is to deactivate the
system. Therefore, the increase in autonomy and complexity of the
technical artifacts is evident, given that they are endowed with an
increased agency, and are capable of influencing others but also of
being influenced in the sociotechnical system in a significant way,
often composing even more autonomous and unpredictable networks.
Although there is no artificial intelligence system yet that is
completely autonomous, with the pace of technological development,
it is possible to create machines that will have the ability to make
decisions in an increasingly autonomous way, which raises questions
about who would be responsible for the results of their actions and
eventual damages caused to others (Vladeck, 2014). According to the
report released at the World Economic Forum in 2017: The greatest
threat to humanity lies in delegating authority and decisions to machines
that do not have the intelligence to make them (Cerka, 2015).
[Table row: decisions made out of a finite set of options, according to preset strict criteria; criteria implemented in a legal framework; executed by the machine only (deterministic algorithms/robots); responsibility of the robot’s producer; regulated by legal means (standards, national or international legislation).]
23 The engineers are responsible for thinking about the values that will go into the
design of the artifacts, their function and their use manual. What escapes from the
design and use manual does not depend on the control and influence of the engineer
and can be unpredictable. That’s why engineers must design value-sensitive technical
artifacts. An artifact sensitive to constitutionally guaranteed values (deliberate in the
public sphere) is a liable artifact. It is also necessary to think about the concepts of “in-
clusive engineering” and “explainable AI”, to guarantee non-discrimination and trans-
parency as basic principles for the development of these new technologies.
24 With this regard, to enhance the transparency and the possibility of accountabil-
ity in this techno-regulated context, there is nowadays a growing movement in civil
society demanding the development of “explainable artificial intelligences”. Also, the
debate around a “right to explanation” for algorithmic and autonomous decisions that
took place on discussions around the General Data Protection Regulation (GDPR) is
also a way to achieve the goals of transparency and accountability, since algorithms
are taking more critical decisions on our behalf and it is increasingly hard to explain
and understand their processes.
25 ‘Causal nexus’ is the link between the agent’s conduct and the result produced by it.
“Examining the causal nexus determines which conducts, be they positive or
negative, gave rise to the result provided by law. Thus, to say that someone has caused
a certain fact, it is necessary to establish a connection between the conduct and the
result generated, that is, to verify whether the result was
According to the legal framework we have today, this can lead
to a situation of “distributed irresponsibility” (the name attributed in
the present work to refer to the possible effect resulting from the lack
of identification of the causal nexus between the agent’s conduct and
the damage caused) among the different actors involved in the process.
This will occur mainly when the damage transpires within a complex
socio-technical system, in which the liability of the intelligent thing
itself, or a natural or legal person, will not be obvious.26
caused by the action or omission.”
26 This legal phenomenon is also called by other authors the “problem of the many
hands” or the “accountability gap”.
“(i) Human agency and oversight: A.I. systems should
empower human beings, allowing them to make informed decisions
and fostering their fundamental rights. At the same time, proper
oversight mechanisms need to be ensured, which can be achieved
through human-in-the-loop, human-on-the-loop, and human-in-
command approaches;
(ii) Technical robustness and safety: A.I. systems need to be
resilient and secure. They need to be safe, ensuring a fallback plan in
case something goes wrong, as well as being accurate, reliable and
reproducible. That is the only way to ensure that also unintentional
harm can be minimized and prevented;
(iii) Privacy and data governance: besides ensuring full
respect for privacy and data protection, adequate data governance
mechanisms must also be ensured, taking into account the quality and
integrity of the data, and ensuring legitimized access to data;
(iv) Transparency: the data, system and A.I. business models
should be transparent. Traceability mechanisms can help to achieve
this. Moreover, A.I. systems and their decisions should be explained
in a manner adapted to the stakeholder concerned. Humans need to
be aware that they are interacting with an A.I. system, and must be
informed of the system’s capabilities and limitations;
(v) Diversity, non-discrimination and fairness: unfair bias
must be avoided, as it could have multiple negative implications,
from the marginalization of vulnerable groups, to the exacerbation of
prejudice and discrimination. Fostering diversity, A.I. systems should
be accessible to all, regardless of any disability, and involve relevant
stakeholders throughout their entire life cycle;
(vi) Societal and environmental well-being: A.I. systems
should benefit all human beings, including future generations. It
must hence be ensured that they are sustainable and environmentally
friendly. Moreover, they should take into account the environment,
including other living beings, and their social and societal impact
should be carefully considered;
(vii) Accountability: mechanisms should be put in place to
ensure responsibility and accountability for A.I. systems and their
outcomes. Auditability, which enables the assessment of algorithms,
data and design processes plays a key role therein, especially in critical
applications. Moreover, adequate and accessible redress should be
ensured.”
Similar to this well-grounded initiative, many countries,
companies and professional communities are publishing guidelines
for A.I., with analogous values and principles, intending to ensure the
positive aspects and diminish the risks involved in A.I. development. In
that sense, it is worth mentioning the recent and important initiatives
coming from:
(i) Future of Life Institute – Asilomar AI;
(ii) Berkman Klein Center;
(iii) Institute of Electrical and Electronics Engineers (IEEE);
(iv) Centre for the Study of Existential Risk;
(v) K&L Gates Endowment for Ethics;
(vi) Center for Human-Compatible AI;
(vii) Machine Intelligence Research Institute;
(viii) USC Center for AI in Society;
(ix) Leverhulme Centre for the Future of Intelligence;
(x) Partnership on AI;
(xi) Future of Humanity Institute;
(xii) AI Austin;
(xiii) Open AI;
(xiv) Foundation for Responsible Robotics;
(xv) Data & Society (New York, US);
(xvi) World Economic Forum’s Council on the Future of AI
and Robotics;
(xvii) AI Now Initiative;
(xviii) AI100.
Besides the great advancements on ethical guidelines
designed by the initiatives hereinabove, containing analogous values
and principles, one of the most complex discussions that pervades the
various guidelines that are being elaborated, is related to the question
of A.I.’s autonomy.
The different degrees of autonomy allotted to the machines
must be thought of, determining what degree of autonomy is reasonable
and where substantial human control should be maintained. The
different levels of intelligence and autonomy that certain technical
artifacts may have must directly influence the ethical and legal
considerations about them.
27 The type of insurance that should be applied to the case of intelligent robots and
which agents and institutions should bear this burden is still an open question. The
European Union’s recent report (2015/2103 (INL)) issued recommendations on the
subject, proposing not only mandatory registration, but also the creation of insurance
and funds. According to the European Parliament, insurance could be taken by both
the consumer and the company, in a model similar to that used for car insurance.
The fund could be either general (for all autonomous robots) or individual (for each
category of robot), composed of fees paid at the time of placing the machine on the
Regarding the legal status that could be given to these agents,
the resolution uses the expression “electronic person” or “e-person”. In
addition, in view of the discrepancy between ethics and technology,
the European proposition rightly states that dignity, in a deontological
bias, must be at the centre of a new digital ethics.
The attribution of a legal status to intelligent robots, as
designed in the resolution, is intended to be one possible solution
to the legal challenges that will arise as intelligent Things gain
autonomy. The European Parliament’s report defines
“intelligent robots” as those whose autonomy is established by their
interconnectivity with the environment and their ability to modify
their actions according to changes.
With the purpose of building upon this discussion, the Israeli
researcher Karni Chagal performs an analysis of robot autonomy to
help us differentiate the potential for responsibility in each case. For
Chagal, to resolve the liability issue it is crucial to think about different
levels of robot autonomy (Chagal, 2018). Nevertheless, she is aware
that, given the complexity of artificial intelligence systems, the
classification is difficult to implement, since autonomy is not a
binary classification.
Two possible metrics raised for assessing autonomy are
the machine’s freedom of action relative to the human being and the
capacity of the machine to replace human action. Such metrics are
branched and complex with several possible sub-analyses and,
according to Chagal, these tests should also consider the specific stage
of the machine decision-making process (Chagal, 2018).
To illustrate, Chagal designed the following table (hereunder),
with a metric showing the possibility for machines to substitute
market, and / or contributions paid periodically throughout the life of the robots. It is
worth mentioning that, in this case, companies would be responsible for bearing this
burden. Despite this proposal, however, the topic continues open to debate, with new
alternatives and more interesting models - such as private funds, specific records,
among other possibilities - that will not be the subject of a deep analysis in this thesis.
humans in complex tasks and analyzing also the decision-making
capacity of the machine (Chagal, 2018). The more machines get closer
to a “robot-doctor” stage, the more reasonable it would be to attribute
new forms of accountability, liability, rights or even an electronic
personhood.
29 Parts of this subsection were built upon a recent and unpublished work of the au-
thor, in co-authorship (Magrani, Viola, and Silva, 2019), and cited here to bring an
updated vision of the author in dialogue with other recent publications.
not be applicable. That would give rise to the need to assign rights and
eventually even a specific personality to smart robots with a high
autonomy level, besides the possibility of creating insurance and
funds for accidents and damages involving robots.
Because we are not yet close to a context of substantial or full
robotic autonomy, such as a ‘strong AI’ or ‘general artificial intelligence’,
there is a strong movement against the attribution of legal status to
them. Recently, over 150 experts in A.I., robotics, commerce, law,
and ethics from 14 countries have signed an open letter denouncing
the European Parliament’s proposal to grant personhood status to
intelligent machines.30 The open letter suggests that current robots
do not have moral standing and should not be considered capable of
having rights.
However, as computational intelligence can grow
exponentially, we should deeply consider the possibility of robots
gaining substantial autonomy over the next years, creating a real
need for the attribution of rights.
Considering the myriad of possibilities, the Italian professor
and researcher Ugo Pagallo states (Pagallo, 2018):
30 The characteristics most used for the foundation of the human personality are:
consciousness; rationality; autonomy (self-motivated activity); the capacity to com-
municate; and self-awareness. Another possible, social criterion is to consider someone
a person whenever society thus recognises them (we can even apply the
Habermasian theory here, through a deliberative process in the public sphere). Other
theorists believe that the fundamental characteristic for the attribution of personality
is sensibility, which means the capacity to feel pleasure and pain. The legal concept
of a person is changeable and is constantly evolving. For example, Afro-descendants
have once been excluded from this category, at the time of slavery. Therefore, one
cannot relate the legal concept of a person to Homo sapiens. A reservation is neces-
sary at this point because even if robots can feel and demonstrate emotions as if they
were sensuous, the authenticity of these reactions is questioned since they would not
be genuine, but at most a representation (or emulation), analogous to human actors
when they simulate these emotions in a play, for example, feelings in certain roles, not
being considered by many as something genuine. Because of this, the Italian jus-phi-
losopher Ugo Pagallo calls this ‘artificial autonomy’.
Policy makers shall seriously mull over the possibility of
establishing novel forms of accountability and liability for
the activities of AI robots in contracts and business law,
e.g., new forms of legal agenthood in cases of complex
distributed responsibility. Second, any hypothesis
of granting AI robots full legal personhood has to be
discarded in the foreseeable future. (...) However, the
normative reasons why legal systems grant human and
artificial entities, such as corporations, their status, help
us taking sides in today’s quest for the legal personhood
of AI robots.
31 In the present article, it is argued that the consensus must be constructed accord-
ing to Jurgen Habermas’s proposal, that is, through dialectical conflicts in the public
sphere.
intelligence systems with the capacity of reasoning and learning
according to deep learning techniques in artificial neural networks
(Amaral, 2015).
In view of the increasing risks posed by the advance of techno-regulation, amplified by the dissemination of the "Internet of Things" and artificial intelligence, the rule of law should be seen as the premise for technological development, or as a meta-technology, which should guide the way technology shapes behavior rather than the other way around - an inversion that often results in violations of human and fundamental rights.
For law to act properly as a meta-technology, it must be backed by ethical guidelines consistent with the age of hyperconnectivity. In this sense, it is necessary to understand the capacity of influence of non-human agents in order to achieve better regulation, especially of more autonomous technologies, with a view to preserving the fundamental rights of individuals and the human species itself.
The law, backed by an adequate ethical foundation, will serve as a channel through which data processing and other technological materialities must pass, avoiding a techno-regulation harmful to humanity. In this new role, it is important that the law guide the production and development of Things (technical artifacts) so that they are sensitive to values, for example by regulating privacy, security and ethics by design. As a metaphor, law as meta-technology would function as a pipeline suited to the digital age, through which all content and actions would pass.
With technology moving from a simple tool to an influencing
agent and decision-maker, law must rebuild itself in the techno-
regulated world, incorporating these new elements from a meta-
perspective (as a meta-technology), building the normative basis to
regulate the ethics of new technologies through design. To do so, we
must enhance and foster human-centered design models that are
sensitive to constitutional values (value-sensitive design).
Governing A.I. with the aforementioned ethical principles (fairness; reliability; security; privacy; data protection; inclusiveness; transparency; and accountability) and with the "by design" approach is an important step toward keeping pace with technological innovation while trying to guarantee the effectiveness of the law.
Conclusion
References
Abstract
Error is a key element in machine learning, as every modern, widely used machine learning algorithm performs some procedure to minimize - not to nullify - a loss function, which essentially measures the difference between the output of the predictive model and the function it tries to approximate. In that sense, machine learning algorithms, as they exist today, will always make mistakes, even with large, well-constructed datasets. This limitation, by itself, poses a challenge for lawmakers and jurists, as it may influence regulatory efforts or the possible use of automated systems in real-life applications that may impact the legal field.
Keywords
Machine learning. Statistical learning theory. Law.
Introduction
3. Discussion
33 For a very good example in this matter, refer to WACHTER et al. (2018). Their text
provides an interesting discussion on the possibility of using counterfactual explana-
tions in the context of the European General Data Protection Regulation (GDPR).
algorithms work. When it comes to models that somehow use information relating to human beings, training is always done with data about specific persons, i.e., about the group of people whose data is present in the dataset. However, such models may be used to make individual decisions that impact persons who are not part of that specific group. In that sense, it is possible to ask whether (and in which cases) patterns and correlations identified by an algorithm in a dataset restricted to one group of people should be used to make individual decisions about others.
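To make the point concrete, the sketch below (a minimal Python example using entirely synthetic data invented for this illustration; nothing in it comes from the article) shows a model fitted by minimising a loss on data about one group: the training loss is small but never zero, and the error grows when the same model is applied to individuals outside the group it was trained on.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical "group A": the only people whose data is in the training set.
x_a = rng.uniform(0.0, 1.0, n)
y_a = 2.0 * x_a + rng.normal(0.0, 0.1, n)          # noisy target

# Hypothetical "group B": people outside the training group, with a shifted pattern.
x_b = rng.uniform(1.0, 2.0, n)
y_b = 2.0 * x_b + 0.5 + rng.normal(0.0, 0.1, n)

# Ordinary least squares on group A only (empirical risk minimisation).
X_a = np.column_stack([x_a, np.ones(n)])
w, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)

def mse(x, y):
    """Mean squared error of the fitted model on a given sample."""
    X = np.column_stack([x, np.ones(len(x))])
    return float(np.mean((X @ w - y) ** 2))

print("loss on the training group A:", mse(x_a, y_a))   # small, but not zero
print("loss on the outside group B: ", mse(x_b, y_b))   # noticeably larger
```

Under these assumptions the minimised loss stays strictly positive because of the noise in the data, and the gap between the two printed values illustrates why decisions about persons absent from the dataset deserve separate scrutiny.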
As a matter of fact, machine learning techniques have many
other limitations (MALIK, 2020) that are not addressed by this article
and that should also be taken into account by anyone who intends to
use, study or regulate them.
4. Conclusion
References
Henry Colombi
LLM Student in Private Law at UFMG Law School, Lawyer
Abstract
This article focuses on the impact of Artificial Intelligence (AI) on the legal sector. The first part is dedicated to general points about AI. To face the new challenges, discussions in Europe are moving toward granting legal personhood to AI. In the second part, Prof. Shawn Bayern's theory emerges as an alternative. According to him, it is possible to use an American LLC to encapsulate autonomous systems, letting them act juridically through this legal entity. Subsequently, Bayern's proposition is analyzed from the perspective of Brazilian LLC law, showing that the pace of technological innovation will soon lead to similar legal structures in Brazil.
Keywords
Corporate Law; Artificial Intelligence; Legal Personhood.
Introduction
45 It is important to clarify that Prof. Bayern’s strategy was based on the provisions of
RULLCA and not on any American State’s legislation.
46 UNIFORM LAW COMMISSION. Prefatory Note to ULLCA (2006). Chicago, 2006.
47 PARENTONI, Leonardo Netto; GONTIJO, Bruno Miranda. Competência legislativa
em Direito Societário: Sistemas brasileiro, norte-americano e comunitário europeu.
Revista de Informação Legislativa, Brasília, y. 53, n. 210, p. 239-265, apr./jun. 2016. p.
245.
regulates in a uniform manner the terms of the Brazilian LLC, this field is occupied by the States, which have their own laws. As for the RULLCA, until recently it had been adopted by only 19 of the 50 US States. Despite this fact, the advantages of the LLC model point to its fast expansion to the remaining States48. That is why, for the purposes of comparative law, using RULLCA as a standard for LLC regulation in the US is more accurate than using the specific law of any single American State.
Essentially, there are two documents related to LLC formation. The first one is the certificate of organization, or articles of organization, which is the filing document for an LLC. It includes the mandatory clauses for an LLC according to the applicable State's law. The second one is the operating agreement, which is the "foundational contract among the entity's owners"49, a creature of contract. Under this model, the company's members are free to customize the LLC's terms. More than that, the flexibility of the operating agreement under RULLCA even makes it possible for the members to determine that all the functioning features of an LLC be undertaken by an AI. At least, that is Prof. Bayern's understanding.
The author himself recognizes that, at first, the AI needs a human being or a recognized legal person to file the papers and draft the operating agreement for the LLC. But from the moment the LLC is legally constituted, the founding member could step out of the company and, with the provisions of the operating agreement translated into an algorithmic language, the AI could keep operating autonomously.
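Purely for illustration, and not as a description of Bayern's actual proposal, the following minimal sketch suggests what "translating the provisions of the operating agreement into an algorithmic language" might look like; the clause, the names and the figures are hypothetical and invented for this example.

```python
from dataclasses import dataclass

# Hypothetical clause of an operating agreement, restated as an executable rule:
# "no distribution may reduce the company's cash reserve below a fixed floor".
MINIMUM_RESERVE = 100_000.0   # invented figure standing in for the contractual floor

@dataclass
class CompanyState:
    cash_reserve: float          # cash currently held by the LLC
    pending_distribution: float  # distribution the autonomous system intends to make

def distribution_allowed(state: CompanyState) -> bool:
    """Returns True only if the reserve clause would still be satisfied."""
    return state.cash_reserve - state.pending_distribution >= MINIMUM_RESERVE

# The autonomous system would consult such encoded rules before acting in the LLC's name.
print(distribution_allowed(CompanyState(250_000.0, 80_000.0)))   # True
print(distribution_allowed(CompanyState(120_000.0, 80_000.0)))   # False
```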
Bayern maintains that an AI could be "encapsulated" by an LLC under US uniform law according to the following steps:
50 BAYERN, Shawn. The Implications of Modern Business-Entity Law for the Regu-
lation of Autonomous Systems. Stanford Technology Law Review. Palo Alto, iss. 19, p.
93-112, 2015. p. 101.
51 BAYERN, Shawn. The Implications of Modern Business-Entity Law for the Regu-
lation of Autonomous Systems. Stanford Technology Law Review. Palo Alto, iss. 19, p.
93-112, 2015. p. 101.
52 SECTION 701. EVENTS CAUSING DISSOLUTION. “(a) A limited liability company is
dissolved, and its activities and affairs must be wound up, upon the occurrence of any of
the following: […] (3) the passage of 90 consecutive days during which the company has no
members unless before the end of the period: (A) consent to admit at least one specified person
as a member is given by transferees owning the rights to receive a majority of distributions
as transferees at the time the consent is to be effective; and (B) at least one person becomes
a member in accordance with the consent; […]”. UNIFORM LAW COMMISSION. Uniform
Limited Liability Company Act (2006). Chicago, 2006.
The advantage of Bayern's proposition is that it does not demand any legal reform from the perspective of American corporate law. As is well known, the legislative process comprises several steps, often moves at a slow pace, and does not always produce effective results. Moreover, the recognition of e-personalities by statute, which seems to be the next move in European directives, necessarily makes things more complex than they need to be. Granting personhood not only opens the possibility of owning property, entering into agreements and being responsible for one's own acts, but also raises ethical issues, matters of fundamental constitutional rights and questions of public law.
At this incipient moment, limiting the discussion about AI to corporate law, without immediately granting it legal personhood, postpones the ethical, constitutional and public law debate to a later moment when, hopefully, those issues will be better addressed.
Bayern’s strategic proposition has spread beyond the
borders of American legal debate. In 2017, Shawn Bayern himself
and other European legal scholars, funded by St. Gallen University,
in Switzerland, developed a study about the suitability of this strategy
to German, Swiss and British legal systems. The conclusions of this
working group were published in an article named Company Law
and Autonomous Systems: A Blueprint for Lawyers, Entrepreneurs, and
Regulators in Hastings Science and Technology Law Journal53.
None of the three corporate law systems above appeared to have a legal entity with the same flexibility as the American LLC, although promising prospects were found in the German limited liability company, the GmbH (Gesellschaft mit beschränkter Haftung), in the British limited liability partnership, the LLP, and, to a lesser extent, in the Swiss foundation, the Stiftung.
53 BAYERN, Shawn et al. Company Law and Autonomous Systems: A Blueprint for
Lawyers, Entrepreneurs, and Regulators. Hastings Science and Technology Law Journal.
San Francisco, vol. 9, n. 2, p. 135-161, summer 2017.
If the effort of adapting Bayern's proposition to different legal systems was somewhat fruitful, the same effort could be made concerning Brazilian law. Could this proposition also be suitable for corporate law in Brazil? Indeed, examining the application of Bayern's proposition to other legal systems, such as the Brazilian one, was, in fact, the very objective of his work:
Just like the US, Brazil has its own type of LLC as a corporate entity, the so-called "Sociedade Limitada". As a matter of fact, the Brazilian LLC has an older and more stable tradition than its US counterpart. While the LLC in the US is a product of the late XXth Century, Brazil, inspired by German law, incorporated the figure of the LLC more than one hundred years ago, with the enactment of Decree n. 3.708, of January 10th, 1919. The Decree regulated the Brazilian LLC for more than eighty years, until the Brazilian Civil Code of 2002 came into force in 200355.
54 BAYERN, Shawn et al. Company Law and Autonomous Systems: A Blueprint for Lawyers, Entrepreneurs, and Regulators. Hastings Science and Technology Law Journal. San Francisco, vol. 9, n. 2, p. 135-161, summer 2017. p. 136.
The Brazilian Decree n. 3.708 was an answer to the needs of the bourgeoisie: more flexibility, less bureaucracy and limited liability for all members.
The main feature of a Brazilian LLC is its flexibility. Its founding act is a declaration of will which allows the members the freedom to contract with one another upon whatever terms they consider best suited to their business, as long as those terms do not conflict with mandatory requirements of law.
Here we can see some similarities between the American LLC, as established by RULLCA, and the traditional Brazilian LLC. That said, if it is conceivable, according to Prof. Bayern's insight, that the operating agreement makes it possible for an American LLC to "encapsulate" an AI so that it can act juridically in an autonomous manner, can the Brazilian LLC's constitutional document do the same?
The similarities between the American LLC and the Brazilian LLC concerning their flexibility seem to indicate a positive answer. Nonetheless, a safe opinion on the matter depends on a closer analysis of the LLC regulation in the Brazilian Civil Code and of its legal interpretation.
The main difficulty in using an LLC to encapsulate an AI in Brazil is whether a company without members is admissible. In fact, the Brazilian Civil Code does not refer to an LLC without members. The LLC's legal provisions govern the company's members, their internal relationship and their interaction with the legal entity. According to article 981 of the Brazilian Civil Code, companies are defined as a contract, an agreement celebrated between two or more people who reciprocally undertake to contribute, with goods or services, to the exercise of an economic activity and to the sharing of its results. So the Brazilian LLC, under the provisions of the Civil Code, is traditionally seen from the perspective of a contract56. Even though the association of the LLC with the contractual view is a parti pris in Brazilian corporate law, its constitutional act having been called a "contract" since the enactment of Decree n. 3.708/1919, the theorization of contractualism in corporate law as it is now known in Brazil was imported from mid-XXth Century Italian doctrine57.
55 This overview of the Brazilian LLC is based on: CHAVES, Natália Cristina. Casamento, divórcio e empresa: questões societárias e patrimoniais. Belo Horizonte: D'Plácido, 2018. part 1, chapter 1: "Panorama das Sociedades Limitadas no Brasil". p. 27-52.
It was due mainly to the influence of Tullio Ascarelli that contractual theories spread in Brazil58. Ascarelli, opposing the institutionalist view59, developed the concept of the plurilateral contract, positing a cooperative behavior among partners to achieve a common goal60.
The contractual view of the Brazilian LLC can be inferred not only from the aforementioned article 981 but also from other provisions of the Civil Code. For instance, article 1.052 limits the members' liability to the payment of the quotas they subscribed. However, all partners are jointly liable for the full payment of the corporate capital61.
56 SALOMÃO FILHO, Calixto. O novo direito societário. 4ª ed. São Paulo: Malheiros,
2011. p 38.
57 SALOMÃO FILHO, Calixto. O novo direito societário. 4ª ed. São Paulo: Malheiros,
2011. p. 28-31.
58 BORBA, José Edwaldo Tavares. Direito Societário. 10ª ed. Rio de Janeiro: Renovar,
2007. p. 32.
59 ASCARELLI, Tullio. Problemas das sociedades anônimas e direito comparado. 2ª ed. São
Paulo: Saraiva, 1969. p. 265.
60 ASCARELLI, Tullio. Problemas das sociedades anônimas e direito comparado. 2ª ed. São
Paulo: Saraiva, 1969. p. 266.
61 The original version of this article is reproduced in the following text: “Na socieda-
de limitada, a responsabilidade de cada sócio é restrita ao valor de suas quotas, mas todos
respondem solidariamente pela integralização do capital social”. In free translation: “In
the LLC each member’s liability is limited to the value of its own quotas, but all of them are
responsible for the payment of the corporate capital”.
Article 1.053 states that the legal provisions of the Simple Company (which has a contractual structure par excellence) are subsidiarily applicable to the Brazilian LLC62. For this reason, article 997, which governs the constitution of the Simple Company, also applies to the Brazilian LLC, establishing that "The company is constituted by a written contract, in public or private form, which, in addition to the clauses stipulated by the parties, will mention: [...]"63.
Following article 1.053 of the Civil Code, the subsequent articles cover the members' quotas, the company's administration, general meetings, partial corporate dissolution and, finally, causes of extinction. Concerning the causes of total dissolution, it is worth noting that, until recently, the absence of more than one member was considered a cause of total dissolution of a Brazilian LLC, according to article 1.033, item IV, of the Civil Code, applicable to this type of company. This item expressly provides: "the company shall be dissolved when the following occurs: [...] IV – the lack of a plurality of members, not restored within the period of one hundred and eighty days"64.
Considering all these legal provisions, the contractual view of companies seems to be endorsed by the Brazilian legal system. From this perspective, thinking of a Brazilian LLC without any members would be a contradictio in terminis. Nevertheless, in at least one hypothetical circumstance, this situation could arise.
Suppose that all the members of a specific Brazilian LLC die (in an airplane accident, for example) and that the company is managed by a non-member administrator who keeps the business running. Are the acts performed by the administrator after the members' death legally valid? Does the LLC need to be immediately dissolved?
62 In the original text: "Art. 1.053. A sociedade limitada rege-se, nas omissões deste Capítulo, pelas normas da sociedade simples". In free translation: "Art. 1.053. The LLC is governed, in the omissions of this chapter, by the rules of the simple company".
63 Free translation. In the original text: "Art. 997. A sociedade constitui-se mediante contrato escrito, particular ou público, que, além de cláusulas estipuladas pelas partes, mencionará: [...]".
64 Free translation. In the original text: "Art. 1.033. Dissolve-se a sociedade quando ocorrer: [...] IV - a falta de pluralidade de sócios, não reconstituída no prazo de cento e oitenta dias".
It is true that, according to Brazilian succession law, all the possessions of a deceased person pass on to their heirs immediately (article 1.784 of the Civil Code65). However, even admitting that the patrimonial rights over the member's quotas in the Brazilian LLC are immediately transferred to the heirs, the status of member does not automatically pass on. An amendment to the constitutional act is necessary.
Although the situation above may be considered unusual, it could really happen. Unfortunately, there is no specific provision in the Brazilian legal system addressing this practical problem.
The company’s contractual view does not offer a satisfactory
solution. However, the legal landscape started changing in 2011, with
the introduction of the individual limited liability enterprise (EIRELI).
It meant a significant step in the direction of the institutionalist view
of corporate law. In this new scenario, a company with no-member is
more palpable.
65 Freely translated as: "Art. 1.784. Once the succession is opened, the inheritance is immediately passed on to the legitimate and testamentary heirs". In the official text: "Art. 1.784. Aberta a sucessão, a herança transmite-se, desde logo, aos herdeiros legítimos e testamentários".
jurists then sensitive to the importance of corporations not only as a tool for private prosperity, but also as a fundamental asset for a country's economic stability. In this sense, the relevance of the corporation, far beyond a mere contract (an agreement celebrated between the parties), was emphatically remarked upon by the doctrine66.
The contours of the institutionalist view were well synthesized and spread by the French jurist and sociologist Maurice Hauriou. One of the key names of French public law, Hauriou wrote an essay on corporate institutions that deeply impacted the debate on corporate law. This essay has influenced even Brazilian jurists, such as Fran Martins67. Hauriou identified three elements of the corporate institution:
We already know that there are three elements of any corporate institution: 1) the idea of a work to be realized in a social group; 2) the power organized for the realization of this idea; 3) the community manifestations produced in the social group in relation to the idea and its realization68.
The institutionalist view, as theorized by the German jurists and by Hauriou, focuses on the corporation's social role. The importance of a company, such as an LLC, extends beyond the interests and even the presence of its members. It resides, precisely, in the social role that this legal entity performs: creating wealth, providing jobs, fostering innovation, and so on.
66 SALOMÃO FILHO, Calixto. O novo direito societário. 4ª ed. São Paulo: Malheiros,
2011. p. 32-34.
67 CHAVES, Natália Cristina. Casamento, divórcio e empresa: questões societárias e pat-
rimoniais. Belo Horizonte: D’Plácido, 2018. p. 45, note 63.
68 Free translation. In the original text from the Italian edition: "Già sappiamo che sono tre gli elementi di qualsiasi istituzione corporativa: 1) l'idea dell'opera da realizzare in un gruppo sociale; 2) il potere organizzato per la realizzazione di questa idea; 3) le manifestazioni comunitarie che si producono nel gruppo sociale in rapporto all'idea e alla sua realizzazione". (HAURIOU, Maurice. Teoria dell'istituzione e della fondazione. Trasl. Widar Cesarini Sforza. Milan: Giuffrè, 1967. p. 14).
In the Brazilian legal scenario, Prof. Calixto Salomão Filho, Full Professor of Commercial Law at the University of São Paulo, has long maintained, with broad repercussion, that the current understanding of corporate law in Brazil must not be restricted to a merely contractual view. For the author, even under Brazilian law and despite the first impression given by the Civil Code, a corporation is more than the sum of its members.
Salomão Filho proposes in his well-known works that corporate law should abandon its original contractual view, turning to an institutional view. It would be unfair to reduce the author's perspective to the strict institutionalism developed in Germany during the inter-war period, which conceived the corporation solely as a social tool69. As a matter of fact, Salomão Filho refers to his theory as organizational, not properly institutionalist. Nonetheless, even though Prof. Salomão Filho's contemporary view is capable of avoiding the common criticisms addressed to classical institutionalism, his theory's premises and those of institutionalism are very much alike. The author himself recognizes it: "the organizational theory, when well applied, is not a return to the individualism of the contractualists, but, in fact, a step forward in relation to institutionalism in the defense of the public interest"70.
This remark is not a criticism of Prof. Salomão Filho's work. On the contrary, the institutionalist view, in the more contemporary perspective sustained by Salomão Filho, can be a powerful tool for solving serious legal issues concerning corporate law, such as the use of an LLC as a legal receptacle for AI.
69 This paragraph is a summary of Prof. Calixto Salomão Filho’s ideas published in the
book: SALOMÃO FILHO, Calixto. O novo direito societário. 4ª ed. São Paulo: Malheiros,
2011. p. 27-51.
70 Free translation. In the original text: “a teoria organizativa, quando bem aplicada,
não é um retorno ao individualismo dos contratualistas, mas sim um passo avante em rela-
ção ao institucionalismo na defesa do interesse público”. SALOMÃO FILHO, Calixto. O novo
direito societário. 4ª ed. São Paulo: Malheiros, 2011. p. 52.
In the Brazilian legal system, the path toward the institutionalist view began to be trodden more significantly in 2011, with the introduction of a legal entity constituted by a sole member: the EIRELI. Although this legal person was not defined as a company, it remained (and still is) halfway between that legal structure (a company) and the individual businessman, distancing itself from the contractual approach.
The EIRELI was incorporated into article 980-A of the Brazilian Civil Code by Federal Law n. 12.441 of 2011. Its doctrinal characterization is deeply controversial71. Despite not being a company or a corporation, this new legal entity, owned by a sole person (natural or legal), is governed, in the omissions of article 980-A, by the rules of the Brazilian LLC. Thus, the liability of the sole member is limited. However, unlike the Brazilian LLC, an EIRELI requires a capital of at least one hundred minimum wages.
After the EIRELI came the single-member law firm. This legal person was introduced by Federal Law n. 13.247 of 2016 into the Brazilian Bar Statute (Federal Law n. 8.906/1994) to authorize individual lawyers to create a legal person through which to exercise their professional activities. With regard to the single-member law firm, the legislation considered this new entity a type of company, moving a little further in the direction of the institutionalist view.
Since it is impossible for one person to celebrate an agreement with themselves, and yet this sole person can create an EIRELI or even constitute a single-member law firm, those entities are not contractual in nature. That is why their creation by law represented a break with the contractual perspective.
This paradigm shift was accelerated with the acceptance of a Brazilian LLC composed of a sole member. This possibility was introduced by Federal Law n. 13.874/2019, which amended article 1.052 of the Civil Code to expressly admit the LLC constituted by a single person72.
72 According to the first paragraph: "[...]§1º A sociedade limitada pode ser constituída por 1 (uma) ou mais pessoas". Free translation: "[...] §1º The LLC can be constituted by one person or more".
73 According to article 4º of Normative Instruction DREI n. 63/19, the article 1.033,
item IV, of the Civil Code does not apply to single-member LLC. In the original text:
“Não se aplica às sociedades limitadas, que estiverem em condição de unipessoalidade, o
disposto no inciso IV do art. 1.033 do Código Civil”. (BRAZIL. Ministério da Economia.
Normative Instruction DREI 63, 2019. Available at: <http://www.mdic.gov.br>. Accessed
on 30 Jan. 2020).
In Salomão Filho's words: "Once the company is seen as an organization and not as a plurality of members, it becomes quite evident that both the single-member company and the company without members are admissible. Indeed, it is in these structures that the contract which gives life to the company acquires its pure organizational value, meaning that it has as its object exclusively the structuring of a bundle of contracts"74.
74 Free translation. In the original text: “Uma vez vista a sociedade como organização e
não como uma pluralidade de sócios é bastante evidente como tanto a sociedade unipessoal
como a sociedade sem sócio são admissíveis. Aliás, é nessas estruturas que o contrato que dá
vida à sociedade adquire seu valor organizativo puro, ou seja, passa a ter como objeto ex-
clusivamente estruturar um feixe de contratos”. SALOMÃO FILHO, Calixto. O novo direito
societário. 4ª ed. São Paulo: Malheiros, 2011. p. 50.
Back then, Borges already defended an institutionalist comprehension of corporate law, based on mid-XXth Century German theories75. According to him, the relevance of the continuity of the business justified its maintenance even if the company was reduced to one member, or to no member at all76.
The scenario depicted by Borges, in which a Brazilian LLC could rid itself of all its members by acquiring its own quotas, is slightly different in contemporary Brazilian corporate law. In the past, article 8º of Decree n. 3.708/1919 expressly authorized the LLC to acquire its own quotas. Nonetheless, as previously said, when the Brazilian Civil Code entered into force in 2003, Decree n. 3.708/1919 was revoked, and the new Code did not and still does not contain a similar provision.
The lacuna in the Civil Code has raised some doubts about the possibility of the LLC being a member of itself, negotiating its own quotas. The Journey of Private Law (Jornada de Direito Civil), an assembly of legal scholars that issues interpretative statements on controversial questions of Brazilian private law, has opined that "the LLC can acquire its own quotas, observing the conditions established in the Law of Business Corporations"77 (Federal Law n. 6.404/76). At first, the National Department of Business Register and Integration (DREI) rejected the idea of a Brazilian LLC acquiring its own quotas (Normative Instruction DREI n. 10/201378). In 2017, though, the DREI changed its position, enacting Normative Instruction DREI n. 38, which is still in force79. This new Instruction, in its item 3.2.6.1, admits that a Brazilian LLC can acquire its own quotas if its constitutional act provides that the company will also be governed, supplementarily, by the rules applicable to corporations, especially Federal Law n. 6.404/7680.
75 "[...] risking scandalizing many, I confidently take a step further on the path of institutionalizing the LLC; among us, it could occasionally exist not only with a sole member, but without any member at all. The LLC can acquire its own quotas, according to article 8º of Decree n. 3.708, quotas which the company may keep for later assignment or resale. There is no juridical impossibility in the occurrence of such a phenomenon: an LLC that, having acquired, in strict observance of all legal formalities, the totality of its own quotas, transforms itself into a company with no members". Free translation. In the original text: "[...] embora correndo o risco de escandalizar a muitos, dou convictamente um passo a mais no caminho da institucionalização da sociedade por quotas de responsabilidade limitada; entre nós ela poderá existir ocasionalmente, não apenas com sócio único. Mas sem qualquer sócio... Podendo ela adquirir as próprias quotas, nos têrmos do art. 8º do Decreto n. 3.708, quotas que ela pode conservar em carteira para ulterior cessão ou revenda, não existe juridicamente, nenhuma impossibilidade na ocorrência de tal fenômeno: uma sociedade por quotas de responsabilidade limitada que, havendo adquirido, com estrita observância de todas as formalidade legais, totalidade de suas quotas transformou-se em uma sociedade sem sócios". (BORGES, João Eunápio. Sociedade por quotas – liquidação. Revista Forense, São Paulo, y. 63, i. 763-764-765, v. 217, jan./mar. 1967).
76 CHAVES, Natália Cristina. O menor empresário na sociedade limitada unipessoal. Revista de Direito Empresarial, Curitiba, n. 3, jan./jun. 2005. p. 143.
Accepting that an LLC can acquire its own quotas makes possible the adaptation of Prof. Bayern's proposal to the Brazilian landscape.
This possibility is reinforced by the absence of a rule demanding the presence of members as a requirement for the company's juridical validity after its creation. In fact, the existence of one or more members is required only for the constitution of the LLC, not for its maintenance thereafter.
Brazilian private law doctrine, influenced by the jurist Pontes de Miranda, usually segments juridical acts ("negócios jurídicos"), such as the constitution of a company, into three planes: existence, validity and efficacy81. In this line of thought, a company needs a member only to validly come into existence. After its creation, the company can exercise its activities, attaining juridical efficacy.
77 Free translation. In the original text: "A sociedade limitada pode adquirir suas próprias quotas, observadas as condições estabelecidas na Lei das Sociedades por Ações". CONSELHO DA JUSTIÇA FEDERAL. IV Jornada de Direito Civil, Enunciado 391. Brasília, 2006.
78 BRAZIL. Ministério da Economia. Normative Instruction DREI 10, 2013. Available at:
<http://www.mdic.gov.br>. Accessed on 30 Jan. 2020.
79 BRAZIL. Ministério da Economia. Normative Instruction DREI 38, 2017. Available at:
<http://www.mdic.gov.br>. Accessed on 30 Jan. 2020.
80 In Private Law, what is not prohibited is permitted. So, even without this provision,
in our opinion, the LLC could acquire its own quotas.
81 PONTES DE MIRANDA, Francisco Cavalcanti. Tratado das Ações. V. 1. São Paulo: RT,
1970. p. 4.
In addition, Brazilian corporate law is facing a movement towards more economic freedom. Federal Law n. 13.874/2019, known as the Law of Economic Freedom, reduces state intervention in private business. One of its principles is that state intervention in the exercise of economic activities should be subsidiary and exceptional (article 2º, item III). The free will of the parties plays an important role: in entrepreneurial business, party autonomy prevails over Brazilian business law, except in matters of public order (article 3º, item VIII).
Considering this movement, together with the absence of a rule prohibiting the maintenance of a company without members or prohibiting the company from being a member of itself, the application of Bayern's theory in Brazil becomes more tangible.
There is just one rule in Brazilian corporate law that seems to present an obstacle to AI's full potential to act juridically. Article 1.060 of the Brazilian Civil Code provides that the LLC administrator must be a person (a member or a non-member). Moreover, article 997 of the same Code (which applies to the LLC) requires that the company be managed by a natural person. There is some controversy on this matter, especially because, in some situations, as in bankruptcy, for example, the company can be run by a legal person82. Under the current position of the DREI, in Normative Instruction n. 38 of 2017 (item 1.2.8, b), an LLC cannot be administered by a legal person. Be that as it may, the requirement of a natural person to administer an LLC, even if not a member, represents a substantial restriction on AI autonomy within a company, as the AI will not be able to manage it.
On the other hand, this obstacle solves one of the crucial juridical problems of AI: responsibility83. Ordinarily, the responsibility for the AI's acts will be borne by the LLC. As the AI's commands will be juridically performed in the name of the LLC, the company will be responsible for any contractual or tortious damages that may occur under various conditions.
Nonetheless, it is undeniable that, in some cases, abuses can happen, and measures must be taken to prevent unjust damages from remaining uncompensated. In this context, a human administrator can be the solution to the problem of liability.
82 Article 21 of Brazilian bankruptcy law provides that the judicial administrator can be a specialized legal person.
83 CHAVES, Natália Cristina. Inteligência Artificial: os novos rumos da responsabilidade civil. In: GONÇALVES, Anabela Susana de Sousa et al. (Coords.). Direito civil contemporâneo. Braga: CONPEDI, 2017.
84 "Art. 50. In case of abuse of legal personality, characterized by deviation of purpose or by patrimonial confusion, the judge may, upon request of the interested party or of the public prosecutor's office, when its intervention is due, pierce the corporate veil so that the effects of certain and determined obligations are extended to the personal assets of the administrators or members who directly or indirectly benefited from the abuse.
§ 1º For the purposes of this article, deviation of purpose is the use of the legal personality with the purpose of harming creditors or practicing unlawful acts of any kind.
§ 2º Patrimonial confusion means the absence of factual separation between the assets of the legal person and those of its members, characterized by:
I - repeated performance by the legal person of obligations of the member or administrator, or vice versa;
II - transfer of assets or liabilities without effective consideration, except for proportionally insignificant amounts;
III - other acts of noncompliance with patrimonial autonomy".
85 Free translation. In the original text: “Art. 50. Em caso de abuso da personalidade
jurídica, caracterizado pelo desvio de finalidade ou pela confusão patrimonial, pode o juiz, a
requerimento da parte, ou do Ministério Público quando lhe couber intervir no processo, des-
considerá-la para que os efeitos de certas e determinadas relações de obrigações sejam esten-
didos aos bens particulares de administradores ou de sócios da pessoa jurídica beneficiados
direta ou indiretamente pelo abuso. (Redação dada pela Lei nº 13.874, de 2019) § 1º Para os
fins do disposto neste artigo, desvio de finalidade é a utilização da pessoa jurídica com o pro-
pósito de lesar credores e para a prática de atos ilícitos de qualquer natureza. (Incluído pela
Lei nº 13.874, de 2019) § 2º Entende-se por confusão patrimonial a ausência de separação de
fato entre os patrimônios, caracterizada por: (Incluído pela Lei nº 13.874, de 2019) I - cum-
primento repetitivo pela sociedade de obrigações do sócio ou do administrador ou vice-versa;
(Incluído pela Lei nº 13.874, de 2019) II - transferência de ativos ou de passivos sem efetivas
contraprestações, exceto os de valor proporcionalmente insignificante; e (Incluído pela Lei nº
13.874, de 2019) III - outros atos de descumprimento da autonomia patrimonial. (Incluído
In consonance with article 50 of the Brazilian Civil Code, as amended by Federal Law n. 13.874/201984 85, in case of abuse of legal personality, the liability for unlawful acts can be extended to the personal assets of the administrator.
So, even if the company has no members, or even if it is a member of itself, being operated in both cases by an autonomous system, any damage that a person may suffer due to an unlawful act will be compensated (by the company or by the administrator).
These provisions (articles 1.060 and 50 of the Civil Code) can be considered a safety measure, the well-known "human-in-the-loop" solution: even while recognizing the AI's autonomy, at some point a human being intervenes to avoid further damages.
Conclusion
86 LOPUCKI, Lynn M. Algorithmic entities. Washington University Law Review, St. Lou-
is, vol. 95, iss. 4, p. 887-953, 2018.
Cass Sunstein as those rules edited in a hurry to address the worries
of the people87.
At the end of the day, this evolution turns out to be inevitable, and all this apocalyptic fear represents nothing more than pointless angst. As Yuval Noah Harari said in Sapiens:
References
87 BECKER, Daniel; FERRARI, Isabela; ARAUJO, Daniel. Regulation against the ma-
chine: críticas ao PL que busca regular inteligência artificial no Brasil. JOTA Bulletin,
Technology Issue, São Paulo, 2019.
88 HARARI, Yuval Noah. Sapiens: a brief history of humankind. Toronto: Penguin Ran-
dom House, 2014. p. 412.
89 HARARI, Yuval Noah. Sapiens: a brief history of humankind. Toronto: Penguin Ran-
dom House, 2014. p. 414.
BAYERN, Shawn. The Implications of Modern Business-Entity
Law for the Regulation of Autonomous Systems. Stanford Technology
Law Review. Palo Alto, iss. 19, p. 93-112, 2015.
BAYERN, Shawn et al. Company Law and Autonomous
Systems: A Blueprint for Lawyers, Entrepreneurs, and Regulators.
Hastings Science and Technology Law Journal. San Francisco, vol. 9, n. 2,
p. 135-161, summer 2017.
BECKER, Daniel; FERRARI, Isabela; ARAUJO, Daniel.
Regulation against the machine: críticas ao PL que busca regular
inteligência artificial no Brasil. JOTA Bulletin, Technology Issue, São
Paulo, 2019.
BORBA, José Edwaldo Tavares. Direito Societário. 10ª ed. Rio
de Janeiro: Renovar, 2007.
BORGES, João Eunápio. Sociedade por quotas – liquidação.
Revista Forense, São Paulo, y. 63, i. 763-764-765, v. 217, jan./mar. 1967.
BRAZIL. Ministério da Economia. Normative Instruction DREI
10, 2013. Available at: <http://www.mdic.gov.br>. Accessed on 30 Jan.
2020.
BRAZIL. Ministério da Economia. Normative Instruction DREI
38, 2017. Available at: <http://www.mdic.gov.br>. Accessed on 30 Jan.
2020.
BRAZIL. Ministério da Economia. Normative Instruction DREI
63, 2019. Available at: <http://www.mdic.gov.br>. Accessed on 30 Jan.
2020.
CHAVES, Natália Cristina. Casamento, divórcio e empresa:
questões societárias e patrimoniais. Belo Horizonte: D’Plácido, 2018.
CHAVES, Natália Cristina. O menor empresário na sociedade
limitada unipessoal. Revista de Direito Empresarial, Curitiba, n. 3, jan./
jun. 2005.
CHAVES, Natália Cristina. Inteligência Artificial: os novos
rumos da responsabilidade civil. In: GONÇALVES, Anabela Susana
de Sousa et al. (Coords.). Direito civil contemporâneo. Braga: CONPEDI,
2017.
CONSELHO DA JUSTIÇA FEDERAL. IV Jornada de Direito
Civil, Enunciado 391. Brasília, 2006.
EUROPEAN PARLIAMENT. Resolution of 16 February 2017
with recommendations to the commission on civil law rules on robotics.
Brussels, 2017.
GONÇALVES, Oksandro. EIRELI - Empresa Individual de
Responsabilidade Limitada. In: Celso Fernandes Campilongo, Alvaro
de Azevedo Gonzaga e André Luiz Freire (Coords.). Enciclopédia jurídica
da PUC-SP. Issue: Direito Comercial. Fábio Ulhoa Coelho, Marcus
Elidius Michelli de Almeida (Issue coords.). São Paulo: Pontifícia
Universidade Católica de São Paulo, 2017.
HALLEVY, Gabriel. The criminal liability of artificial
intelligence entities - from science fiction to legal social control. Akron
Intellectual Property Journal, Akron, vol. 4, i. 2, 2010.
HARARI, Yuval Noah. Sapiens: a brief history of humankind.
Toronto: Penguin Random House, 2014.
HAURIOU, Maurice. Teoria dell’istituzione e della fondazione.
Trasl. Widar Cesarini Sforza. Milan: Giuffrè, 1967.
LOPUCKI, Lynn M. Algorithmic entities. Washington University
Law Review, St. Louis, vol. 95, iss. 4, p. 887-953, 2018.
PAN, Yunhe. Heading toward artificial intelligence 2.0.
Engineering, Pequim, n. 2, p. 409-413, 2016.
PARENTONI, Leonardo Netto; GONTIJO, Bruno Miranda.
Competência legislativa em Direito Societário: Sistemas brasileiro,
norte-americano e comunitário europeu. Revista de Informação
Legislativa, Brasília, y. 53, n. 210, p. 239-265, apr./jun. 2016.
PARENTONI, Leonardo Netto. Sociedade limitada:
algumas das principais diferenças entre as legislações brasileira e
estadunidense. Revista Opinião Jurídica, Fortaleza, y. 17, n. 24, p.72-98,
jan./apr. 2019.
PONTES DE MIRANDA, Francisco Cavalcanti. Tratado das
Ações. V. 1. São Paulo: RT, 1970.
SALOMÃO FILHO, Calixto. O novo direito societário. 4ª ed. São
Paulo: Malheiros, 2011.
SOUZA, Carlos Affonso Pereira de; OLIVEIRA, Jordan Vinícius
de. Sobre os ombros de robôs? A inteligência artificial entre fascínios
e desilusões. In: FRAZÃO, Ana; MULHOLLAND, Caitlin. Inteligência
artificial e direito: ética, regulação e responsabilidade. São Paulo:
Revista dos Tribunais, 2019.
UNIFORM LAW COMMISSION. Prefatory Note to ULLCA (2006).
Chicago, 2006.
WEINRIB, Ernst. J. The Idea of Private Law. 2 ed. Oxford:
Oxford University Press, 2012.
On The Waterfront: Personal And Non-personal Data At
Both Eu Regulations90
Keywords
93 Also having in mind the EU Archipelago of Intellectual Property Acts, with a reef,
Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004,
on the enforcement
of intellectual property rights; sandbanks, as Directive 2001/29/EC of the European Par-
liament and of the Council of 22 May 2001, on the harmonisation of certain aspects of
copyright and related rights in the information society, and Directive (EU) 2019/790 of
the European Parliament and of the Council of 17 April 2019, on copyright and related
rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC;
and some islands apart like Council Directive 91/250/EEC of 14 May 1991, on the legal
protection of computer programs, Directive 96/9/EC of the European Parliament and
of the Council of 11 March 1996, on the legal protection of databases, Directive 98/71/
EC, of the European Parliament and of the Council of 13 October 1998, on the legal
protection of designs, also Council Regulation (EC) No 6/2002 of 12 December 2001, on
Community designs, Directive 98/44/EC of the European Parliament and of the Coun-
cil of 6 July 1998, on the legal protection of biotechnological inventions, Regulation
(EU) No 1257/2012 of the European Parliament and of the Council of 17 December
2012, implementing enhanced cooperation in the area of the creation of unitary pat-
ent protection, both complementing the Convention on the Grant of European Patents,
of 5 October 1973, Regulation (EU) 2017/1001 of the European Parliament and of the
Council of 14 June 2017, on the European Union trade mark, Directive (EU) 2015/2436
of the European Parliament and of the Council of 16 December 2015, to approximate
the laws of the Member States relating to trade marks, Regulation (EU) No 1151/2012 of
the European Parliament and of the Council of 21 November 2012 on quality schemes
for agricultural products and foodstuffs and Regulation (EU) 2018/848 of the European
Parliament and of the Council of 30 May 2018, on organic production and labelling of
organic products; and a marsh, Directive (EU) 2016/943 of the European Parliament
and of the Council of 8 June 2016, on the protection of undisclosed know-how and
business information (trade secrets) against their unlawful acquisition, use and dis-
closure.
2. Even on Wetlands
94 Namely, after Case C‑582/14, Patrick Breyer, of 19 October 2016, reiterated at Case
C-434/16, Peter Nowak, of 20 December 2017, preceded by Article 29 Working Party
Opinion 4/2007, on the concept of personal data, of 20 June 2007. About these issues,
Paul SCHWARTZ and Daniel SOLOVE (2011), Frederik Zuiderveen BORGESIUS (2017),
Nadezhda PURTOVA (2018) and Lorenzo dalla CORTE (2019).
example as a result of their deployment in automated industrial
production processes. Specific examples of non-personal data include
aggregate and anonymized datasets used for big data analytics, data
on precision farming that can help to monitor and optimize the use
of pesticides and water, or data on maintenance needs for industrial
machines.” (Recital 9).
However, the GDPR keeps a strong vis attractiva. Thus, "In the case of a data set composed of both personal and non-personal data, this Regulation applies to the non-personal data part of the data set. Where personal and non-personal data in a data set are inextricably linked, this Regulation shall not prejudice the application of Regulation (EU) 2016/679" (Article 2.2).
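As a rough reading aid only (a toy restatement of the provision quoted above, not an official interpretation of either Regulation), the allocation rule of Article 2.2 can be expressed as a small decision function:

```python
from enum import Enum, auto

class Regime(Enum):
    FFD_ONLY = auto()       # only Regulation (EU) 2018/1807 is at stake
    SPLIT = auto()          # FFD Regulation for the non-personal part of the set
    GDPR_PREVAILS = auto()  # inextricably linked mixed set: the GDPR is not prejudiced

def applicable_regime(contains_personal_data: bool, inextricably_linked: bool) -> Regime:
    """Toy mapping of a data set to the regime suggested by Article 2.2."""
    if not contains_personal_data:
        return Regime.FFD_ONLY
    if inextricably_linked:
        return Regime.GDPR_PREVAILS
    return Regime.SPLIT

print(applicable_regime(contains_personal_data=True, inextricably_linked=True))
# Regime.GDPR_PREVAILS
```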
95 On the tension concerning open data, the reuse of public sector data and data pro-
tection, Katleen JANSSEN and Sara HUGELIER (2013).
96 On the issue, Paul SCHWARTZ and Daniel SOLOVE (2011), again Daniel SOLOVE (2014), Samson Y. ESAYAS (2015), Sophie STALLA-BOURDILLON and Alison KNIGHT (2017), and also, from a technological perspective, Arvind NARAYANAN and Vitaly SHMATIKOV (2008); special attention has to be paid to Big Data Analytics, as shown by Benjamin HABEGGER et al. (2014), Jens-Erik MAI (2016), Alessandro MANTELERO (2016), Nils GRUSCHKA et al. (2018), and also by my paper with Cristiana Teixeira SANTOS (2019).
4. Precautions To Take Before Boarding
97 For the role performed by these evaluations, Niels van DIJK, Raphaël GELLERT and
Kjetil ROMMETVEIT (2016), as well as Raphaël GELLERT (2018).
98 About its scope, besides Article 29 Working Party Opinion 3/2010 on the principle
of accountability, of 13 July 2010, Lachlan URQUHART, Tom LODGE and Andy CRAB-
TREE (2019).
99 Besides the reports commissioned by ENISA to George DANESIS et al. (2014), to Gi-
useppe D’ACQUISTO et al. (2015) and to Marit HANSEN and Konstantinos LIMNIOTIS
(2018), the papers by Lee A. BYGRAVE (2017), Irene KAMARA (2017) and Filippo A.
RASO (2018).
100 For a synthesis, Niels van DIJK, Raphaël GELLERT and Kjetil ROMMETVEIT
(2016), notwithstanding the Guidelines on Data Protection Impact Assessment (DPIA)
and determining whether processing is “likely to result in a high risk” for the purposes
of Regulation 2016/679, from the Article 29 Working Party, of 4 April 2017, revised on
4 October 2017.
design and by default" and at Article 32.2 in relation to the "security of processing") could be highly relevant in order to avoid major rocks101.
A complementary tool could be, when available, a "European cybersecurity certification scheme", particularly one providing a 'substantial' or a 'high' assurance level (as per Art. 52 of Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019, on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification (Cybersecurity Act))102.
101 Apart from the very recent Guidelines 1/2018 on certification and identifying cer-
tification criteria in accordance with Articles 42 and 43 of the Regulation (Version
3.0), of 3 June 2019, adopted by the European Data Protection Board, for a general
approach to this subject, Giovanni Maria RICCIO and Federica PEZZA (2018), as well
as Eric LACHAUD (2018).
102 On the European Union cybersecurity framework, Helena CARRAPIÇO and André BARRINHA (2017) and (2018); more specifically, but from a somewhat outdated perspective, Roksana MOORE (2013), while Christopher KUNER et al. (2017) put the focus on its connections with data protection.
(Article 32.1 a)103, at least, in order to limit the consequences of a
“personal data breach” (Article 34.3 a) and Article 4 12)104.
References
103 About these, Gerald SPINDLER and Philipp SCHMECHEL (2016), in general, as
well as Samson Y. ESAYAS (2015), for the precise context.
104 About the scope of the rules regarding these security incidents, Stephanie von
MALTZAN (2019).
D’ACQUISTO, Giuseppe et al. (2015). Privacy by design in big
data - An overview of privacy enhancing technologies in the era of big
data analytics, ENISA - European Union Agency for Cybersecurity.
DIJK, Niels van; GELLERT, Raphaël; ROMMETVEIT, Kjetil
(2016), “A risk to a right? Beyond data protection risk assessments”,
Computer Law & Security Review, Vol. 32- 2, pp. 286-306.
ESAYAS, Samson Yoseph (2015), “The role of anonymisation
and pseudonymisation under the EU data privacy rules: beyond the
‘all or nothing’ approach”, European Journal of Law and Technology,
Vol. 6-2.
GELLERT, Raphaël (2018), “Understanding the notion of
risk in the General Data Protection Regulation”, Computer Law and
Security Review, Vol. 34-2, pp. 279-288.
GRUSCHKA, Nils et al. (2018), “Privacy Issues and Data
Protection in Big Data: A Case Study Analysis under GDPR”, Proceedings
of the 2018 IEEE International Conference on Big Data, Seattle.
HABEGGER, Benjamin et al. (2014), “Personalization vs.
Privacy in Big Data Analysis”, International Journal of Big Data, Vol.
1, pp. 25-35.
HANSEN, Marit; LIMNIOTIS, Konstantinos (2018),
Recommendations on shaping technology according to GDPR
provisions - Exploring the notion of data protection by default, ENISA
– European Union Agency for Cybersecurity.
JANSSEN, Katleen; HUGELIER, Sara (2013), “Open data as the
standard for Europe? A critical analysis of the European Commission’s
proposal to amend the PSI Directive”, European Journal of Law and
Technology, Vol. 4-3.
KAMARA, Irene (2017), “Co-regulation in EU personal data
protection: the case of technical standards and the privacy by design
standardisation ‘mandate’”. European Journal of Law and Technology,
Vol. 8-1.
LACHAUD, Eric (2018), “The General Data Protection
Regulation Contributes to the Rise of Certification as Regulatory
Instrument”, Computer Law and Security Review, Vol. 43-2, pp. 244-
256.
MAI, Jens-Erik (2016), “Big data privacy: The datafication of
personal information”. The Information Society, Vol. n. 32-3, pp. 192-
199.
MALTZAN, Stephanie von (2019), “No Contradiction Between
Cyber-Security and Data Protection? Designing a Data Protection
Compliant Incident Response System”, European Journal of Law and
Technology, Vol. 10-1.
MANTELERO, Alessandro (2016), “Personal data for decisional
purposes in the age of analytics: From an individual to a collective
dimension of data protection”, Computer Law & Security Review, Vol.
22-2, pp. 238-255.
MASSENO, Manuel David; SANTOS, Cristiana Teixeira (2019),
“Personalization and profiling of tourists in smart tourism destinations
- a data protection perspective”, International Journal of Information
Systems and Tourism, Vol. 4-2, pp. 7-23.
MOORE, Roksana (2013), “The Case for Regulating Quality
within Computer Security Applications”. European Journal of Law and
Technology, Vol. 4-3.
NARAYANAN, Arvind; SHMATIKOV, Vitaly (2008), “Robust De-
anonymization of Large Sparse Datasets”, 2008 IEEE Symposium on
Security and Privacy, Oakland;
OHM, Paul (2010), “Broken Promises of Privacy: Responding
to the Surprising Failure of Anonymization”, UCLA Law Review, Vol.
57, pp. 1701-1777.
PURTOVA, Nadezhda (2018), “The Law of Everything. Broad
Concept of Personal Data and Future of EU Data Protection Law”, Law,
Innovation and Technology, Vol. 10-1, pp. 40-81.
RASO, Filippo A. (2018), “Innovating in Uncertainty: Effective
Compliance and the GDPR”, Harvard Journal of Law & Technology
Digest.
RICCIO, Giovanni Maria; PEZZA, Federica (2018),
“Certification Mechanism as a Tool for the Unification of the Data
Protection European Law”, MediaLaws – Rivista di diritto dei media,
n. 1, pp. 249-260.
ROCHER, Luc; HENDRICKX, Julien M.; MONTJOYE, Yves-
Alexandre de (2019), “Estimating the success of re-identifications
in incomplete datasets using generative models”, Nature
Communications, Vol. 10.
SCHWARTZ, Paul; SOLOVE, Daniel (2011), “The PII Problem:
Privacy and a New Concept of Personally Identifiable Information”,
New York University Law Review, Vol. 86, pp. 1814-1894.
IDEM (2014), “Reconciling Personal Information in the United
States and European Union”, California Law Review, Vol. 102, pp. 877-
916.
SPINDLER, Gerald; SCHMECHEL, Philipp (2016), “Personal
Data and Encryption in the European General Data Protection
Regulation”, JIPITEC - Journal of Intellectual Property, Information
Technology and E-Commerce Law, Vol. 7.
STALLA-BOURDILLON, Sophie; KNIGHT, Alison (2017),
“Anonymous Data v. Personal Data - A False Debate: An EU Perspective
on Anonymization, Pseudonymization and Personal Data”, Wisconsin
International Law Journal, Vol. 34-2, pp. 285-322.
URQUHART, Lachlan; LODGE, Tom; CRABTREE, Andy
(2019), “Demonstrably doing accountability in the Internet of Things”,
International Journal of Law and Information Technology, Vol. 27-1,
pp. 1-27.
Privacy And Personal Data Protection: Uses And Misuses Of
Personal Data Under Brazilian Law
105 apud BRANDEIS, Louis; WARREN, Samuel. The right to privacy. https://www.
cs.cornell.edu/~shmat/courses/cs5436/warren-brandeis.pdf
form, and the remedies for violation of it also simple, but
is not true in a more civilized state, when the relations of
life and the interests arising therefrom are complicated.
106 As seen in the following U.S. Supreme Court cases: Olmstead v. United States and Katz v. United States.
107 http://cdn.loc.gov/service/ll/usrep/usrep489/usrep489749/usrep489749.pdf Avail.
mar. 08 2020.
108 https://www.foia.gov/foia-statute.html Avail. mar. 08 2020.
... in many contexts, the fact that information is not
freely available is no reason to exempt that information
from a statute generally requiring its dissemination.
Nevertheless, the issue here is whether the compilation
of otherwise hard-to-obtain information alters the privacy
interest implicated by disclosure of that information.
There is a vast difference between the public records that
might be found after a diligent search of courthouse files,
county archives, and local police stations throughout the
country and a computerized summary located in a single
clearinghouse of information.
109 Known as the General Data Protection Law, referred to by its initials in Portuguese, LGPD.
this matter had been regulated until then by the intersections of the
Federal Constitution, the Civil Code, the Consumer Protection Code
and the Internet Bill of Rights, like a patchwork under the mantle of
human dignity.
However, the absence of a specific law did not prevent Brazilian courts from ruling on cases of abusive data processing, nor did it stop the Government from investigating violations of consumer relations involving practices offensive to privacy.
Conclusion
References
Abstract
This text explores the approach adopted by the Brazilian data protection framework to the regulation of automated decision-making and its potential for violating data subjects’ rights. We briefly introduce the risks of profiling and data mining and the shortcomings of an approach focused only on empowering the individual. Afterward, we present the rules the Brazilian law brings on the topic and argue that they hold potential both for a collective protection system and for tools for preemptive action to ensure the rights and principles established by the law.
Key Words
Brazilian General Data Protection Law; Automated Decision-
making; Profiling
Introduction: Data Protection Regulation in Brazil
110 BIONI, Bruno. Proteção de Dados Pessoais: a função e os limites do consentimento. Rio
de Janeiro: Forense, 2019.
data protection111. In other words, we believe it is possible to argue
that the LGPD no longer understands the rights it prescribes as solely
individual rights, as it expressly provides for the exercise of collective
actions for the protection of those rights. In addition, we argue that
Brazilian law has promising instruments for combating discrimination
generated by data processing activities.
We propose to develop these core ideas in the remainder of
the text by studying the regulation of automated decision-making and
the right to an explanation. There are several ongoing debates in Europe about whether or not this right exists in the GDPR112. We expect a similar debate to occur in the Brazilian scenario, and because of that, some ideas and propositions will be presented.
In addition to the discussion of whether or not the right to an explanation exists, there is a broader debate about the legal safeguards that apply to automated decision-making, notably the GDPR’s Article 22 and Article 20 of the Brazilian Data Protection Law, and their effectiveness.
We start from the hypothesis that the exercise of the right to
an explanation on an individual scale may not be the best instrument
of protection when confronted with decisions that are often made (or
at least have an effect) at a collective level113.
111 MANTELERO, Alessandro. Personal Data for Decisional Purposes in the Age of Analytics: From an individual to a collective dimension of data protection. Computer Law & Security Review, v. 32, n. 2, p. 238-255, 2016; MITTELSTADT, Brent. From Individual to Group Privacy in Big Data Analytics, Philosophy & Technology, v. 30, n. 4, Dec. 2017, p. 475–494; TAYLOR, Linnet; FLORIDI, Luciano; VAN DER SLOOT, Bart (Eds.). Group Privacy: New Challenges of Data Technologies. Springer International Publishing, 2017.
112 GOODMAN, Bryce; FLAXMAN, Seth. European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”. AI Magazine, v. 38, n. 3, 2017, p. 50–57; SELBST, Andrew D.; POWLES, Julia. Meaningful Information and the Right to Explanation. International Data Privacy Law. Oxford: Oxford University Press, v. 07, n. 04, Nov. 2017, p. 233-242; WACHTER, Sandra; MITTELSTADT, Brent; FLORIDI, Luciano. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, International Data Privacy Law, v. 7, n. 2, May 2017, p. 76–99.
113 MANTELERO, Alessandro. Personal Data for Decisional Purposes in the Age of Analytics: From an individual to a collective dimension of data protection. Computer
In the first part of the text, we will address some of the
problems and rights violations that fully automated decision-making
can cause, seeking to demonstrate that these activities reach beyond the individual’s boundaries, so that neither their regulation nor the exercise of data subjects’ rights can be restricted to the individual scale.
In the second part, we will provide an overview of safeguards
on automated decision-making, focused on Article 22 of the GDPR,
and then draw parallels with what the Brazilian law brings as
alternatives, pointing out differences and similarities. We argue that although the LGPD provides a weaker regulation than the GDPR, some of its provisions, notably the possibility of exercising data subjects’ rights on a collective scale, the principles of non-discrimination and transparency, and the possibility of reversing the burden of proof, appear as promising instruments for the regulation and control of automated decision-making.
Law & Security Review, v. 32, n. 2, p. 238-255, 2016; ROUVROY, Antoinette. “Of Data and Men”. Fundamental Rights and Freedoms in a World of Big Data. Council of Europe, Directorate General of Human Rights and Rule of Law, vol. T-PD-BUR(2015)09REV, 2016, p. 1-37; EDWARDS, Lilian; VEALE, Michael. Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For, Duke Law & Technology Review, v. 16, n. 1, 2017, p. 18-84.
human or nonhuman subject (individual or group) and/
or the application of profiles (sets of correlated data) to
individuate and represent a subject or to identify a subject
as a member of a group or category114.
114 HILDEBRANDT, Mireille. Defining Profiling: A New Type of Knowledge? In: HIL-
DEBRANDT, M.; GUTWIRTH, S. (Eds.) Profiling the European Citizen: Cross-Disciplinary
Perspectives. Cham/SWI: Springer Science, 2008, p. 19.
115 MITTELSTADT, Brent et al. The ethics of algorithms: Mapping the debate. Big Data
& Society, v. 3, n. 2, 2016, p 1-21.
116 MARTINS, Pedro; HOSNI, David. O Livre Desenvolvimento da Identidade Pessoal em meio digital: Para Além da Proteção da Privacidade?. In: POLIDO, Fabrício; ANJOS, Lucas; BRANDÃO, Luíza (Orgs.). Políticas, Internet e Sociedade. 1 ed. Belo Horizonte: IRIS, 2019, p. 46-54.
117 “For purposes of this Law, the data used for formation of the behavioral profile of
a given natural person, if identified, may also be deemed personal data.” Translated
by: BELLI, Luca; LORENZON, Laila; FERGUS, Luã. The Brazilian General Data Protec-
tion Law (LGPD): Unofficial English Version. CyberBRICS Project at FGV Law School.
Available at: https://cyberbrics.info/brazilian-general-data-protection-law-lgpd-unof-
ficial-english-version/. Acessed 25 January 2020.
118 “The data subjects are entitled to request a review, by a natural person, of deci-
behavioral prediction119 and its use in automated decision-making that
can bring consequences to the interests and the rights of the subject120.
Likewise, the GDPR defines profiling in its Article 4(4), with emphasis on its uses for predicting behavior and other socially relevant characteristics of the subject:
sions made only based on the automatized processing of personal data that affects
their interests, including of decisions designed to define their personal, consumption
and credit profile or the aspects of their personality” Translated by: BELLI, Luca; LO-
RENZON, Laila; FERGUS, Luã. The Brazilian General Data Protection Law (LGPD): Unof-
ficial English Version. CyberBRICS Project at FGV Law School. Available at: https://cy-
berbrics.info/brazilian-general-data-protection-law-lgpd-unofficial-english-version/.
Acessed 25 January 2020.
119 ZANATTA, Rafael. Perfilização, Discriminação e Direitos: do Código de Defesa do Con-
sumidor à Lei Geral de Proteção de Dados Pessoais. Available at: http://rgdoi.net/10.13140/
RG.2.2.33647.28328. Acessed 19 November, 2019
120 BIONI, Bruno. Proteção de Dados Pessoais: a função e os limites do consentimento. Rio
de Janeiro: Forense, 2019.
121 ARTICLE 29 WORKING PARTY (A29WP). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. (WP251rev.01). Brussels, 2018. Available at: http://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053. Accessed 21 May 2018.
These definitions, as noted, are not abstract and purely technical, as they reflect and prioritize the ethical and legal features of the technology’s use.
Despite this attempt to highlight the ethically and legally relevant aspects, the legislation leaves out some of the more worrying ones. Antoinette Rouvroy points out that analyzing profiling solely as the processing of personal data to generate information about a specific person brings severe limitations. As the author claims:
122 ROUVROY, Antoinette. “Of Data and Men”. Fundamental Rights and Freedoms in a World of Big Data. Council of Europe, Directorate General of Human Rights and Rule of Law, vol. T-PD-BUR(2015)09REV, 2016, p. 33.
123 MANTELERO, Alessandro. Personal Data for Decisional Purposes in the Age of Analytics: From an individual to a collective dimension of data protection. Computer Law & Security Review, v. 32, n. 2, p. 238-255, 2016.
124 HILDEBRANDT, Mireille. Defining Profiling: A New Type of Knowledge? In: HIL-
DEBRANDT, M.; GUTWIRTH, S. (Eds.) Profiling the European Citizen: Cross-Disciplinary
Perspectives. Cham/SWI: Springer Science, 2008, p. 17-44.
125 About machine learning, see SEAVER, Nick. Knowing Algorithms. In: VERTESI,
Antoinette Rouvroy points to a difference between individual profiling, which follows a traditional categorization logic, and clustering. She claims that, in the traditional logic, common features of a group are identified and subsumed under pre-existing categories126. That is to say, the categories exist as significant social phenomena, like ethnic, religious, or national groups. Placing individuals in these categories allows them to see themselves as belonging, in a way that lets them create relationships of interdependence or solidarity. Differently, in clustering or group profiling the goal is to use data processing to create or discover new categories that do not yet exist, “which are imperceptible (because they emerge only as the process unfolds), and most often without any possibility of [the subject] being aware of what is happening or recognizing themselves127”.
Clustered groups, then, do not display the social relations found in groups formed under traditional categorization; clustering creates new, sometimes imperceptible, groups that can be vulnerable to bias or discrimination, making it harder for individuals even to know that their rights are being violated. Mittelstadt identifies these groups constituted by means of clustering as ad hoc groups, since they are formed through a specific and volatile grouping process with a specific goal. The data subject might not know such a group exists, even though he or she is part of it and could suffer its effects. Under a collective protection
Janet; RIBES, David (Eds.) digitalSTS: A Field Guide for Science & Technology Studies. Princeton & Oxford: Princeton University Press, p. 412-422. Available at: https://digitalsts.net/wp-content/uploads/2019/11/26_digitalSTS_Knowing-Algorithms.pdf. Accessed 20 January 2020; and VEALE, Michael. Governing Machine Learning that Matters. 2019. Doctoral thesis (Ph.D), 352 p., UCL (University College London). Available at: https://discovery.ucl.ac.uk/id/eprint/10078626/. Accessed 18 October 2019.
126 ROUVROY, Antoinette. “Of Data and Men”. Fundamental Rights and Freedoms in a World of Big Data. Council of Europe, Directorate General of Human Rights and Rule of Law, vol. T-PD-BUR(2015)09REV, 2016, p. 1-37.
127 ROUVROY, Antoinette. “Of Data and Men”. Fundamental Rights and Freedoms in a World of Big Data. Council of Europe, Directorate General of Human Rights and Rule of Law, vol. T-PD-BUR(2015)09REV, 2016, p. 28.
perspective, as will be discussed below, there are doubts concerning how the interests of these ad hoc groups would be represented128.
Thus, the constructed profile is not an exact representation of the person, but an attempt to predict their behavior for a specific goal, made from a massive aggregation of data. Wachter claims that “what matters is whether the user behaves similarly enough to the assumed group to be treated as a member of the group129”.
Hildebrandt cites a hypothetical example of a group of people who are left-handed and have blue eyes, traits that correlate with the likelihood of developing a specific disease130. From this, a group of people with those characteristics emerges, and the increased likelihood of developing the disease can be attributed to them. This reveals that, even without ever having consented to the collection or processing of personal data, subjects who cannot object may be indirectly affected by risk classifications based on shared attributes. Therefore, since the model is not built only from the personal data of a single subject, a limbo is created between the possibility of exercising individual rights to control a profile and the aggregated mass of data used to form that profile.
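To make the mechanics of this hypothetical concrete, the short Python sketch below (all figures invented for illustration, not drawn from any real study) shows how a rate observed in an aggregated dataset is projected onto anyone who shares the group’s attributes, including a person whose data was never collected.

# Minimal sketch of Hildebrandt's hypothetical: a correlation observed in an
# aggregated dataset is projected onto anyone who shares the group attributes,
# including people whose data was never collected. All data here is invented.

records = [
    # (left_handed, blue_eyes, developed_disease)
    (True,  True,  True),
    (True,  True,  True),
    (True,  True,  False),
    (False, True,  False),
    (False, False, False),
    (True,  False, False),
    (False, False, True),
    (False, True,  False),
]

group = [r for r in records if r[0] and r[1]]          # the "ad hoc" group
rate_group = sum(r[2] for r in group) / len(group)
rate_all = sum(r[2] for r in records) / len(records)

print(f"disease rate in left-handed, blue-eyed group: {rate_group:.0%}")
print(f"disease rate overall: {rate_all:.0%}")

# A new person who never consented to any collection is now scored by the
# profile simply because she shares the two attributes.
new_person = {"left_handed": True, "blue_eyes": True}
if new_person["left_handed"] and new_person["blue_eyes"]:
    print(f"risk attributed to new person: {rate_group:.0%}")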
128 MITTELSTADT, Brent. From Individual to Group Privacy in Big Data Analytics,
Philosophy & Technology, n. 04, 2017, p. 475–494.
129 WACHTER, Sandra. “Affinity Profiling and Discrimination by Association in Online
Behavioural Advertising”, 2019, p. 13. Available at: https://ssrn.com/abstract=3388639
Acessed May 25, 2019.
130 HILDEBRANDT, Mireille. Defining Profiling: A New Type of Knowledge? In: HIL-
DEBRANDT, M.; GUTWIRTH, S. (Eds.) Profiling the European Citizen: Cross-Disciplinary
Perspectives. Cham/SWI: Springer Science, 2008, p. 17-44.
consequences on a supra-individual scale. We will now show other threats posed by the technique, concerning its discriminatory effects131.
Initially, it is important to note that correlations established by machine-learning techniques132 cannot be reliably anticipated and, moreover, do not explain the reasons behind the correlations they discover133. According to Hildebrandt, the correlations uncovered by data mining and profiling processes do not establish the causes or reasons for their existence or perpetuation. The algorithm works by finding correlations between variables and, when used for automated decision-making, by determining the best course of action given the probability that these correlations will hold134.
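As a rough illustration of this point, the sketch below (with invented figures and a hypothetical postcode-based feature) builds a decision rule from nothing more than an observed correlation and a threshold; the rule says nothing about why the correlation exists, it simply bets that it will keep holding.

# Minimal sketch: a profiling rule is just an observed correlation plus a
# threshold; nothing in it explains *why* the correlation exists.
# All numbers are invented for illustration.

from statistics import mean

postcode_income_rank = [1, 2, 2, 3, 5, 6, 7, 8, 9, 9]   # invented feature
repaid_loan          = [0, 0, 1, 0, 1, 1, 1, 1, 1, 1]   # invented outcome

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

print(f"observed correlation: {pearson(postcode_income_rank, repaid_loan):.2f}")

# The automated decision simply bets that the correlation will keep holding:
def decide(applicant_rank, threshold=4):
    return "approve" if applicant_rank >= threshold else "deny"

print(decide(3))   # denied because of where the applicant lives,
print(decide(8))   # not because of anything causal about repayment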
This classificatory nature of profiling and mining technologies raises a concern associated with them: the correlation and division of groups based on data related to ethically and legally relevant characteristics, such as skin color, ethnic origin, place of birth, gender, sexual orientation, socioeconomic condition, health condition,
131 HILDEBRANDT, Mireille. Defining Profiling: A New Type of Knowledge? In: HIL-
DEBRANDT, M.; GUTWIRTH, S. (Eds.) Profiling the European Citizen: Cross-Disciplinary
Perspectives. Cham/SWI: Springer Science, 2008, p. 17-44.; GOODMAN, Bryce.; FLAX-
MAN, Seth. European Union Regulations on Algorithmic Decision-Making and a “Right
to Explanation”. AI Magazine, v. 38, n. 3, 2017, p. 50–57.; SCHERMER, Bart. Risks of
Profiling and the Limits of Data Protection Law. In: CUSTERS, Bart.; CALDERS, Toon.;
SCHERMER, Bart.; ZARSKY, Tal. (Eds.) Discrimination and Privacy in the Information
Society: Data Mining and Profiling in Large Databases. Berlin: Springer-Verlag, 2013, p.
137-152
132 About machine learning, see SEAVER, Nick. Knowing Algorithms. In: VERTESI, Janet; RIBES, David (Eds.) digitalSTS: A Field Guide for Science & Technology Studies. Princeton & Oxford: Princeton University Press, p. 412-422. Available at: https://digitalsts.net/wp-content/uploads/2019/11/26_digitalSTS_Knowing-Algorithms.pdf. Accessed 20 January 2020; and VEALE, Michael. Governing Machine Learning that Matters. 2019. Doctoral thesis (Ph.D), 352 p., UCL (University College London). Available at: https://discovery.ucl.ac.uk/id/eprint/10078626/. Accessed 18 October 2019.
133 SCHERMER, Bart. The limits of privacy in automated profiling and data mining.
Computer law & security review. n 27, 2011, p. 45-52.
134 HILDEBRANDT, Mireille. Defining Profiling: A New Type of Knowledge? In: HIL-
DEBRANDT, M.; GUTWIRTH, S. (Eds.) Profiling the European Citizen: Cross-Disciplinary
Perspectives. Cham/SWI: Springer Science, 2008, p. 17-44.
religious, political or philosophical belief, among others. Profiling
based on these characteristics has enormous potential to deepen
existing discriminatory issues or even to create new discriminatory
practices.
A hasty solution to this problem might suggest that such data simply be excluded from processing, as the GDPR determines in Article 22(4), which prohibits automated decision-making based on special categories of data. However, this solution brings other problems. The first is the set of exceptions to the main rule, which allow the use of such sensitive data in two hypotheses: when the data subject gives his or her informed consent and when the processing is necessary for reasons of substantial public interest (Articles 9(2)(a) and (g)). This consent can be ineffective as a protective measure, and its centrality is a shortcoming of the norm. (For more on the inadequacy of consent as a protective measure, see topic 3.1).
Other inadequacies of a protection based on special categories of data stem from their definition in Article 9 of the GDPR, which includes, for example, data revealing ethnic or racial origin, political and religious opinions, biometric data, health data and sexual orientation, among others. However, other data that identify vulnerable groups, such as gender, financial income, place of residence and employment, are not included.
Another problem related to profiling and clustering is identified by Sandra Wachter, who points out the possibility of using proxy data. This kind of data does not directly link a data subject to a protected category (e.g. ethnicity), but only identifies an affinity of the subject with a particular group, allowing the data controller to try to escape the obligations attached to processing special categories of data. This was the case with the ad targeting allowed by Facebook, in which advertisers could exclude certain groups with “ethnic affinities” from receiving their ads135.
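The sketch below gives a deliberately simplified picture of this mechanism, using invented profiles and a hypothetical “affinity segment” field: the targeting rule never touches the protected attribute, yet excluding on the proxy reproduces the discriminatory outcome.

# Minimal sketch of proxy-based exclusion: the targeting rule never touches the
# protected attribute, yet excluding on the "affinity" proxy reproduces the
# discrimination. Profiles and attributes are entirely invented.

profiles = [
    {"id": 1, "ethnicity": "A", "affinity_segment": "A-affinity"},
    {"id": 2, "ethnicity": "A", "affinity_segment": "A-affinity"},
    {"id": 3, "ethnicity": "A", "affinity_segment": "general"},
    {"id": 4, "ethnicity": "B", "affinity_segment": "general"},
    {"id": 5, "ethnicity": "B", "affinity_segment": "general"},
    {"id": 6, "ethnicity": "B", "affinity_segment": "general"},
]

def show_housing_ad(profile):
    # The advertiser "only" excludes an affinity segment, not an ethnicity.
    return profile["affinity_segment"] != "A-affinity"

for group in ("A", "B"):
    members = [p for p in profiles if p["ethnicity"] == group]
    reached = sum(show_housing_ad(p) for p in members)
    print(f"group {group}: ad shown to {reached}/{len(members)}")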
135 ANGWIN, Julia; PARRIS JR., Terry. Facebook Lets Advertisers Exclude Users by Race. ProPublica, 2016.
This discriminatory effect can appear at any stage of the technique’s development, whether in the design of the algorithm, in the feeding and training of the AI, or in the actual use of the algorithm in a social environment where discriminatory practices are common, generating unwanted feedback136.
However, the actual existence of unfair discrimination that violates the legal order in a profiling process is not easy to verify. The technical difficulties are many, as are the disputes over what counts as a fair or unfair result137. Although theoretically possible, the practical evaluation of the results and decision-making processes of algorithms, especially those based on machine learning, can be extremely complex even for specialists in the technology in question138. Schermer also argues that removing sensitive data from the bases used in automated processing may mean giving up the means to verify, after processing, whether an algorithm has made a discriminatory decision139.
This practical impossibility of obtaining a detailed technical
evaluation of the processes limits the possibilities of evaluation, since
“we often are bound to assess only the (un)fairness of its treatments from
how it behaves with regard to actual individuals140”, which certainly leads,
sometimes, to incorrect evaluations.
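One consequence of this observation, sketched below with an invented decision log, is that even a crude outcome-based audit, here a simple disparate-impact ratio between two groups, presupposes that the protected attribute is still recorded somewhere; strip it from every record and this check becomes impossible.

# Minimal sketch of an outcome-based audit (the kind of check that becomes
# impossible if the protected attribute is stripped from every record).
# Decisions and attributes are invented.

decisions = [
    {"group": "X", "approved": True},  {"group": "X", "approved": True},
    {"group": "X", "approved": True},  {"group": "X", "approved": False},
    {"group": "Y", "approved": True},  {"group": "Y", "approved": False},
    {"group": "Y", "approved": False}, {"group": "Y", "approved": False},
]

def approval_rate(group):
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

rate_x, rate_y = approval_rate("X"), approval_rate("Y")
print(f"approval rate X: {rate_x:.0%}, Y: {rate_y:.0%}")
print(f"disparate-impact ratio (Y/X): {rate_y / rate_x:.2f}")  # values far below 1 warrant scrutiny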
143 HILDEBRANDT, Mireille. Defining Profiling: A New Type of Knowledge? In: HIL-
DEBRANDT, M.; GUTWIRTH, S. (Eds.) Profiling the European Citizen: Cross-Disciplinary
Perspectives. Cham/SWI: Springer Science, 2008, p. 17-44.
144 Translated by: BELLI, Luca; LORENZON, Laila; FERGUS, Luã. The Brazilian General
Data Protection Law (LGPD): Unofficial English Version. CyberBRICS Project at FGV Law
School. Available at: https://cyberbrics.info/brazilian-general-data-protection-law-lg-
pd-unofficial-english-version/. Acessed 25 January 2020.
The GDPR, in its turn, mentions automated decision-making several times throughout its text145, regulating it and the required safeguards in more detail in Article 22. Initially, this Article guarantees data subjects the right not to be subjected to decisions taken solely on the basis of automated processing. For now, it should be noted that this general prohibition also applies to profiling, even though profiling does not require fully automated processing, as mentioned above (2.1).
From this perspective, the Brazilian law is more restrictive of data subjects’ rights against automated decision-making, since the right to review applies only when the processing is fully automated.
Another point to note is that both the GDPR and the LGPD rely on the oversight and actions of data protection authorities, recognizing that it is insufficient to expect individuals alone to monitor and exercise their rights146. These authorities act not only as enforcement bodies but also play a role in establishing compliance and good-practice guides.
However, with regard to automated decision-making, it is still not entirely clear how the regulations will be enforced, whether by reinforcing the individual rights granted to data subjects or by demanding prior impact assessments and certifications from data controllers.
A promising take on the latter is proposed by Kaminski and
Malgieri, elaborating on how the GDPR demands an “algorithmic
impact assessment” and a multi-layered explanation approach147.
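To make the idea of layering more tangible, the sketch below outlines one possible record structure for such an approach; the field names and layers are our own assumptions for illustration, not terms taken from the GDPR or from Kaminski and Malgieri’s proposal.

# Sketch of a "multi-layered" explanation record, loosely inspired by the
# layered approach discussed above. Field names are assumptions, not terms
# taken from the GDPR or from Kaminski and Malgieri.

from dataclasses import dataclass, field

@dataclass
class LayeredExplanation:
    # Layer 1: general system functionality, disclosed ex ante to everyone.
    system_purpose: str
    main_input_categories: list
    # Layer 2: group-level information (categories used, known risk areas).
    groups_affected: list
    impact_assessment_ref: str
    # Layer 3: decision-specific information, disclosed ex post on request.
    individual_factors: dict = field(default_factory=dict)

example = LayeredExplanation(
    system_purpose="credit risk scoring",
    main_input_categories=["payment history", "income band", "postcode"],
    groups_affected=["first-time applicants", "residents of low-income areas"],
    impact_assessment_ref="DPIA-2020-007",
    individual_factors={"payment history": -0.4, "postcode": -0.2},
)
print(example.system_purpose, example.individual_factors)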
154 KUNER, Christopher et al. Editorial: Machine Learning with Personal Data: Is Data Protection Law Smart Enough to Meet the Challenge? International Data Privacy Law, Vol. 7, n. 1, 2017, p. 1-2.
155 KUNER, Christopher et al. Editorial: Machine Learning with Personal Data: Is Data Protection Law Smart Enough to Meet the Challenge? International Data Privacy Law, Vol. 7, n. 1, 2017, p. 2.
156 RUBINSTEIN, Ira. Big Data: The End of Privacy or a New Beginning?, International
Data Privacy Law, v. 3, n. 2, May 2013, p. 74–87.
157 SCHERMER, Bart. The limits of privacy in automated profiling and data mining.
Computer law & security review. n 27, 2011, p. 45-52.
argued that relying only on the consent of the subject can weaken his
legal protection.
In addition to this problem regarding the exceptions provided in Article 22(2), there are impasses concerning the construction and delimitation of the GDPR’s wording, in particular the vagueness of the expressions “based solely on automated processing” and “which produces legal effects concerning him or her or similarly significantly affects him or her”. The same problem, as we shall see, is present in the Brazilian General Data Protection Law, as Article 20 conditions the right to review on “decisions made only based on the automated processing of personal data that affects their [data subject] interest158”.
On this matter, A29WP adopted the position that human
participation in the decision-making process needs to be significant,
with authority and competence to influence the result so that it is
not considered fully automated. The “human in the loop” cannot just
continuously endorse the result presented by the algorithm159.
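A controller that takes this guidance seriously might, for instance, monitor whether its reviewers ever depart from the algorithm’s suggestion; the sketch below, with an invented review log and an arbitrary threshold, shows the kind of minimal signal such monitoring could produce.

# Minimal sketch of one way a controller could check whether human review is
# meaningful rather than a rubber stamp: log reviews and measure how often the
# reviewer actually departs from the algorithm's suggestion. Log and threshold
# are invented for illustration.

review_log = [
    {"algorithm_said": "deny",    "human_decided": "deny"},
    {"algorithm_said": "deny",    "human_decided": "deny"},
    {"algorithm_said": "approve", "human_decided": "approve"},
    {"algorithm_said": "deny",    "human_decided": "approve"},
    {"algorithm_said": "deny",    "human_decided": "deny"},
]

overrides = sum(r["algorithm_said"] != r["human_decided"] for r in review_log)
override_rate = overrides / len(review_log)
print(f"override rate: {override_rate:.0%}")

# A rate near zero over a long period is at least a signal that the "human in
# the loop" may only be endorsing the automated result.
if override_rate < 0.05:
    print("warning: review may not be meaningful involvement")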
On the other hand, restricting the scope of protection to decisions based only on automated processing can end up rendering the protection meaningless. Veale and Edwards point out that, among the automated decision systems used today, “few do so without what is often described as a ‘human in the loop’ - in other words, they act as decision support systems, rather than autonomously making decisions160”. Furthermore, Antoinette Rouvroy questions whether
158 Translated by: BELLI, Luca; LORENZON, Laila; FERGUS, Luã. The Brazilian General
Data Protection Law (LGPD): Unofficial English Version. CyberBRICS Project at FGV Law
School. Available at: https://cyberbrics.info/brazilian-general-data-protection-law-lg-
pd-unofficial-english-version/. Acessed 25 January 2020.
159 ARTICLE 29 WORKING PARTY (A29WP). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. (WP251rev.01). Brussels, 2018. Available at: http://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053. Accessed 21 May 2018.
160 VEALE, Michael; EDWARDS, Lilian. Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law & Security Review, n. 34, 2018, p. 400.
even in recommendation systems, where the final decision rests with a human competent to oppose the recommendation, the algorithmic system would not still exert a strong prescriptive force, since to disregard a recommendation the human operator would have to offer arguments as quantitatively measurable as the algorithmic predictions. In that case, all space for a personal conception of justice and fairness, or even for uncertainty, is eliminated in favor of risk-averse predictive measurement161.
Another problem pointed out in Article 22 of the GDPR
concerns the requirement that the automated decision has legal or
“significantly similar” effects. A29WP argues that this definition would
include any decision that “significantly influence the circumstances,
behavior or choices of the individuals concerned” or that generates
“exclusion or discrimination”162. It is important, at this point, to note that the word used is “influence” and not “cause”; Veale and Edwards indicate that this would even include situations where the behavior of the data subject is not directly caused by the decision but merely influenced by it, as when profiling changes the way the data subject’s options are arranged or generates differentiated prices, thereby influencing their choices163.
A third point of uncertainty regarding the scope of the right
provided for in Article 22 of the GDPR concerns the definition of the
persons to whom the significant effects relate. These must be of direct
concern to the individual claiming the right or - as we have previously
161 ROUVROY, Antoinette. “Of Data and Men”. Fundamental Rights and Freedoms in
a World of Big Data. Council of Europe, Directorate General of Human Rights and Rule of
Law. vol. T-PD-BUR(2015)09REV, 2016, p. 1-37, 2016.
162 ARTICLE 29 WORKING PARTY (A29WP). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. (WP251rev.01). Brussels, 2018. Available at: http://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053. Accessed 21 May 2018, p. 10.
163 VEALE, Michael.; EDWARDS, Lilian. Clarity, surprises, and further questions in
the Article 29 Working Party draft guidance on automated decision-making and pro-
filing. Computer law & security review. n. 34, 2018, p. 398-404.
highlighted regarding the possibility of collective effects of the profiling process - they can be effects that affect a community, even if they were not generated directly from the claimant’s data. An example may be instructive here:
164 VEALE, Michael.; EDWARDS, Lilian. Clarity, surprises, and further questions in
the Article 29 Working Party draft guidance on automated decision-making and pro-
filing. Computer law & security review. n. 34, 2018, p. 402.
165 SCHERMER, Bart. Risks of Profiling and the Limits of Data Protection Law. In:
CUSTERS, Bart.; CALDERS, Toon.; SCHERMER, Bart.; ZARSKY, Tal. (Eds.) Discrimina-
tion and Privacy in the Information Society: Data Mining and Profiling in Large Databases.
Berlin: Springer-Verlag, 2013, p. 137-152.
deal with problems generated on a collective scale, suggesting a multi-
stakeholder participation approach to risk assessment, supervised
by data protection authorities. This risk assessment, according to the
author, must be carried out by controllers who intend to work with
big data analysis before engaging in data processing activities166. Thus, restricting the application of Article 22 of the GDPR to isolated cases of individual harm can, as with the other problems mentioned, hinder the effectiveness of the provision.
It is necessary to mention the three safeguards listed in Article 22(3): the right to obtain human intervention in the decision-making process, the data subject’s right to express his or her point of view, and the right to challenge the decision. The list is not exhaustive, since the provision states that “at least” these three safeguards must be present in the cases covered by the exceptions of points (a) and (c) of paragraph 2 of the Article, which concern the conclusion of a contract and the granting of consent. It is therefore possible to say that the data subject cannot consent to, or negotiate, the waiver or exclusion of these three safeguards, as they result from a legal determination. These safeguards are also subject to criticism and questioning, as pointed out by Kuner et al.167 and Roig168.
Regarding the right to obtain human intervention in the
decision-making process, Kuner and others summarize the difficulties
that the person responsible for this intervention would have, arguing
that “it may not be feasible for a human to conduct a meaningful review of
a process that may have involved third-party data and algorithms (which
may contain trade secrets), pre-learned models, or inherently opaque
166 MANTELERO, Alessandro. Personal Data for Decisional Purposes in the Age of Analytics: From an individual to a collective dimension of data protection. Computer Law & Security Review, v. 32, n. 2, p. 238-255, 2016.
167 KUNER, Christopher et al. Editorial: Machine Learning with Personal Data: Is Data Protection Law Smart Enough to Meet the Challenge? International Data Privacy Law, Vol. 7, n. 1, 2017, p. 1-2.
168 ROIG, Antoni. Safeguards for the right not to be subject to a decision based solely
on automated processing (Article 22 GDPR). European Journal of Law and Technology.
V. 8, n. 3, 2017.
machine learning techniques169”. Furthermore, for this intervention to be meaningful, it may need to be carried out by a professional specialized in evaluating the statistical correlations developed by the algorithm170.
Finally, on the data subject’s right to express his views, Roig
again argues that it would be difficult to challenge an automatic
decision without a clear explanation of how the result was achieved.
He states that “to challenge such an automatic data-based decision, only
a multidisciplinary team with data analysts will be able to detect false
positives and discriminations 171”.
These many uncertainties and speculations about Article 22 and its effectiveness make clear that there are weaknesses in approaching the problem solely from the perspective of individual rights. This impression is especially strong when we consider that human participation tends to be reduced and to carry little weight against the automated process. What constitutes a legal or “significantly similar” effect is also a point of contention, as is the scope of the decisions that fall within Article 22.
Given these questions raised by the GDPR’s approach, we will now assess what novelties the Brazilian General Data Protection Law has brought in relation to the European regulation and which of them, in our view, deserve attention in further research on the regulation of data processing and automated decision-making.
169 KUNER, Christopher et al. Editorial: Machine Learning with Personal Data: Is Data Protection Law Smart Enough to Meet the Challenge? International Data Privacy Law, Vol. 7, n. 1, 2017, p. 2.
170 ROIG, Antoni. Safeguards for the right not to be subject to a decision based solely
on automated processing (Article 22 GDPR). European Journal of Law and Technology.
V. 8, n. 3, 2017.
171 ROIG, Antoni. Safeguards for the right not to be subject to a decision based solely
on automated processing (Article 22 GDPR). European Journal of Law and Technology.
V. 8, n. 3, 2017, p. 6.
2.2. Perspectives and Alternatives offered by the Brazilian
General Data Protection Law
In the final part of the text, starting from the initial evaluations of the legal protections against automated decisions in the GDPR and the inadequacy of a system geared to individual protection, we will examine in more detail how the Brazilian General Data Protection Law can represent an advance toward a collective protection system within the scope of automated decision-making. The Brazilian General Data Protection Law, as highlighted in the introduction, has a more principled and less detailed character than the GDPR. Thus, it will be necessary to analyze how its provisions will be interpreted by the courts and the Data Protection Authority in the coming years. Here we make some initial notes on the final text of the law and on how Brazilian authors have been interpreting it. The argument is not that the LGPD presents a more robust protection system than the GDPR, but that, owing to influences and particularities of other areas of Brazilian law, it brings some solutions that deserve attention.
First, it is important to emphasize that there is no general right, as in Article 22 of the GDPR, not to be subject to fully automated decisions, including profiling, which, despite the problems mentioned in the previous topic, gives the GDPR a stronger protective character. However, Article 20 of the LGPD establishes data subjects’ rights that apply specifically to automated processing.
172 Translated by: BELLI, Luca; LORENZON, Laila; FERGUS, Luã. The Brazilian General
Data Protection Law (LGPD): Unofficial English Version. CyberBRICS Project at FGV Law
School. Available at: https://cyberbrics.info/brazilian-general-data-protection-law-lg-
pd-unofficial-english-version/. Acessed 25 January 2020
aspects of their personality may cause them any damage, the data subject, or organizations dedicated to protecting the rights of vulnerable groups, as will be argued below, will be able to act in anticipation, before the damage occurs, and request a review of the profiling. This right was envisaged even more robustly in the original text of the LGPD, as a right to human review, until Law 13.853/19 changed some of the original provisions and, through a presidential veto, the review no longer requires the involvement of a human.
In addition, Article 20, Paragraph 1 establishes the data
subject’s right to information, according to which the controller must
“provide, upon request, clear and adequate information on the criteria and
procedures used for the automatized decision, observing the business and
industrial secrets173”. Can this right, as established, be understood as a
right to explanation? In order to take a position on this, it is necessary
to open a parenthesis to briefly seek to understand which arguments
stand out in the European debate.
Goodman and Flaxman were among the first to strongly support the existence of a right to explanation in the GDPR, though without exploring the claim in depth. The basis for their argument is the requirement, in GDPR Articles 13 and 14, of “meaningful information about the logic involved”, read as an additional safeguard to those established by Article 22 and applicable to profiling practices174.
In the opposite direction, Wachter, Mittelstadt and Floridi argue that no such right to explanation exists. The authors contend that although Article 22(3) of the GDPR provides safeguards for the data subject who is subjected to an automated decision, the right to explanation is not among them. The only express mention of the said “right to an explanation” is in Recital 71, which is not legally
173 Translated by: BELLI, Luca; LORENZON, Laila; FERGUS, Luã. The Brazilian General
Data Protection Law (LGPD): Unofficial English Version. CyberBRICS Project at FGV Law
School. Available at: https://cyberbrics.info/brazilian-general-data-protection-law-lg-
pd-unofficial-english-version/. Acessed 25 January 2020
174 GOODMAN, Bryce.; FLAXMAN, Seth. European Union Regulations on Algorithmic
Decision-Making and a “Right to Explanation”. AI Magazine, v. 38, n. 3, 2017, p. 50–57.
binding. Furthermore, the authors argue that Articles 13 and 14 establish only an ex ante duty to notify the subject about the “system functionality”. This, according to the authors, means that these provisions cannot be used to claim an ex post right to the explanation of a specific decision. However, Wachter and her co-authors admit that, within the limits of the right of access provided for in Article 15(1)(h), case law may come to establish a right to the explanation of specific decisions175.
A third path is taken by Selbst and Powles, who argue for a broader understanding of the right to explanation, not tied to the moment at which it can be exercised, that is, whether it may be demanded ex ante or ex post, and whether it must concern a specific decision or the decision-making system as a whole. The authors argue that, if actually guaranteed, even a right to explanation focused only on the logic involved would allow the subject to infer how that logic applies to a specific decision. Therefore, for Selbst and Powles, the central concern in realizing a right to explanation is whether it guarantees the data subject the means to understand the logic of the automated decision system to which they were submitted, and thereby to exercise their rights176.
Given these arguments, we contend that Article 20, Paragraph 1 of the LGPD contains the theoretical and legal bases for a right to explanation. In the same sense as argued by Selbst and Powles, it is not necessary for the law to establish rigid procedures and parameters for the fulfillment of this norm, provided that data subjects, by exercising this right, actually have access to and can understand the logic involved in the decision, thus enabling the exercise of other rights - either the data subjects’ rights provided for by
175 WACHTER, Sandra; MITTELSTADT, Brent; FLORIDI, Luciano. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, International Data Privacy Law, v. 7, n. 2, May 2017, p. 76–99.
176 SELBST, Andrew D.; POWLES, Julia. Meaningful Information and the Right to Explanation. International Data Privacy Law. Oxford: Oxford University Press, v. 07, n. 04, Nov. 2017, p. 233-242.
the LGPD itself or broader fundamental rights brought by the entire
legal system.
Renato Leite Monteiro further maintains that the principle
of transparency and the consumer protection microsystem in credit
relations177 already created a right to explanation in this context. The
Brazilian General Data Protection Law, then, for the author, reinforces
and expands this right to any type of data processing activity178.
However, the same debate over an ex post explanation of a specific decision versus an ex ante explanation of the system’s general functioning is possible under Brazilian law. The law also safeguards commercial and industrial secrets, but it does not define their limits, which must be assessed on a case-by-case basis.
Article 20, Paragraph 2 contains a provision of great importance for enforcement, one that cannot be neglected and that goes beyond the simple safeguard of obtaining human intervention in the decision-making process. It provides that, if information is denied on the grounds of commercial and industrial secrecy, the Brazilian National Data Protection Authority may intervene to verify, through an audit, the presence of discriminatory aspects in the automated decision-making process. Such a possibility may give companies a good reason to provide the necessary information, but it is difficult to see how it could be carried out, given both the lack of government expertise and the difficulties created by the decentralization and enormous international power of the internet giants when they are the targeted controllers.
Another relevant point brought by the LGPD can be observed
with the combination of Article 20, Paragraph 1 and Article 12,
177 The Brazilian Consumer Defense Code (Law 8.078/1990) and the Positive Registration Law (Law 12.414/2011) provide safeguards and protections to consumers in credit relations.
178 MONTEIRO, Renato Leite. Existe um direito à explicação na Lei Geral de Proteção
de Dados Pessoais?, Instituto Igarapé, Artigo Estratégico nº 39, Dezembro de 2018.
Paragraph 2. The latter establishes that anonymized data (which, as a rule, are not considered personal data) will be considered personal if used “for formation of the behavioral profile of a given natural person, if identified179”. This Article may still be the subject of controversy, because it conditions its application on a very specific situation that is difficult to verify, since a behavioral profile does not necessarily need to identify a person for their interests to be affected180.
For this reason, Bruno Bioni argues that the identification of
a given natural person concerns not the identification of them in a
database in an abstract way, but their identification as a person who
suffered the consequences of that data processing activity. Thus,
according to the author, Brazilian law takes an approach in which
“the focus is not on the data, but its use – for the formation of behavioral
profiles – and its consequent repercussion in the individual’s sphere181”. For
this same reason, these anonymized data used for the formation of
the behavioral profile should be considered as personal data by the
controller when explaining an automated decision, further expanding
the obligations that fall on the rights provided for in Article 20.
So far it has been argued that protections against violations
caused by automated decisions become stronger by incorporating
a collective character. However, it should also be noted that some
individual protections, notably the right to explanation, can fulfill
another important role. The request for an explanation of the decision and algorithmic accountability are important not only to prevent
179 Translated by: BELLI, Luca; LORENZON, Laila; FERGUS, Luã. The Brazilian General
Data Protection Law (LGPD): Unofficial English Version. CyberBRICS Project at FGV Law
School. Available at: https://cyberbrics.info/brazilian-general-data-protection-law-lg-
pd-unofficial-english-version/. Acessed 25 January 2020
180 ROUVROY, Antoinette. “Of Data and Men”. Fundamental Rights and Freedoms in
a World of Big Data. Council of Europe, Directorate General of Human Rights and Rule of
Law. vol. T-PD-BUR(2015)09REV, 2016, p. 1-37, 2016.
181 BIONI, Bruno. Proteção de Dados Pessoais: a função e os limites do consentimento. Rio de Janeiro: Forense, 2019, p. 80. Translated by the authors from the original: “o foco não está no dado, mas no seu uso – para a formação de perfis comportamentais – e sua consequente repercussão na esfera do indivíduo”.
discrimination and errors. What is perhaps more important in these cases is that, when an explanation for the decision is sought, the rules governing that decision-making process become explicit: the variables considered, the objective of the categorization performed and why a certain decision-making process is regarded as legitimate, opening the possibility of questioning the parameters adopted and, in a broader sense, the possibility of critique182.
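As a purely illustrative sketch of what such explicitness could look like, the snippet below uses an invented linear scoring rule and invented weights to print the variables a decision relied on and how much each contributed; real systems are rarely this transparent, which is precisely the difficulty discussed above.

# Illustrative sketch only: an invented linear scoring rule whose parameters
# are laid bare, showing the kind of information an explanation can make
# explicit (variables considered and their contributions).

weights = {"payment_delays": -1.5, "income_band": 0.8, "postcode_risk": -0.6}
applicant = {"payment_delays": 2, "income_band": 3, "postcode_risk": 1}
bias = 1.0

contributions = {k: weights[k] * applicant[k] for k in weights}
score = bias + sum(contributions.values())
decision = "approve" if score >= 0 else "deny"

print(f"decision: {decision} (score {score:.1f})")
for variable, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {variable:>15}: {value:+.1f}")
# Making these parameters visible is what opens them to contestation:
# why is postcode considered at all, and why with that weight?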
However, as another important legal tool to combat the
possible shortcomings of the exercise of individual rights for problems
on a collective scale, as demonstrated at the beginning of this work,
the text of Article 22 of the LGPD is promising:
182 Antoinette Rouvroy defines critique as “a practice that suspends judgment and an
opportunity to practice new values, precisely on the basis of that suspension. In this
perspective, critique targets the construction of a field of occlusive categories them-
selves rather than on the subsumption of a particular case under a pre-constituted
category”. The author argues that data-mining and profiling practices makes critique
difficult. For further development of the argument, see ROUVROY, Antoinette. The
end(s) of critique: data behaviourism versus due process. In: HILDEBRANDT, Mireille;
DE VRIES, Katja (eds.). Privacy, Due Process and the Computational Turn: the philosophy
of law meets the philosophy of technology. New York: Routledge, 2013, p. 143-167.
183 Translated by: BELLI, Luca; LORENZON, Laila; FERGUS, Luã. The Brazilian General
Data Protection Law (LGPD): Unofficial English Version. CyberBRICS Project at FGV Law
School. Available at: https://cyberbrics.info/brazilian-general-data-protection-law-lg-
pd-unofficial-english-version/. Acessed 25 January 2020
and reducing the power asymmetry that exists between large data-
controlling corporations and their consumers.
This possibility of collective action by interested parties, together with the possibility of preventive action before any right has actually been harmed, as discussed in the comments on Article 20, Paragraph 1, would be a far more interesting way of dealing, for example, with the case of group discrimination mentioned in the previous section, concerning the advert targeted at those with “black-sounding” first names suggesting that the aid of a criminal defense lawyer might be needed. Under the GDPR, those who have not had their data processed, or who merely feel threatened with harm to their rights, would have to argue for an implied collective protection in order to act in defense of their interests; under the LGPD, any civil society organization that legitimately represents the interests of the harmed group under Brazilian law, or even a group of subjects who feel collectively harmed, could act preventively and collectively so that the interests of the group are respected as such, avoiding this biased and discriminatory targeting.
Another point on which the LGPD can be contrasted with the GDPR concerns the ability of individuals, especially consumers, to prove the discriminatory potential of the processing applied to their data or even to prove concrete damage suffered. Antoinette Rouvroy, in a study for the Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data - an advisory committee of Convention 108 of the Council of Europe - argues that reversing the burden of proof in cases of suspected discrimination generated, even indirectly, by automated data processing in the decision-making process would be an important measure to guarantee the fundamental rights and guarantees of data subjects. Thus, the author suggests that it is the data controller who should prove that the automated processing did not generate discriminatory effects. We argue here that the Brazilian General Data Protection Law allows this reversal of the burden of proof, at least in legal proceedings, under Article 42, Paragraph 2, on terms similar to those defended in the study mentioned above:
184 Translated by: BELLI, Luca; LORENZON, Laila; FERGUS, Luã. The Brazilian General
Data Protection Law (LGPD): Unofficial English Version. CyberBRICS Project at FGV Law
School. Available at: https://cyberbrics.info/brazilian-general-data-protection-law-lg-
pd-unofficial-english-version/. Acessed 25 January 2020.
Finally, the principles brought by the LGPD must be
emphasized. As mentioned before, the LGPD has a much more
principled character than the GDPR. However, this does not
necessarily mean less protection for data subjects, but rather that
more interpretative and regulatory work is needed to understand and
define the obligations that fall on processing activities.
Brazilian law lists, among its foundations, respect for privacy, informational self-determination, the free development of personality and human rights185. Article 6 provides ten general principles, among which the principles of transparency186 and non-discrimination187 stand out. The latter, unlike the European regulation, is expressly provided for in Brazilian law.
Therefore, when employing automated decision-making and profiling techniques, the controller must take steps to ensure that all of these principles are respected. We contend that there are, accordingly, prior obligations: to ensure that the techniques employed are not discriminatory, to ensure that the data subject can be informed of and understand the nature of the processing carried out on them, as well as
185 Article 2: The regulation of personal data protection is grounded on: I.– respect
for privacy; II. – informational self-determination; VII. – human rights, free develop-
ment of personality, dignity and exercise of citizenship by the individuals. Translated
by: BELLI, Luca; LORENZON, Laila; FERGUS, Luã. The Brazilian General Data Protec-
tion Law (LGPD): Unofficial English Version. CyberBRICS Project at FGV Law School.
Available at: https://cyberbrics.info/brazilian-general-data-protection-law-lgpd-unof-
ficial-english-version/. Acessed 25 January 2020.
186 Article 6 VI: transparency: guarantee, to the data subjects, of clear, accurate and
easily accessible information on the processing and the respective processing agents,
subject to business and industrial secrets. Translated by: BELLI, Luca; LORENZON,
Laila; FERGUS, Luã. The Brazilian General Data Protection Law (LGPD): Unofficial En-
glish Version. CyberBRICS Project at FGV Law School. Available at: https://cyberbrics.
info/brazilian-general-data-protection-law-lgpd-unofficial-english-version/. Acessed
25 January 2020.
187 Art 6º IX: non-discrimination: impossibility of processing data for discriminatory,
unlawful or abusive purposes. Translated by: BELLI, Luca; LORENZON, Laila; FER-
GUS, Luã. The Brazilian General Data Protection Law (LGPD): Unofficial English Version.
CyberBRICS Project at FGV Law School. Available at: https://cyberbrics.info/brazil-
ian-general-data-protection-law-lgpd-unofficial-english-version/. Acessed 25 January
2020.
to enable them to influence that processing, whether by correcting erroneous information or by complementing information deemed insufficient.
In the same vein, Rafael Zanatta argues that Article 20 of the LGPD creates a dialogical obligation between the controller and the data subject:
Conclusion
References
Introduction
filters what should fall inside or outside the scope of a personal data protection law, demarcating the terrain it is to occupy. Subtle differences in its definition entail drastic consequences for the reach of that protection. BIONI, Bruno. Xeque-Mate: o tripé de proteção de dados pessoais no xadrez das iniciativas legislativas no Brasil. Grupo de Estudos em Políticas Públicas em Acesso à Informação da USP – GPOPAI, São Paulo, 2015, p. 17. Available at: https://www.academia.edu/28752561/Xeque-Mate_o_trip%C3%A9_de_prote%C3%A7%C3%A3o_de_dados_pessoais_no_xadrez_das_iniciativas_legislativas_no_Brasil Access: 30 June 2019.
195 Article 8 of the Charter of Fundamental Rights of the European Union: “Everyone has the right to the protection of personal data concerning him or her. 2. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified. 3. Compliance with these rules shall be subject to control by an independent authority”. See also Regulation (EU) 2016/679 of the European parliament and of the council of 27 April 2016. Available at: <https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?URI=CELEX:32016R0679&from=PT> Access: 30 June 2019.
196 “[...] This equation does not even come close to exhausting the complex set of problems surrounding this relationship, but it may serve as a starting point to illustrate how the protection of personal information came to find shelter in our legal order
Still on the subject of data protection in Brazil, before the publication of the BGDPL there were only scattered laws providing for data protection in specific situations197. The medical sector, for example, already had its own rules on personal data protection, as did the financial sector198.
In addition to the status of data protection as a fundamental right in the European Union and other countries, and hence its legal importance, data protection has acquired a very significant economic dimension. The current conditions of automated data processing have allowed companies to extract extraordinary financial profits, whether through profiling, and thus the targeting of commercial strategies, or through the creation of new technologies with disruptive potential.
The term “profiling” has the following definition in the GDPR:
199 Article 4(4) of the GDPR. Regulation (EU) 2016/679 of the European parliament and of the council of 27 April 2016. Available at: <https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?URI=CELEX:32016R0679&from=PT> Access: 30 June 2019.
200 Article 29 Working Party was an advisory body made up of representatives of the data protection authority of each EU Member State. It ceased to exist on the date the GDPR entered into force.
201 ARTICLE 29 WORKING PARTY (A29WP). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. (WP251rev.01). Brussels, 2018, p. 7.
202 “Data capitalism is a system in which the commoditization of our data enables an asymmetric redistribution of power that is weighted toward the actors who have access and the capability to make sense of information. It is enacted through capitalism and justified by the association of networked technologies with the political and social benefits of online community, drawing upon narratives that foreground the social and political benefits of networked technologies”. FLYVERBOM, M.; DEIBERT, R.; MATTEN, D. Data Capitalism: Redefining the Logics of Surveillance and Privacy. Business & Society, 2017. Available at: <http://journals.sagepub.com/doi/abs/10.1177/0007650317718185?journalCode=basa> Access: 30 June 2019.
203 DONEDA, Danilo. A proteção dos dados pessoais como um direito fundamental.
Espaço Jurídico, Joaçaba, v. 12, n. 2, p. 91-108, jul./dez. 2011.p. 92.
On one hand, one may observe distrust of the indiscriminate use of data by systems fully capable of carrying out its processing on a large scale. On the other hand, in some legal systems data protection stands as a fundamental right and must therefore be the object of broad legal protection. Combined, these factors make it evident that in some countries there is an urgent need for specific legislation on data protection. Hence the importance of the GDPR.
Nevertheless, data processing also plays a decisive role in the development of new technologies. This is because, as already mentioned, artificial intelligence systems consume data not only for profiling but also for problem-solving and decision-making, culminating in important applications that bring large benefits to society.
Ultimately, data processing also has the potential to bring benefits to society in terms of convenience and development. It is from this point of view that, in the next topic, the GDPR is analyzed under a magnifying lens, to verify whether its provisions may curb technological advances as a side effect of protecting the data subject.
204 According to Mayer-Schönberger and Cukier: “One way to think about the issue
today—and the way we do in the book—is this: big data refers to things one can do at a
large scale that cannot be done at a smaller one, to extract new insights or create new
forms of value, in ways that change markets, organizations, the relationship between
citizens and governments, and more”. MAYER-SCHÖNBERGER, Viktor; CUKIER, Ken-
neth. Big Data. 2. ed. Boston/New York: Eamon Dolan/Houghton Mifflin Harcourt,
2014.
205 DONEDA, Danilo. A proteção dos dados pessoais como um direito fundamental.
Espaço Jurídico, Joaçaba, v. 12, n. 2, p. 91-108, jul./dez. 2011, p. 92.
206 “The processing of personal data, in particular by automated means, is, however, a risky activity. A risk that materializes in the possibility of exposure and improper or abusive use of personal data, in the event that such data are inaccurate and misrepresent their subject, or in their use by third parties without the subject’s knowledge, to mention only a few real scenarios. Hence the need to establish mechanisms that enable the person to have knowledge of and control over his or her own data - which are, at bottom, a direct expression of his or her own personality. For this reason, the protection of personal data is regarded in several legal systems as an essential instrument
Secondly, the GDPR is notable for the heavy fines it imposes on those who break its rules. The administrative fines can reach up to EUR 20,000,000 or, in the case of an undertaking, up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher207.
In addition to the above, it is important to bear in mind that the GDPR applies exclusively to the data of natural persons, excluding legal entities from its protection. This option reflects the European culture of protecting citizens’ fundamental rights, as stated in the first “whereas” (recital) of the regulation208.
Other important aspects of the GDPR relate to its geographical scope and to the potential for worldwide replication of its guidelines in local policies. The first aspect concerns the fact that the Regulation applies to any controller (or its processor) responsible for processing the data of a natural person residing in the European Union. This means that any individual, legal entity, government body or agency209 in the world dealing with the data of persons residing in the European Union - not necessarily citizens of the European Union - is subject to the GDPR.
210 Article 45(1) of the GDPR: “A transfer of personal data to a third country or an in-
ternational organisation may take place where the Commission has decided that the
third country, a territory or one or more specified sectors within that third country,
or the international organisation in question ensures an adequate level of protection.
Such a transfer shall not require any specific authorisation”. Regulation (EU) 2016/679
of the European parliament and of the council of 27 April 2016. Available at: <https://
eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=PT>
Access: 30 jun 2019.
211 However, GDPR is not exempt from criticism as to its efficiency in protecting the
data subject rights.
5. Right To Object To An Automated Individual Decision
Making
212 Article 22 of the GDPR. Regulation (EU) 2016/679 of the European parliament and of the council of 27 April 2016. Available at: <https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=PT> Access: 30 jun 2019.
213 For a thorough reading on “the right of a human in the loop” Meg Leta Jones is
recommended: JONES, Meg Leta. The right to a human in the loop: Political construc-
tions of computer automation and personhood. 2 Ed. Sage Publications, Vol. 47, 2017,
p 216-239.
214 “The controller cannot avoid the Article 22 provisions by fabricating human in-
volvement. For example, if someone routinely applies automatically generated pro-
files to individuals without any actual influence on the result, this would still be a
decision based solely on automated processing. To qualify as human involvement, the
controller must ensure that any oversight of the decision is meaningful, rather than
just a token gesture. It should be carried out by someone who has the authority and
What can be perceived from this rule is that it makes data processing more costly and potentially less efficient. The burden arises from the fact that keeping staff with the authority to review the decisions made by the algorithms entails human resources expenditure. Moreover, the requirement seems contradictory: automation aims precisely at optimizing the allocation of human and financial resources and of time.
Regarding efficiency, it is emphasized that, although it is
often impossible to evaluate the criteria considered by the system
for decision making, these decisions are usually more accurate and
objective than human decisions215. This is because, among other
factors, artificial intelligence systems are not affected by exclusively
personal conditions, such as mood swings. Thus, the judgment of the
system tends to be less biased and more accurate than a human’s.
Considering all of the foregoing, it is concluded that, besides running counter to the rationale of more objective decisions, the right to object to automated individual decision-making places a burden on data controllers, given the need to allocate capable staff to fully review automated decisions. It may therefore discourage investment in artificial intelligence, since such systems become more expensive and less useful.
competence to change the decision. As part of the analysis, they should consider all
the relevant data”. ARTICLE 29 WORKING PARTY (A29WP). Guidelines on Automat-
ed individual decision-making and Profiling for the purposes of Regulation 2016/679.
(WP251rev.01). Bruxelas, 2018. p. 21.
215 “[Andrew] McAfee reviews years of studies of algorithms vs. human judgment by
various experts and concludes that we should not rely on experts anymore: ‘The prac-
tical conclusion is that we should turn many of our decisions, predictions, diagno-
ses, and judgments—both the trivial and the consequential— over to the algorithms.
There’s just no controversy any more about whether doing so will give us better re-
sults.’” https://www.forbes.com/sites/gilpress/2014/01/31/big-data-debates-machines-
vs-humans/#e292a903d040. Access: 30 jun 2019.
6. Right To Explanation Of The Logic Involved In Automatic Personal Data Processing
216 Articles 13(2)(f), 14(2)(g) and 15(1)(h) of the GDPR. Regulation (EU) 2016/679 of
the European parliament and of the council of 27 April 2016. Available at: <https://
eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=PT>
Access: 30 jun 2019.
217 “The growth and complexity of machine-learning can make it challenging to un-
derstand how an automated decision-making process or profiling works”. ARTICLE
29 WORKING PARTY (A29WP). Guidelines on Automated individual decision-making
and Profiling for the purposes of Regulation 2016/679. (WP251rev.01). Bruxelas, 2018.
p. 25
being impossible to see how it got the answer due to the opacity of the
box.
This was the theme of the paper entitled “European Union Regulations on Algorithmic Decision-making and the Right to Explanation”, published in 2016 by Goodman and Flaxman. In this publication, the authors argued that it is impossible to interpret the decisions obtained through machine learning algorithms:
218 “Goodman and Flaxman observed that the algorithms of past decades tended to
rely on explicit, rules- based logic for processing information—an architecture that
typically made explaining the system’s underlying decision-making relatively straight-
forward. But, crucially, the scholars noted that many of the most powerful contem-
porary algorithms instead relied on ‘models exhibiting implicit, rather than explicit,
logic usually not optimised for human-understanding’—thereby rendering the logic
underlying their decision-making an uninterpretable ‘black box’”. CASEY, B. FAR-
HANGI, A.; VOGL, R., Rethinking Explainable Machines: The GDPR’s ‘Right to Expla-
nation’ Debate and the Rise of Algorithmic Audits in Enterprise. Berkeley Technology Law Journal, 2018, p. 19. Available at: <https://ssrn.com/abstract=3143325> Access: 30
jun 2019.
should be prepared to meet the expectations and requirements of the
supervision:
219 CASEY, B.; FARHANGI, A.; VOGL, R. Rethinking Explainable Machines: The GDPR’s ‘Right to Explanation’ Debate and the Rise of Algorithmic Audits in Enterprise. Berkeley Technology Law Journal, 2018. p. 39. Available at: https://ssrn.com/abstract=3143325. Access: 30 jun 2019.
are no longer necessary for legitimate purposes. In addition, the GDPR provides, unlike Brazilian law, for the right to de-index data from a particular search engine (the right to be forgotten).
From the computational point of view, it is possible to raise
some questions about the effectiveness and problems of the right-to-
erasure provisions related to the functioning of artificial intelligence
systems.
According to Villaronga, Kieseberg and Li, treating human memory and algorithmic memory as if they were equivalent reflects ignorance of how artificial intelligence works. Consequently, these authors state that the European regulation failed to reflect algorithmic reality in its provisions on the right to be forgotten220.
It is believed that the erasure of data from a particular processing system may produce unwanted effects on the algorithms. The basis for this statement is that some algorithms revisit data that has been in the system for years for their learning and decision-making. Thus, erasing data from the system, especially on a large scale, may lead to inaccurate and unwanted results from the artificial intelligence program.
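Purely by way of illustration, the following minimal sketch (in Python, assuming the scikit-learn library is available; the synthetic dataset and the share of erased records are hypothetical) shows the mechanism described above: when a model is retrained after a large share of the records it learned from has been erased, its accuracy can change and will often degrade.

    # Minimal sketch (assumptions: scikit-learn installed; synthetic data and an
    # 80% erasure rate chosen only for illustration). It retrains a simple model
    # after simulated large-scale erasure and compares the resulting accuracy.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Model trained on the full data set (before any erasure requests).
    full_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Simulate large-scale erasure: roughly 80% of the training records are deleted.
    rng = np.random.default_rng(0)
    keep = rng.random(len(X_train)) > 0.8
    reduced_model = LogisticRegression(max_iter=1000).fit(X_train[keep], y_train[keep])

    print("accuracy before erasure:", accuracy_score(y_test, full_model.predict(X_test)))
    print("accuracy after erasure: ", accuracy_score(y_test, reduced_model.predict(X_test)))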
In 2016, the article The Right to Be Forgotten: Towards Machine Learning on Perturbed Knowledge Bases221 was published as a result of research carried out by SBA Research222 and the Holzinger Group –
220 “Our current law appears to treat human and machine memory alike – supporting
a fictitious understanding of memory and forgetting that does not comport with reali-
ty”. VILLARONGA, Eduard Fosch; KIESEBERG, Peter; LI, Tiffany. Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten. Computer Law & Security Review, 2017. Available at <https://ssrn.com/abstract=3018186>. Ac-
cess: 30 jun 2019
221 KIESEBERG, P., MALLE, B., FRUHWIRT, P., WEIPPL, E., HOLZINGER, A. The
Right to Be Forgotten: Towards Machine Learning on Perturbed Knowledge Bases. F.
Buccafurri et al. (Eds.): CD- ARES 2016, LNCS 9817, pp. 251–266. 2016.
222 More information on SBA Research can be found at: https://www.sba-research.
org/about/. Access: 30 jun 2019.
HCI-KDD223. These researchers performed experiments on data erasure from a system, in order to verify the effects of this elimination on the system’s operation. The authors state:
226 Article 5 (1) of the GDPR. Regulation (EU) 2016/679 of the European parliament
and of the council of 27 April 2016. Available at: <https://eur-lex.europa.eu/legal-con-
tent/EN/TXT/PDF/?uri=CELEX:32016R0679&from=PT> Access: 30 jun 2019.
227 Article 5 (1)(b) of the GDPR: “Personal data shall be […] collected for specified,
explicit and legitimate purposes and not further processed in a manner that is incom-
patible with those purposes; further processing for archiving purposes in the public
interest, scientific or historical research purposes or statistical purposes shall, in ac-
cordance with Article 89(1), not be considered to be incompatible with the initial pur-
poses (‘purpose limitation’);”. Regulation (EU) 2016/679 of the European parliament
What happens is that data controllers do not always know at the outset the purpose of the data mining, and this does not necessarily imply harm to the data subject. According to Bioni, if Big Data systems are understood as technologies that allow the same database to be reused for different purposes, then their use would be incompatible with a normative dynamic centered on specific consent.228
Likewise, the Information Commissioner’s Office (ICO), the UK’s independent authority for upholding data and information rights, has stated in the policy paper titled “Big Data, Artificial Intelligence, Machine Learning and Data Protection” that the peculiarity of artificial intelligence systems is that they do not analyze data linearly, as they were originally programmed to do. Instead, intelligent systems learn from new data inputs, respond independently and adapt their outputs according to what they learn229.
In other words, it may be difficult for data controllers to
establish, at the outset, the purpose of data processing, as well as
challenging for them to program the system so it only delivers outputs
that take into consideration the original goals of the data processing.
Given the difficulty of guaranteeing full compliance with the requirement set forth in Article 5 (1)(b) of the GDPR, and considering the heavy fines imposed by this regulation in case of non-compliance with its rules, one must take into account the discouragement that these provisions may generate for investment in new technologies.
230 How Europe’s ‘breakthrough’ privacy law takes on Facebook and Google. Avail-
able at: https://www.theguardian.com/technology/2018/apr/19/gdpr-facebook-goo-
gle-amazon-data-privacy-regulation. Access: 30 jun 2019.
the answer was that, for companies that do not have the same characteristics as Facebook and Google, adapting to the GDPR would be much more costly.
This review mentioned a survey by PricewaterhouseCoopers which indicates that 68% (sixty-eight percent) of US companies expected to spend between $1 million and $10 million to comply with the new regulation. These figures indicate that, in addition to the fines, adjusting the business to the GDPR could already be too costly for smaller companies.
Thus, it is believed that small companies may be unable to meet the costs either of complying with the GDPR or of paying fines in case of non-compliance with the regulation. In this way, the stipulation of these fines may culminate in a technology market even more restricted to a limited group of companies.
Conclusion
231 For some statistics on the investments and growth of AI the following is recom-
mended: McKinsey’s State Of Machine Learning And AI. Available at: https://www.
forbes.com/sites/louiscolumbus/2017/07/09/mckinseys-state-of-machine-learning-
and-ai-2017/#35bbd4d975b6. Access: 30 jun 2019.
technologies is generating euphoria among people, especially those technologies that display disruptive potential.
On the other hand, personal data, which in various legal systems deserves ample protection, has become an increasingly valuable commodity, and its processing has thus become the target of the largest companies in the world. In this scenario, the feeling of loss of control over data and of insecurity about the purposes of its use is intensifying.
Given this feeling shared among people, the call to protect personal data has grown louder. The publication of the GDPR was a starting point for meeting this need, followed by the rise of other legislative initiatives on data protection around the world.
Knowing of the data misuse scandals and being aware of the vulnerability of the data subject, setting aside protectionist discourse to take a look at innovation is not an easy task. It is, however, a necessary one. Thus, this article set out to select some GDPR provisions and observe them under a magnifying glass, to verify whether their rules have the potential to curtail investment in artificial intelligence systems, reflecting negatively on technological development.
Analyzing the GDPR articles on the (i) right to object to automated individual decision-making; (ii) right to an explanation of the logic involved in automatic personal data processing; (iii) right to erasure and right to be forgotten; (iv) data minimization principle; (v) obligation to name a specific purpose for the data processing; and (vi) fines, it has been recognized that there are at least indications that the GDPR can have negative impacts on the growing investment in artificial intelligence, which could even entail competitive losses for the European Union.
This is due, in particular, to the high costs that companies will incur, either to comply with GDPR rules or to pay fines for non-compliance with the regulation. Furthermore, in technical terms, compliance with certain GDPR principles - such as the specific purposes of data processing and data minimization - can harm artificial intelligence systems or make them less useful and accurate. Moreover, it is possible that a limited group of companies with booming revenues, such as the technology giants (Google, Facebook, Samsung, among others), can adapt to the GDPR without a significant impact on their operations. For emerging artificial intelligence companies, however, the GDPR’s strict rules can mean an irremediable blow to their progress, representing entry barriers to new companies in the technology market.
Although evidence has been found that the GDPR may affect the development of new technologies, making an absolute statement about the regulation only approximately one year after its entry into force would be excessively bold for this article. What can rightly be stated is that it is difficult to strike a balance between data protection and artificial intelligence functionalities.
In light of the above, the application of the GDPR and of other regulations modeled on it will face the challenge of keeping a healthy dialogue between protection and innovation. It is therefore to be hoped that the side effects of data subject protection do not jeopardize the growing artificial intelligence market, which has been providing great positive developments for society.
References
Abstract
Considering that collaboration contracts are long-term contracts between parties that strategically allocate the identified risks, they are naturally incomplete pacts. The authors therefore intend to analyze how artificial intelligence can help reduce incompleteness in these contracts, in view of their characteristics. The issue will first be analyzed in the pre-contractual phase, with emphasis on the causes of incompleteness. Thereafter, some impacts during contract performance will be examined, in order to provide solutions to problems arising from incompleteness. Finally, the authors’ conclusions and their impressions about the future of the theme will be presented.
Keywords
Artificial intelligence. Incomplete contracts. Commercial
contracts. Collaboration contracts.
Introduction
232 Also called mercantile contracts, as per the nomenclature of Title IV of the 1850 Commercial Code, or business contracts. In the present work, all these expressions will refer to the same institute: contracts entered into between business parties. From the legislative point of view, the recent Economic Freedom Act (Law No. 13.874/2019) inserted art. 421-A into the Civil Code of 2002 and expressly mentioned “civil and commercial contracts”, which leads to the conclusion that the two have different legal natures.
233 The doctrine is not uniform as to the definition of commercial contract. Paula Forgioni, for example, argues that it is necessary that “[...] the link be established exclusively between companies” (FORGIONI, Paula A. Contratos Empresariais: teoria geral e aplicação. 2. ed. rev., atual. e ampl. São Paulo: Editora Revista dos Tribunais, 2016, p. 28). Taking a different view, Haroldo Verçosa understands that a commercial contract may be formed as long as one of the parties is a business and the other is not a consumer (VERÇOSA, Haroldo Malheiros Duclerc. Contratos Mercantis e a Teoria Geral dos Contratos: o Código Civil de 2002 e a Crise do Contrato. São Paulo: Quartier Latin, 2010, pp. 24-25).
234 In the Commercial Code of 1850, contracts for commercial mandate, commercial
commission, commercial purchase and sale, barter or exchange, commercial lease,
commercial loan, commercial surety, mortgage and commercial pledge and commer-
cial deposit (Titles VI to XIV) were typified. In the special legislation, we highlight the
Commercial Representation Law (Law No. 4,886 / 1965), the Commercial Concession
Law between producers and distributors of motor vehicles by land (Law No. 6,729 /
the peculiar characteristics of each species, without devoting itself to the development of a general theory of commercial contracts.235
The legislation, in turn, cannot keep up with the dynamics of business life. In fact, it is impossible to issue a specific law regulating each type of contract arising from new activities, given that certain contracts are hybrids and combine characteristics of several types of business. In addition, innovative ventures must first - and in their great majority - submit themselves to the approval of the market itself and only then, if they survive, conform to the law.
Recently, however, legal scholarship has begun to look at the particularities of commercial contracts and to develop a general theory that, based on their logic and economic operation, establishes the basis for the interpretation and integration of this type of agreement, which must be treated in light of its economic function and of the context (market) in which it is inserted, respecting the distribution of risks agreed upon by the contracting parties.236
The treatment of general issues is still incipient, especially in relation to commercial contracts of a hybrid nature, which lie on a spectrum between the classic exchange contracts (at one end) and the partnership contracts (at the opposite end), sometimes presenting characteristics of one, sometimes of the other.237
On the one hand, the economic agent’s need to adapt and survive in the market made transactions of mere exchange and immediate performance insufficient for long-lasting relationships. That is because it was found that entering into successive contracts of
1979), the Business Franchise Law (Law No. 8,955 / 1994) among others.
235 For all, the analysis of the classics of Fran Martins and Waldirio Bulgarelli demon-
strates the greater dedication to the special part of the commercial contracts than to
the general part. Cf. MARTINS, Fran. Contratos e obrigações comerciais. 6. ed. Rio de
Janeiro: Forense, 1981, e BULGARELLI, Waldirio. Contratos Mercantis. 5. ed. São Paulo:
Altas, 1990.
236 In this sense, see Paula Forgioni’s work Teoria geral dos contratos empresariais. São
Paulo: Editora Revista dos Tribunais, 2010.
237 WILLIAMSON, Oliver E. The Mechanisms of Governance. Oxford: Oxford University
Press, 1996.
this nature, separately, would not satisfy the interests of the parties. On the other hand, despite the need to establish a long-term relationship between the parties, such a relationship could not take on the hierarchy and rigidity typical of company contracts, which would deprive the contracting parties of the patrimonial autonomy to contract with third parties at their own risk.238
In this context, collaboration agreements have arisen, in which the parties do not necessarily have opposing interests - such that an increase in the economic advantage of one party would lead to a decrease in the benefit of the other, as occurs in exchange agreements - nor do they share the elements of the company contract under Article 981 of the Brazilian Civil Code, especially the assumption of the business risk by all or some of the partners. They represent long-term relationships and, consequently, cooperation between the parties, in which immediate opportunistic behaviors tend to give way to planned strategic actions aimed at greater future benefits. The concept adopted here is that of Article 456 of Senate Bill No. 487/2013, which amends the Commercial Code of 1850, in verbis: “In business collaboration contracts, one party (collaborator) assumes the obligation to create, consolidate or expand the market for the product manufactured or marketed or for the service provided by the other party (supplier)”. The types contemplated by the Brazilian legal system include, for example, distribution, commercial representation, concession, and franchising.239
238 This is the reasoning put forward by Paula Forgioni, who adds: “the closer the
hybrid contract is to that of the exchange, the greater the degree of independence
of the parties and the lesser the collaboration between them. As we move gradually
towards societies, the greater the degree of stability of the bond and collaboration.”
(FORGIONI, Paula A., op. cit., pp. 174-175).
239 These are some of the collaboration contracts thus classified by Senate Bill 487/2013, which amends the 1850 Commercial Code and is pending before the Federal Senate. The Bill also provides a definition of collaboration contracts in its art. 456, in verbis: “Art. 456. In business collaboration contracts, an entrepreneur (collaborator) assumes the obligation to create, consolidate or expand the market for the product manufactured or marketed or for the service provided by the other entrepreneur
Collaboration contracts are also categorized as hybrid due to their peculiar characteristic of economic interdependence between the parties, which derives from specific investments240, while the contracting parties maintain their patrimonial, legal, managerial and administrative autonomy, as well as the differences between their activities and the risks taken in the business.
In addition, collaboration agreements are mostly of long duration, often concluded for an indefinite term. In fact, the contracting parties, supported by cooperation, do not establish only rules for exchange, but also rules that define the relationship between the parties. In the contractual instrument, therefore, “[...] the foundations are laid for future collaborative behavior, rather than the specific order of determined obligations”.241
As they are performed over time, cooperation contracts are, in essence, incomplete.242 In truth, incompleteness is a characteristic to which any long-term contract is subject, given the impossibility of foreseeing all future situations and of allocating all the risks to which the parties will be exposed from the moment the bond is formed. And
(supplier).” (BRASIL. Senado Federal. Projeto de Lei nº 487/2013, que altera o Código
Comercial. Available at: http://www.senado.leg.br/atividade/rotinas/materia/getPDF.
asp?t=141614&tp=1. Access on: 02.27.2016).
240 From an economic perspective, the interdependence between the parties is said
to arise from specific (or idiosyncratic) investments because “[...] it is a consequence
of the specificity of the assets involved in a transaction, since the interruption of a re-
lationship implies costs to those who have invested in such assets” (FARINA, Elizabeth
Maria Mercier Querid; AZEVEDO, Paulo Furquim de; SAES, Maria Sylvia Macchione.
Competitividade, mercado, Estado e organizações. São Paulo: Singular, 1997, p. 82).
241 FORGIONI, Paula A.. Contrato de distribuição. São Paulo: Editora Revista dos Tri-
bunais, 2005, p. 71.
242 “Given their time-delayed characteristic, collaboration contracts usually do not
provide discipline for all the problems that may be experienced by the parties during
their execution. This is because at the time of the conclusion of the pact, it is im-
possible to foresee all situations and to hold all information relating not only to the
negotiation, but to the counterpart and the market conjectures.” (BEZERRA, Andréia
Cristina; PARENTONI, Leonardo Netto. A reconsideração da personalidade jurídica
nos contratos mercantis de colaboração. Revista de Direito Mercantil, Industrial, Econô-
mico e Financeiro, São Paulo, ano L, n. 158, p. 189-210, April/June. 2011, p. 197).
even if that were possible, there would be too many contingencies to foresee and then describe contractually, increasing the respective costs.
This characteristic led economists to focus, from the 1970s onwards, on studies of contractual incompleteness, looking at its causes and at possible solutions for achieving greater efficiency. The theory of incomplete contracts starts from the premises already established by Ronald Coase’s theory of the firm and transaction costs243 and found earlier support in the works of Oliver Williamson, who analyzed the ex post inefficiencies created by bargaining between the parties and the incentives for the ex ante realization of specific investments in the relationship to be entered into.244
It is, however, to Oliver Hart and his co-authors that the foundations of the theory of incomplete contracts are attributed245; incompleteness is seen as a consequence of the high transaction costs involved in specifying the precise actions that each party should take in every conceivable eventuality.246
Ian Ayres and Robert Gertner note that, for the first incompleteness theorists, the parties leave gaps because the costs of foreseeing and drafting all the contract terms outweigh the benefits envisioned at the outset. However, Ayres and Gertner articulate a second cause for the omissions in contracts, concerning information asymmetry: when one party has more knowledge of the business
243 Cf. COASE, Ronald H. The nature of the firm. Economica, New Series, v. 4, n. 16,
pp. 386-405, Nov., 1937.
244 Cf. WILLIAMSON, Oliver E. The Vertical Integration of Production: Market Failure
Considerations. American Economic Review Papers and Proceedings, n. 61, pp. 112-123,
1971; WILLIAMSON, Oliver E. Markets and Hierarchies. Nova Iorque: The Free Press,
1975; WILLIAMSON, Oliver E. Transaction-Cost Economics: The Governance of Con-
tractual Relations. Journal of Law and Economics, n. 22, p. 233-271, 1979.
245 This is the conclusion of the literature review on the work of Oliver Hart and Bengt Holmström prepared on the occasion of their 2016 Nobel Prize in Economic Sciences (THE ROYAL SWEDISH ACADEMY OF SCIENCES. Oliver Hart and Bengt Holmström: Contract Theory: Scientific Background on the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2016. Stockholm, 2016, p. 17).
246 HART, Oliver; MOORE, John. Incomplete contracts and renegotiation. Working pa-
per department of economics, Massachusetts Institute of Technology, n. 367, p. 1-44, Jan.,
1985, p. 1.
than the other, the first party may decide not to disclose it (motivated by a lack of time, by a lack of means to make it available, or even because disclosure brings it no benefit). The party may also deliberately choose to leave certain circumstances out of the contract, for instance to avoid the imposition of penalties in certain future situations which it knows would cause damage247. The other party, in turn, may accept the risk of entering into an incomplete contract because the transaction costs of obtaining the information in the pre-contractual phase are high, leaving any issues that arise to be resolved during performance.
According to Paula Bandeira, when contractual incompleteness is examined from the perspective of the economic analysis of law, an incomplete contract does not regulate the effects that possible contingencies, if they materialize, could immediately generate in the business, which allows an “[...] opening of the contractual regulation, which, due to changes in the economic environment, would be submitted to the subsequent definition of missing elements”248. Once a supervening fact not foreseen by the parties occurs, the drive to renegotiate may emerge strongly, which might give rise to opportunism.
And how can opportunistic behaviors be restrained in the face of (and arising from) the incompleteness just described? The author points out that one of the economic functions of contract law is to prevent opportunism by the parties249. Just as the level of omissions will depend on the risks and costs involved in
247 AYRES, Ian; GERTNER, Robert. Filling gaps in incomplete contracts: an econom-
ic theory of default rules. The Yale Law Journal, v. 99:87, n. 1545, p. 87-130, 1989, p.
127. Available at: http://digitalcommons.law.yale.edu/fss_papers/1545. Access on:
09.07.2017.
248 BANDEIRA, Paula Greco. O Contrato Incompleto e a Análise Econômica do Di-
reito. Revista Quaestio Iuris, Rio de Janeiro, vol. 08, n. 04, 2015, pp. 2696-2718, p. 2705.
249 Ibidem, p. 2703.
making the contract more complete, it will also be influenced by the
rules of interpretation that will apply to it.
Paula Forgioni, in defense of developing a doctrine that erects the autonomous legal discipline of commercial contracts, in addition to contributing vectors and limits for the interpretation of this category of legal arrangements250, reports some solutions for filling the gaps in incomplete contracts251. Among them, the author lists: i) resorting to usages and practices252 and to bona fide; ii) the adaptation of the agreement by the parties, through hardship and renegotiation clauses covering events not contemplated in the contract253; and iii) the attribution of decision-making power to third parties with the technical competence to complete the agreement, i.e. the Judiciary or arbitration.
250 FORGIONI, Paula A. Teoria geral dos contratos empresariais. São Paulo: Editora Revista dos Tribunais, 2010. For the operating vectors of commercial contracts, see Chapter II, pp. 55-150. For interpretation, see Chapter IV, pp. 215-246.
251 The author points out that, whereas in interpretation the text unfolds its own meaning, in integration, in the absence of an express provision on the treatment to be given to a supervening fact, the interpreter may complement the agreement. (FORGIONI, Paula A. Contratos empresariais: teoria geral e aplicação. 2. ed. rev., atual. e ampl. São Paulo: Editora Revista dos Tribunais, 2016, pp. 268-280).
252 Referring to revoked Article 133 of the 1850 Commercial Code, which provided:
“Art. 133 - If the clauses necessary for its execution are omitted in the wording of the
contract, it must be assumed that the parties have subjected themselves to what is
of use and practice in such cases among traders in the place where the contract is performed.”
253 Common in international contracts, the hardship clauses authorize the parties
to request changes in the face of supervening events that disturb the balance of the
contract, as extracted from art. 6.2.3 of UNIDROIT Principles: “In case of hardship
the disadvantaged party is entitled to request renegotiations” (UNIDROIT. UNIDROIT
Principles of International Contracts. Unidroit: Roma, 2010. Available at: http://www.unidroit.org/english/principles/contracts/principles2010/integralversionprinciples2010-e.pdf. Access on: 09.08.2017). It may even be stipulated that the burdened company suspend performance of the obligation until the impasse is resolved: “[...] nowadays it seems to be undisputed that, wherever the right to claim performance would undermine the obligor’s exemption, performance cannot be demanded as long as the impediment exists.” (SCHWENZER, Ingeborg. Force Majeure and Hardship in International Sales Contracts. Victoria University of Wellington Law Review, v. 39, p. 709-725, 2008, p. 720).
What stands out, regarding collaboration contracts, is the solution of including in the contractual instrument clauses that oblige the parties to negotiate, or even to adapt the contract, should events occur that alter the balance of the business. This is because, although collaboration contracts are premised on solidarity between the parties, they are still commercial contracts, guided by their own principles, which presuppose diligence prior to the agreement in order to identify and allocate future risks.
In studying the duty of cooperation in long-term contracts, Giuliana Schunk concludes that, in situations of contractual incompleteness, the parties are practically obliged to renegotiate some terms owing to contingencies and subsequent situations of which they were unaware or which they did not foresee in the instrument.254
Anderson Schreiber strongly defends the need to recognize
a duty to renegotiate unbalanced contracts in Brazilian law, as a
constitutional expression of the value of social solidarity and the
resulting infra-constitutional rules, such as the general clause of
objective bona fide.
In his view, considering the attached duties generated by the
general clause of objective bona fide and the imposition of a standard
of conduct on both contractors of reciprocal cooperation to achieve
the practical result that justifies the contract entered into, the duty to
renegotiate is derived from it - even if there is no express clause - and
there is no need for a specific rule in Brazilian law establishing the
duty to renegotiate:
254 SCHUNK, Giuliana Bonanno. Contratos de longo prazo e dever de cooperação. Thesis
(Doctorate in Civil Law) – University of São Paulo’s Law School, São Paulo, 2013, p. 49.
contract, constitute behavioral duties which, although
instrumentalized to recover the contractual balance,
derive, strictly speaking, from the need for the parties
to cooperate with each other to achieve the contractual
scope. Thus, it must be concluded that the recognition
of the duty to renegotiate, between us [Brazilians], finds
normative basis in the general clause of objective bona
fide, more specifically in Article 422 of the Civil Code. 255
257 “It is difficult to accept the reliance which is here placed upon notions of ‘good
faith’ to support the existence of such a bargaining duty. The fundamental difficulty
which is produced lies in seeing how a party can be in bad faith on the ground that
she has refused to give up the rights which she enjoys under the contract. Of course,
where the disadvantage which has been produced by the unforeseen event is ex-
treme, then the contract may be held to be frustrated and, in such case, the court will
be called upon to identify the rights of the parties under the discharge of the contract.
But the situation is altogether different where the contract remains on foot, but one
party is alleged to be in bad faith because she has refused to give up her contractu-
al right to demand that the contract be performed according to its original terms.”
(MCKENDRICK, Ewan. The regulation of long-term contracts in English Law. In:
BEATSON, Jack; FRIEDMANN, Daniel (Coord.). Good faith and fault in contract law. Ox-
ford: Clarendon Press, 2001, p. 315). In favor of mandatory renegotiation clauses, see:
SPEIDEL, Richard E. Court-imposed price adjustments under long-term contracts. In:
Northwestern University Law Review, 1981, 369, p. 404.
258 Fernando Noronha teaches that contractual justice is “[...] the equivalence relation
that is established in the exchange relations, in such a way that neither party gives
more or less of the value it received.” (NORONHA, Fernando. O direito dos contratos e
seus princípios fundamentais: autonomia privada, boa-fé, justiça contratual. São Paulo:
Saraiva, 1994, p. 214).
259 FRAZÃO, Ana. Os contratos híbridos. São Paulo: Associação dos Advogados de São Paulo, 18 May 2017. Anais eletrônicos do 7º Congresso Brasileiro de Direito Comercial. Available at: http://www.congressodireitocomercial.org.br/site/anais-eletronicos. Access on: 08.07.2017.
controversies, reducing them? This is what we are trying to outline in
the following paragraphs.
260 COASE, Ronald H. The nature of the firm. Economica, New Series, v. 4, n. 16, p.
386-405, Nov., 1937, p. 392.
enforceable contracts, which, from this perspective, are complete”.261 The ex post costs, in turn, are those related to renegotiation to adjust the business relationship to events, to the cost of settling disputes and to the cost of ensuring compliance with obligations; owing to their specificities, they will be examined in a separate section.
When the costs of foreseeing and drafting all the contractual specifics exceed the benefits the parties expect at the outset, transaction costs are high, causing incompleteness in the pact to be executed. Possible solutions to reduce these costs would be specific investments in information and due-diligence efforts capable of making the parties aware of the greatest possible number of situations that may occur in the long-term relationship, so that they can stipulate clauses in this regard in advance.
It is in reducing the transaction costs associated with contractual incompleteness that AI will impact this form of agreement, granting the parties greater predictability in specifying future circumstances and the consequences they may generate. In this context, Avery Katz focuses on proposals by which the parties, with the help of their lawyers, can work to reduce gaps, such as ex ante investments (made before the contract is signed) to reduce the cost of subsequent supplementation, for example further studies and analysis of the business conditions to avoid gaps.262
In fact, one cannot deal with the issue at hand without
addressing the impacts that AI has had on legal activities, especially
those developed by lawyers, such as the drafting of contracts. Some
authors even discuss and call this phenomenon the “uberization” of
the legal industry,263 foreseeing drastic changes to the future of lawyers.
264 Companies and start-ups that mix technology with the legal market and offer alternatives to eminently human contract drafting, in a faster, easier and more efficient way that avoids failures such as the lack or excess of certain clauses in the contract. Examples are Contractually, Clausehound, LegalZoom, and Dragon Law. Among them, Clausehound offers Playbook, in which the party provides the information, such as what is essential in the contract, fills out some forms to indicate its will, and the site delivers the prepared contract (https://www.clausehound.com/playbooks/). This plan is free, and the risks of adopting certain types of clauses are communicated to the parties through notices on the site, legal articles and comments from users to guide their decisions. Other, more complex plans involve advice from lawyers during the process, or even the revision of the contract by a lawyer (Available at: https://www.clausehound.com/signup. Access on: 07.20.2018).
265 NG, Irene. The Art of Contract Drafting in the Age of Artificial Intelligence: A Com-
parative Study Based on US, UK and Austrian Law. Stanford-Vienna TTLF Working Paper,
n. 26, 2017, p. 5/6.
266 Ibidem, p. 9.
267 Ibidem, p. 17.
For the present study, both categories outlined are of interest, and no distinction will be made between them. After all, all these situations involve, to a greater or lesser extent, the use of AI. The author recalls that the ability to draw up contracts is a skill that takes time to improve. Considering that the learning time of AI is much shorter than the time it takes a human being to master the same task, the machine’s performance in the drafting of contracts may be hors concours.
On the subject, Dana Remus states that the likely path of legal AI will be shaped by two propositions: (i) for the machine to automate a lawyer’s task, it will be necessary to model this professional’s information processing as a set of instructions, which will only work for structured tasks that follow a pattern and can be covered by machine learning; and (ii) AI models that use machine learning will have difficulty processing contingencies that differ significantly from the data on which they were trained, which reveals difficulties mainly in predicting situations that have never before occurred in the party’s history.268
The first proposition can be illustrated by the example of an AI that is fed inputs from various contract templates and, by processing the sentence blocks, paragraphs and clauses of these legal instruments, can arrange those that best fit the purpose for which it is programmed, forming a new contract. When the standard is simple, with simple clauses, so will the AI’s activity be. Another example is already offered by the start-up Clausehound, which provides a service that points out gaps in the contractual draft sent by the counterparty, based on comparisons with the database of contracts with which the party has fed the AI, setting standards for identifying missing or excess clauses269. This situation reduces contractual incompleteness
268 REMUS, Dana; LEVY, Frank S., Can Robots Be Lawyers? Computers, Lawyers, and
the Practice of Law. November, 2016. Available at: https://ssrn.com/abstract=2701092.
Access on: 07.20.2017, p. 48.
269 NG, Irene. The Art of Contract Drafting in the Age of Artificial Intelligence: A Com-
from a static point of view, i.e. regarding the clauses that the parties’ practice has already shown to be necessary, but it does not contribute much to reducing incompleteness from a dynamic point of view, i.e. by examining what future situations may arise from the input standards and which human beings have not yet been able to foresee.
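As a purely illustrative sketch of the static gap-detection idea just described (the clause headings and the small reference corpus below are hypothetical, and real tools such as Clausehound rely on proprietary databases and far richer text analysis), a system fed with a party’s past contracts can flag clauses that practice has shown to be usual but that are absent from the counterparty’s draft:

    # Minimal sketch of clause-gap detection (hypothetical clause headings and corpus).
    from collections import Counter

    # Clause headings extracted from contracts the party has previously fed the system.
    reference_corpus = [
        {"term", "territory", "exclusivity", "confidentiality", "termination", "hardship"},
        {"term", "territory", "confidentiality", "termination", "hardship", "audit"},
        {"term", "exclusivity", "confidentiality", "termination", "hardship"},
    ]

    # Clause headings found in the draft received from the counterparty.
    draft = {"term", "territory", "confidentiality", "termination"}

    # A heading is treated as "expected" if it appears in at least half of the corpus.
    counts = Counter(h for contract in reference_corpus for h in contract)
    expected = {h for h, c in counts.items() if c / len(reference_corpus) >= 0.5}

    missing = expected - draft   # clauses practice shows to be usual but absent from the draft
    unusual = draft - expected   # clauses present in the draft but rare in past practice

    print("possibly missing clauses:", sorted(missing))
    print("unusual clauses:", sorted(unusual))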
The second proposition gains emphasis in more complex situations. By analyzing the main activities of lawyers and measuring how the progress of AI may affect them, Dana Remus rates as moderate the impact that AI can have on the drafting of contracts and other documents. In more complex situations, alongside contractual drafting, human legal advice will be required, an activity that, according to the author, will be little affected by AI.270
In denser factual settings, such as business collaboration contracts, the database to be absorbed and digested by the AI is larger. Consequently, the contracting party demands more than a mere statistical forecast: it requires its lawyer to understand its objectives, interests, and all the meta-legal aspects involved, aspects that are essential when drafting a contract.
Thus, no matter how much AI partially reduces transaction
costs in commercial contracts, it will not eliminate them, since, in
this complex type of relationship, other factors must be considered to
reduce incompleteness and the consequences generated by it.
From another angle, one may also ask whether the development of AI in contract design - with data analysis and prediction of the matters that the agreement will need to regulate in the pre-contractual phase - might not also increase transaction costs, by redirecting them towards the search for the best solution offered by AI. In other words, instead of reducing the parties’ costs, AI-based tools would
parative Study Based on US, UK and Austrian Law. Stanford-Vienna TTLF Working Paper,
n. 26, 2017, p. 18.
270 REMUS, Dana; LEVY, Frank S. Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law. November, 2016. Available at: https://ssrn.com/abstract=2701092. Access on: 07.20.2017, p. 22.
be so beneficial that there would be a race to find the machine that provides the greatest strategic advantages in analyzing data and in proposing solutions to potentially contentious issues that may arise in the future.
A lower cost with lawyers may result in an additional cost with AI, and the latter may even be higher depending on the complexity of the commercial collaboration contract and the amount of data to be analyzed. At this point, Kevin Kelly’s provocation stands out: it is not necessarily AI that will grow exponentially, but its inputs, i.e., the supply of data. This is why the acceleration of technology will give rise to an extra-human, not a super-human, with abilities beyond human possibility271, such as the analysis, processing and learning of collected data (machine learning) in order to offer concrete solutions within timeframes unfeasible for a human being.
271 KELLY, Kevin - The Myth of a Superhuman AI. Wired, April, 2017. Available at: https://
www.wired.com/2017/04/the-myth-of-a-superhuman-ai/. Access on: 07.20.2018.
not being able to predict all future contingencies,272 to the lack of time
or conditions to make them available, and even the opportunism of
the parties to strategically hide them because disclosure does not
bring them benefits.273
In order to circumvent bounded rationality and thus reduce information asymmetry, it is important that the AI prediction system have a sufficiently high level of accuracy to overcome human limitations and be more efficient. To this end, the AI system needs to be programmed with the ability to collect data and to learn from the environment in which it is inserted, so that its results can be better evaluated.
Here, a basic principle of statistical prediction applies: predictions tend to be more accurate when the sample on which they are based is larger. Similarly, in the case of AI, the greater the amount of data collected, the greater the accuracy of the result offered by the system.274
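The point can be illustrated with a short simulation using only the Python standard library (the “true” breach rate of 30% is a hypothetical figure): the average error of an estimate based on a sample shrinks as the sample grows, roughly in proportion to one over the square root of the sample size.

    # Minimal sketch: larger samples yield more accurate estimates (hypothetical rate).
    import random
    import statistics

    random.seed(0)
    TRUE_RATE = 0.30  # hypothetical share of counterparties that breach the contract

    def estimation_error(sample_size, trials=2000):
        """Average absolute error of the estimated rate for a given sample size."""
        errors = []
        for _ in range(trials):
            sample = [random.random() < TRUE_RATE for _ in range(sample_size)]
            estimate = sum(sample) / sample_size
            errors.append(abs(estimate - TRUE_RATE))
        return statistics.mean(errors)

    for n in (50, 500, 5000):
        print(f"sample size {n:>5}: average estimation error {estimation_error(n):.4f}")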
In the case of information asymmetry that arises from the difficulty one of the parties has in studying all the information and documentation provided for analysis, an AI system that surpasses the human capacity to digest and refine this database in time, proposing results and warning about the existing risks, would be very welcome. This would put an end to the strategy of burying the counterparty in
272 In Oliver Williamson’s words, bounded rationality “[r]efers to behavior that is intendedly rational, but only limitedly so; it is a condition of limited cognitive competence to receive, store, retrieve, and process information. All complex contracts are unavoidably incomplete because of bounds on rationality.” (WILLIAMSON, Oliver E. The Mechanisms of Governance. Oxford: Oxford University Press, 1996, p. 377).
273 AYRES, Ian; GERTNER, Robert. Filling gaps in incomplete contracts: an econom-
ic theory of default rules. The Yale Law Journal, v. 99:87, n. 1545, p. 87-130, 1989, p.
127. Available at: http://digitalcommons.law.yale.edu/fss_papers/1545. Access on:
09.07.2017.
274 NG, Irene. The Art of Contract Drafting in the Age of Artificial Intelligence: A Com-
parative Study Based on US, UK and Austrian Law. Stanford-Vienna TTLF Working Paper,
n. 26, 2017, p. 22.
many documents in the pre-contractual phase, knowing that there will not be time to refine all the information through human effort alone.
The accuracy of the results provided by AI would also depend on the purpose for which it was programmed. Thus, in addition to data collection and processing, the AI system would need to have, as its clear scope, the identification of the parties’ standards regarding the object of the contract, which would later be critically examined in the drafting of the clauses - by the lawyers or by the machine itself, which is more complex, as seen above.
In terms of commercial contracts, it should be remembered that the parties, because they carry on business activity, are aware (even if limitedly, as shown) of the risks and must take all the necessary precautions and diligence to safeguard their interests. Thus, AI’s assistance in reducing the distances caused by information asymmetry would be immense. However, it is still essential to be guided by the main scope of the commercial contracts under consideration here: collaboration. Thus, even before there are AI systems optimized enough to examine the information provided by the counterparty and so reduce the asymmetry in this respect, it is of paramount importance that the parties actually provide the information.
In the context of collaborative contracts, considering the
purpose for which they are entered into, reflected in the very name
of such arrangements, it is reasonable to require transparency and
cooperation from the parties from the pre-contractual stage, which
can legally be inferred from the principle of objective bona fide.275
On the other hand, it should not be forgotten that the cut-off point
of this analysis is commercial contracts, where strategic behavior is a
characteristic of the business and cannot be demonized, as it is part
of the game.
275 SCHUNK, Giuliana Bonanno. Contratos de longo prazo e dever de cooperação. Thesis
(Doctorate in Civil Law) –University of São Paulo’s Law School, São Paulo, 2013, p. 49.
In fact, the problem persists when information is not made available: in circumstances where the parties do not avail themselves of AI, a conflict arises between the duty of transparency and the possibility of strategic and opportunistic action by the party holding such information.
In this case, AI’s contribution would come through the examination of indirect data, obtained from the context and the environment, which is sometimes very difficult to achieve. An example would be the collection of data from lawsuits in which the contracting party has appeared with other parties on the same or similar subjects. Examination of this data may allow the AI system to infer patterns and predict results, as well as to inform its level of accuracy. Given the results, it would ultimately be up to the entrepreneur to decide whether or not to contract.
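A minimal sketch of this idea follows (assuming the scikit-learn library; the lawsuit features, outcomes and the profile of the new deal are entirely hypothetical): a model trained on data about past disputes involving the prospective counterparty can output both a predicted probability and an estimate of its own accuracy, leaving the final decision to the entrepreneur.

    # Minimal sketch (assumptions: scikit-learn installed; all figures hypothetical).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Each row: [claim value in BRL thousands, duration in months, 1 if same subject matter]
    past_lawsuits = np.array([
        [120, 18, 1], [300, 36, 1], [50, 10, 0], [500, 48, 1],
        [80, 12, 0], [250, 30, 1], [40, 8, 0], [600, 60, 1],
    ])
    outcomes = np.array([1, 1, 0, 1, 0, 1, 0, 1])  # 1 = adverse pattern, 0 = favorable

    model = RandomForestClassifier(random_state=0).fit(past_lawsuits, outcomes)

    # Estimated accuracy of the model itself, reported alongside the prediction.
    accuracy = cross_val_score(model, past_lawsuits, outcomes, cv=4).mean()

    new_deal = np.array([[200, 24, 1]])  # hypothetical profile of the deal at hand
    risk = model.predict_proba(new_deal)[0][1]

    print(f"estimated probability of an adverse pattern: {risk:.2f}")
    print(f"estimated accuracy of the model itself: {accuracy:.2f}")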
The lack of information would have to be resolved in advance between the parties in order to generate the obligation to make it available. The AI system could not rely solely on contract templates available free of charge on the Internet to feed its database. In fact, to be more precise, the most appropriate contract drafts to feed the system are those provided by the parties themselves, mainly by the supplier, who holds the market share that will be expanded and who most likely has more experience in entering into this type of contract than the other party. The issue deserves closer analysis, notably to prevent AI from being used as an opportunistic domination mechanism by the supplier, even though the dependence created between the parties is a possible element of collaborative commercial contracts.276
Still on this cause of incompleteness, it should be pointed out that information asymmetry is a natural part of commercial contracts and forms part of the parties’ risk. When contracting, they are
276 In this sense, cf. COELHO, Fábio Ulhoa. As obrigações empresariais. In: COELHO,
Fábio Ulhoa (Coord.). Tratado de direito comercial: obrigações e contratos empresariais.
V. 5. São Paulo: Saraiva, 2015, pp. 13-20.
aware of this. Thus, any failure by the AI to verify the information should not affect the contract itself, but only the relationship between the party that used the AI and its developer/producer/supplier, a matter that goes beyond the limits of this paper and refers the reader to this compilation’s chapters on tort law.
And what happens when both parties use the same AI system to collect and analyze data in preparing the contract? In this hypothesis, the informational asymmetry, which would normally be strategic in commercial contracts, would recede into the background, giving way to effective collaboration, since the parties would be willing, from the outset, to use a mechanism to reduce the incompleteness of the contract to be signed and possible future problems.
278 Or, as explained by Irene Ng: “The AI system assesses information that is fed into
it, and subsequently makes inferences based on the data it has received by attempt-
ing to make connections and relationships amongst the different data that it receives.
Upon making the relevant inferences, the AI system will then attempt to predict out-
comes.” (NG, Irene. The Art of Contract Drafting in the Age of Artificial Intelligence: A
Comparative Study Based on US, UK and Austrian Law. Stanford-Vienna TTLF Working
Paper, n. 26, 2017, p. 22).
279 NANNI, Giovanni Ettore. A obrigação de renegociar no direito contratual brasile-
iro. Revista do Advogado, São Paulo, v. 116, 2012, p. 96.
a consensual solution based on contractual clauses that provide for
“renegotiation in good faith”, or even in their absence.280
In fact, the social utility of legitimately concluded business and the need to avoid transaction costs that burden the parties have led legal practitioners to seek remedies that favor rebalancing or supplementing the contract and maintaining the contractual relationship, “[...] breaking the dogma of preference for solutions that lead to the termination of the bond between the parties (cancellation/resolution)”.281
However, these remedies have been sought less and less by the parties in judicial litigation. If the Judiciary’s intervention in contractual revision was already cautious and sparing, with changes focused mainly on modifying price-adjustment indexes or extending performance deadlines282, now, with the entry into force of the Economic Freedom Act (Law No. 13.874/2019) and the changes made to the Brazilian Civil Code regarding minimal judicial intervention and the exceptional nature of contractual revision, it is expected that the search for extrajudicial renegotiation solutions will be even more intense.283
Conclusion
286 CASEY, Anthony J.; NIBLETT, Anthony. The Death of Rules and Standards. Indiana
Law Journal, v. 92, n. 4, p. 1401-1447, 2017.
287 VILLELA, João Baptista. Por uma nova teoria dos contratos. Revista de Direito e
Estudos Sociais, Coimbra, ano XX, nºs. 2-3-4, p. 313-338, abr./dez. 1975, p. 336.
Even though the use of AI in the drafting of contracts is
nothing new, it is clear that the solutions that exist today cannot
yet completely replace human activity in the drafting of, and legal
advice around, a complex, long-term contract such as collaborative
commercial contracts.
For the time being, AI is in fact unable to complete contracts,
but the scenarios presented show that there is no telling how far,
one day, the aid from the machine may go in reducing or even
completely eliminating the causes of contractual incompleteness,
making this economic theory just another part of history.
References
Abstract
This paper demonstrates how competition, financial
regulation and innovative technology converge in the Brazilian
lending market. We (i) assess the state of innovation-driven technology
in the financial sector; (ii) study how the problem of information
asymmetry constitutes a relevant barrier to entering the lending
market, since it prevents entrants from acquiring financial data on
their potential customers; (iii) relativize the traditional premise that
more competition in the financial market equals more instability and
systemic risk; (iv) examine the role of financial authorities – focusing
on the Central Bank of Brazil – in promoting competition in the credit
market; and (v) demonstrate how regulatory regimes which mandate
data sharing between financial institutions (such as credit bureaus and
open banking) enhance new lenders’ capacity to compete, especially
by enabling artificial intelligence systems to identify the financial
consumers with the highest profitability and the lowest risk of default.
Keywords
Regulation. Competition. Central Bank of Brazil. Innovative
technology.
Introduction
288 For the specific studies concerning/ratifying said evidence and issues, please
refer to (1) BARBOSA, Klênio; ROCHA, Bruno; SALAZAR, Fernando. Assessing Com-
petition in Banking Industry: a multiproduct approach. Journal of Banking & Finance,
vol 50, p.340-362. Amsterdã: Elsevier, January 2015; (2) JOAQUIM, Gustavo; VAN
DOORNIK, Bernardus. Bank Competition, Cost of Credit and Economic Activity: evidence
from Brazil. Working Paper Series nº 508. Brasília: Central Bank of Brazil, October
2019; (3) ORNELAS; José Renato Haas, SILVA, Marcos Soares; VAN DOORNIK, Bernar-
dus Ferdinandus Nazar. Informational Switching Costs, Bank Competition and the Cost of
Finance. Working Paper Series nº 512. Brasília: Central Bank of Brazil, January 2020;
(4) ALENCAR, Leonardo; ANDRADE, Rodrigo, BARBOSA, Klenio. Bank Competition
and the Limits of Creditor’s Protection Reforms. XII Annual Seminar on Risk, Financial
Stability and Banking. São Paulo: Central Bank of Brazil, 2017; and (5) STANDARD &
POOR GLOBAL RATINGS. Ruptura tecnológica nos bancos de varejo: bancos brasileiros à
altura do desafio (free translation: Technology rupture at retail banks: Brazilian banks
up to the challenge). São Paulo: Standard & Poor’s Financial Services LLC, February
2020.
These factors represent a relevant economic incentive for
entrants to enter the lending market and contest bank profits. In the last
few years, technological innovation and pro-competition regulatory
measures from the Central Bank of Brazil (the “Central Bank”) have
decreased the credit market’s traditionally high barriers to entry, thus
allowing the emergence of fintechs – financial technology startups
focused on building new products for lending customers.
Credit fintechs have been growing at an astounding rate in
Brazil, as evidenced by the yearly increase in originated loans and the
massive volumes of investment received from both local and foreign
venture capitalists289. Naturally, the emerging pattern of innovation in
the financial market brings the role of financial regulation into play.
This paper goes against common sense in the sense that it does
not dwell on how regulation should cope with (or stay out of the
way of) innovation. Rather, we argue why and how a financial authority
– the Central Bank of Brazil – may actively spur innovative entrants to enter
the credit market and contest the incumbent banks. Out of a multitude of
ways through which such stimuli are possible, we focus here on how
regulation can make sure that entrants have access to adequate streams
of data from the lending markets – all in order to feed their artificial
intelligence systems for credit analysis and enable them to
assess customer creditworthiness.
289 PWC BRASIL; ABCD. Nova Fronteira do Crédito no Brasil (free translation: New
Credit Frontier in Brazil). Pesquisa Fintechs de Crédito, 2019.
290 As an example, we may consider how a farmer reaps his harvest, an industrialist
Metaphorically, we may call the union of these flows a symphony
played indefinitely throughout time – each market agent acting as a
musician in a grand orchestra. But the melody renews itself. Instead
of becoming monotonous as time goes on, it is continually influenced
by disturbances to the rhythm, which Schumpeter classifies as
spontaneous and discontinuous, changing the symphony produced by
the past flows.
These disturbances to the market rhythm are precisely the
innovations – ideas put into practice by economic agents by creating
new products and services to offer in the capitalist economy. Such
creation disturbs the melody of the market and adds a new sound
to it, as the entrepreneur responsible for the innovation becomes a
new musician, and the orchestral symphony goes on, always renewing
itself – a true cacophony.
This is where Schumpeter’s central idea comes in – the larger
the number of disturbances to the symphonic rhythm of an economy,
the more efficient its economic development. This opinion is sustained
by academia to this day – market innovation generates growth, wealth,
and consumer welfare.
Generally speaking, Schumpeter proposes five different kinds
of innovation: (i) a new good or service previously unknown (e.g.,
Microsoft/Apple’s personal computer, social media, smartphones);
(ii) a new process or managerial strategy (e.g., Fordism, Toyotism, the
lean startup method291); (iii) discovery of a new target market ripe for
exploration (e.g., Uber for lower-income consumers); (iv) creation/discovery of
a new raw material for the production of new goods (e.g., an active
buys his goods and processes them into a consumer good, a distributor buys this con-
sumer good in large quantity, resells it to retail stores, and, finally, the end consumer
buys this good for himself and his family. This chain of events is a series of circular
flows which repeats itself sequentially throughout time to satisfy the market’s socio-
economic needs.
291 RIES, Eric. The Lean Start up: how today’s entrepreneurs use continuous innova-
tion to create radically successful businesses. New York: Crown Publishing Group,
2011.
ingredient from a plant for medicinal purposes); and (v) a new
arrangement of industries (e.g., the creation or break-up of a monopoly,
like telecommunications and oil)292.
It is remarkable how all five of Schumpeter’s types of innovation are
employed by the tech companies that have come to dominate society
in the transition to the 21st century. We speak of what is often called
the “digital revolution” or the “fourth industrial revolution”, entailing
a massive transition of all aspects of life and the economy to the digital
plane of the internet (and the data economy). From this wave arise countless
entrepreneurs293, their startups294, and even the largest companies of
today’s world – the famous big techs295. Advanced economies, such
as the United States, the European Union, China, and others296, have relied on
constant technological development as an essential element since the 1980s297.
292 SCHUMPETER, Joseph. The Theory of Economic Development: an inquiry into prof-
its, capital, credit, interest and the business cycle. Cambridge: Harvard University
Press, 1934.
293 According to Joseph Schumpeter, an entrepreneur is an economic agent bent on
innovation capable of finding new resources (or new ways of combining old resourc-
es) to build a new product for the market. This process is defined as the destruction of
the established economic order – the creative destruction. In SCHUMPETER, Joseph.
The Theory of Economic Development: an inquiry into profits, capital, credit, interest
and the business cycle. Cambridge: Harvard University Press, 1934.
294 Unlikely as it may be that the reader is not familiar with the term “startup”, since it
is defined in many ways across different works, we establish here that we shall follow
the concept set forth by Eric Ries: a startup is a human institution which, under con-
ditions of extreme uncertainty, generates innovation – in products or processes – and
transforms it in a wieldable competitive advantage in its market. In RIES, Eric. The
Lean Start up: how today’s entrepreneurs use continuous innovation to create radically
successful businesses. New York: Crown Publishing Group, 2011.
295 FROST, Jon; GAMBACORTA, Leonardo, HUANG, Yi, SHIN, Hyun Song; ZBINDEN,
Pablo. BigTech and the changing structure of financial intermediation. Bank for Interna-
tional Settlements. BIS Working Papers nº 779.
296 JENG, Leslie; WELLS, Philippe. The determinants of venture capital funding: evi-
dence across countries. Journal of Corporate Finance, v.6. Amsterdã: Elsevier, 2000,
p. 241-289.
297 As an example, Silicon Valley startups have fostered extensive economic growth,
job creation and all-around wealth for the United States. In LE MERLE, Matthew; LE
MERLE, Louis. Capturing the Expected Angel Returns of Angel Investors in Groups: less in
more – diversify. Fifth Era, 2015, p. 5.
Developing economies are equally called upon to elevate their tech
innovation industries as a means to achieve growth and keep up with
their more advanced peers298.
It is equally remarkable how many economic sectors have begun
to invest massively in technology and innovation since the beginning
of the “digital revolution”. This, however, is not the case in the banking
sector – banks have not “begun” to invest in tech innovation. Rather,
they have been investing in it for centuries299. This generations-long
saga has recently been described as spanning three different
eras by the chairman of the Basel Committee on Banking Supervision,
hosted by the Bank for International Settlements300 (as
cataloged by researchers from the University of Hong Kong Faculty of
Law)301.
1.1. First Era: From Analog To (traditional) Digital (1866-
1967)
The first age of financial technology was contemporary to
the telegraph, railroads, canals and steamships. Up until World War
I, banks were (i) financing the development of these technologies,
either as investors or debtors; and (ii) adopting these technologies to
guarantee the rapid transmission of financial information, transactions
and payments around the world. After World War I, rapid tech
development slowed down until the end of World War II, after which a
298 YE, Huojie; ZHONG, Shuhua. Business Accelerator Network: a powerful generator of
strategic emerging industries. Ontario International Development Agency (“OIDA”).
International Journal of Sustainable Development, vol. 4, nº 6. Ontario: OIDA, 2012,
p. 16.
299 Please note, however, the scope of this paper relates only to tech innovation, and
not financial innovation (which would include new financial instruments, such as de-
rivatives, collateralized debt instruments, and the like).
300 HERNÁNDEZ, Pablo Cos. Financial technology: the 150-year revolution. Basel Com-
mittee on Banking Supervision (18.11.2019).
301 ARNER, Douglas; BARBERIS, Janos; BUCKLEY, Ross. The Evolution of Fintech: a
new post-crisis paradigm? Research Paper Nº 2015/047. Sydney: University of New
South Wales, 2017.
relevant innovation wave was launched during the 50s and 60s – mostly
credit cards by Bank of America, Diners Club and American Express,
followed by the constitution of the Interbank Card Association, which
later became Mastercard302.
302 Id, p. 8.
303 REZENDE, Luiz Paulo Fontes. Inovação Tecnológica e a Funcionalidade do Sistema Fi-
nanceiro: uma análise de balanço patrimonial dos bancos no Brasil. Center of Regional
Development and Planning of Economic Sciences University – UFMG. PhD thesis in
Economics. Belo Horizonte: UFMG/Cedeplar, 2012, p. 51.
304 ZIMMERMAN, Eilene. The Evolution of Fintech. New York Times (April 6th, 2016).
Available in: <https://www.nytimes.com/2016/04/07/business/dealbook/the-evolu-
tion-of-fintech.html>. Access on February 2nd, 2020.
1.3. Third Era: Modernization/democratization Of Digital
Services (2008-present)
305 ARNER, Douglas; BARBERIS, Janos; BUCKLEY, Ross. The Evolution of Fintech: a
new post-crisis paradigm? Research Paper Nº 2015/047. Sydney: University of New
South Wales, 2017, p. 15.
306 Even if today it is used as quite a flexible term, the word “fintech” was originally
coined by Citibank in the 90s to name its open innovation project called Financial Ser-
vices Technology Consortium. Nowadays, the word is used to describe the most varied of
activities. “[Several] companies have used the word ‘Fintech’ in their names. Some capital-
ized the ‘t’, some didn’t. They included a trader of distressed debt, a software firm serving the
oil and gas and manufacturing markets, and a South African electronics company. Whatev-
er fintech means now or in the future, I doubt another company will be able to claim the word
as its own again”. In HOCHSTEIN, Marc. Fintech (the word that is) evolves. American
Banker (October 5th, 2015).
307 FINANCIAL STABILITY BOARD. Fintech and Market structure in financial services:
market developments and potential financial stability implications (published in Feb-
consumers, relativizing the importance of branches for financial
distribution and disintermediating financial relations308; and (iii)
fintechs adopt agile methodologies to create/test/implement new
technology and focus on improving the consumer experience309.
All of the above is true and applicable to most fintechs in the
known world, notwithstanding how different their businesses are and
how they are all distributed throughout each segment across financial
markets. To name a few, there are hundreds of fintechs in payments,
crowdfunding, financial planning, wealth management, trading,
insurance, data analytics, blockchain, cybersecurity, and – of course
– lending. In this paper, we set aside all other segments
and focus exclusively on fintechs that employ artificial intelligence to
offer loan products in the Brazilian credit market.
312 Vote EMI nº 00040/2018-BACEN-MF, dated October 4th, 2018. The vote was publi-
cized by the Brazilian Central Bank in Request nº 00077000032201930 under the Public
Access to Information Law (the “Lei de Acesso à Informação”), on January 4th, 2019.
313 CARMONA, Alberto; LOMBARDO, Agustín; PASTOR, Rafael; QUIRÓS, Carlota;
GARCÍA, Juan; MUÑOZ, David; MARTÍN, Luis. Competition issues in the area of financial
technology (FinTech). European Parliament. Policy Department for Economic, Scien-
tific and Quality of Life Policies. Directorate-General for Internal Policies, 2018, p. 27.
314 GOLDMAN SACHS. Future of Finance Fintech’s Brazil Moment. Goldman Sachs Glob-
al Investment Research, 2017, p. 24.
315 FRABASILE, Daniela. Não é questão de ser fintech ou banco: todos terão de ser
digitais. Época Negócios (December 2nd, 2019).
around R$ 817 billion in 2018)316. As such, several fintechs have risen
to provide credit to individuals with difficulties in acquiring loans
from traditional banks317 (for example, Banco Maré, a fintech from Rio
de Janeiro that offers banking products to underserved residents of
the local community).
However, perhaps the most economically relevant strategy
of credit fintechs has been to reach out to Brazilian small
and medium enterprises (SMEs). Since they often lack
sophisticated management models, are very different from each
other, have unknown growth prospects and are less capable of
providing adequate collateral, loans to SMEs tend to carry unfavorable
interest rates, maturities and volumes in the traditional banking system.
In Brazil, as per the International Finance Corporation (IFC), more
than 50% of Brazilian small businesses have no or inadequate access
to credit, and only 10% claim to have full access318-319. Such deficiency
contributes to the lower productivity of these SMEs in comparison to
their peers in developed markets. As such, according to the Central
Bank’s executive board, one of the key goals of its agenda to
stimulate fintechs is precisely to facilitate Brazilian SMEs’ access to
credit320.
This is where these fintechs’ access to data and artificial
intelligence come in. Instead of relying on traditional credit analysis
to ascertain the risk of each borrower (which would require a robust
316 PWC BRASIL; ABCD. Nova Fronteira do Crédito no Brasil (free translation: New
Credit Frontier in Brazil). Pesquisa Fintechs de Crédito, 2019, p. 8.
317 Id, p. 6-7.
318 IFC Enterprise Finance Gap Database. International Finance Corporation, 2018.
319 Overall, in Latin American countries, only 12% of credit goes to small compa-
nies (for reference, in OECD member countries, this percentage goes up to 25%). In
HODER, Frank; WAGNER, Michael; SGUERRA, Juliana; BERTOL, Gabriela. Revolução
Fintech: como as inovações digitais estão impulsionando o financiamento às MPME na
América Latina e Caribe. Oliver Wyman, 2016.
320 Vote nº 97/2018-BCB, dated April 23rd, 2018. The vote was publicized by the Brazil-
ian Central Bank in Request nº 18600000942201821 under the Public Access to Infor-
mation Law (the “Lei de Acesso à Informação”), on May 3rd, 2019.
operational structure and would probably still screen out small enterprises,
much like traditional banks), credit fintechs build autonomous
algorithms to analyze each potential client’s financial background and
decide whether or not to grant them a loan and, if so, at which
interest rate, maximum volume and maturity dates. To function
appropriately, these software applications require vast amounts of
data on the clients to make the right decisions – and financial data in
Brazil has historically been lacking.
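A minimal sketch of the kind of automated credit decision just described, assuming an invented scoring rule (the features, coefficients, cut-off and pricing formula below are hypothetical and are not those of any actual fintech):

import math
from dataclasses import dataclass

@dataclass
class Applicant:
    monthly_revenue: float   # e.g. an SME's average monthly cash inflow, in R$
    months_of_history: int   # length of available financial history
    past_defaults: int

def probability_of_default(a: Applicant) -> float:
    # toy logistic score combining a few features (coefficients are invented)
    score = 1.0 - 0.00003 * a.monthly_revenue - 0.03 * a.months_of_history \
            + 1.2 * a.past_defaults
    return 1 / (1 + math.exp(-score))

def loan_offer(a: Applicant):
    pd = probability_of_default(a)
    if pd > 0.25:                       # hypothetical approval cut-off
        return None                     # decline the loan
    return {
        "annual_rate": 0.08 + 0.5 * pd,            # risk-based pricing
        "max_volume": 3 * a.monthly_revenue * (1 - pd),
        "term_months": 24,
    }

print(loan_offer(Applicant(monthly_revenue=40_000, months_of_history=36, past_defaults=0)))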
321 Credit Scoring Knowledge Guide. International Finance Corporation (IFC), 2012, p.
4.
322 Conceptually, as Akerlof analyzed, there are many markets in which the buyer evaluates
statistics to decide whether to buy a product. These statistics include the probability
that the product is of lower quality than the seller claims.
In other words, if the market for a certain product has many “lying” sellers, and it is
not possible to fully verify product quality before purchase, buyers may conclude
that the risk of buying a product with hidden defects in that market is high and
decide not to buy. To illustrate the concept, Akerlof describes an example that has
become famous in the literature: the used car market. In this market, there
are used cars that work well and used cars that work poorly, the latter called
“lemons”. Anyone who buys a used car does so without knowing whether it is good or,
secretly, a lemon. It then becomes a matter of probability: since the consumer cannot
verify at the time of purchase whether the car is good, he will decide to buy only if
the risk of the product being a lemon is low. The buyer is forced to make this hidden-defect
risk assessment because the seller has more information about
the product than the buyer, including whether or not it is a lemon. It was
Information asymmetry is a very real issue in the credit
market, since borrowers naturally know more about their ability
to perform than lenders. In turn, credit institutions need financial
information about borrowers to determine their risk of default, i.e.,
the risk of the borrower not repaying the loan on its maturity date(s). In
accordance with each borrower’s risk of default, the lender determines
at which rate, volume and maturity conditions the loan may be given –
the lower the risk, the better the conditions that the credit institution
generally offers.
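A stylized illustration of this risk-based pricing logic (a simplification of our own, not taken from the sources cited here): writing PD for the probability of default, LGD for the loss given default and r_f for the lender's funding cost, a loan of one unit breaks even when

\[
(1 + i)\,(1 - PD \cdot LGD) = 1 + r_f
\quad\Longrightarrow\quad
i = \frac{1 + r_f}{1 - PD \cdot LGD} - 1 .
\]

With, say, r_f = 10%, PD = 5% and LGD = 60%, the break-even rate is i ≈ 1.10/0.97 − 1 ≈ 13.4%, whereas a borrower with PD = 1% could be served at roughly 1.10/0.994 − 1 ≈ 10.7% – which is why better information about a borrower's risk translates directly into better conditions.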
However, entrants to the credit market have often suffered
from information asymmetry high enough to be considered a relevant
barrier to entry, since consumers’ financial information and credit
history were generally held only by the banks with whom each customer
had a previous relationship. In other words, traditionally, credit
institutions acquire information on consumers through relationships,
especially the client’s background, and, as the relationship goes on,
the history of banking operations between them (past loans, deposits,
account movements, etc.). Such data acquired through relationships is
called soft information.
In a scenario where borrowers’ information is held only
by the bank that sells them financial products, (i) only this bank
has the necessary inputs to determine the customer’s risk of default
and, from that point, to set loans’ interest rates, total
volume and maturity dates, and (ii) on a grand scale, each bank holds an
ex post monopoly on its customers’ financial data – hence the barrier to
entry against entrants who do not have the information necessary to
form an adequate portfolio of lending operations.
this data gap between two economic agents that George Akerlof called information
asymmetry. In AKERLOF, George A. The Market for “Lemons”: quality uncertainty and
the market mechanism. Quarterly Journal of Economics, Vol. 84, Nº 3, Cambridge:
August 1970, pp. 488-500.
The practical result of the ex post information monopoly is
that “good” borrowers are likely to receive more advantageous credit
conditions from their home banks (risk-based pricing), while “bad”
borrowers will try to look for alternatives among the entrants323. This,
in short, is the phenomenon of adverse selection, also explained by
Akerlof as a consequence of informational asymmetry in his work on
the “lemon” market.
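To make the adverse selection mechanism concrete, a stylized numerical illustration (figures of our own choosing, not Akerlof's): suppose a good used car is worth 10,000, a lemon is worth 4,000, and 40% of the cars offered are lemons. A buyer who cannot tell them apart will pay at most the expected value,

\[
0.6 \times 10{,}000 + 0.4 \times 4{,}000 = 7{,}600,
\]

which is below what owners of good cars will accept; good cars leave the market, the share of lemons rises, and the price buyers are willing to pay falls further. In credit, the analogous result is that an uninformed entrant tends to attract the “lemons” – the higher-risk borrowers.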
In short, we concur that the barrier to entry represented
by information asymmetry and (consequently) adverse selection is
so imposing because an entrant without data to calculate the risk of
default posed by its potential clients (i) will not have enough data
to distinguish “good” borrowers from “bad” borrowers as readily as
incumbent banks do; (ii) may be more sought after by “bad” borrowers
looking for loans from entrants because they did not obtain good
loan conditions from incumbent banks (which do have the data on
such bad borrowers); and (iii) in an adverse selection scenario, will face a
high number of defaults that increases its operational default costs
(the so-called “custos de inadimplência”, which
are partially responsible for the high interest rates in the Brazilian
lending market)324.
As constantly advocated by regulatory agencies and
economists all over the world, and as we ultimately defend in Chapter
4 of this paper, the best tool to reduce the effects of information asymmetry
323 See (1) the vote by Cade commissioner Cristiane Alkmin Junqueira Schmidt in the
Concentration Act nº 08700.02792/2016-47, which approved with restrictions the joint
venture of Itaú Unibanco, Bradesco, Santander, Banco do Brasil and CEF to establish
a credit bureau (called “Quod”), on November 14th, 2016; (2) the vote by Cade commis-
sioner César Mattos in Concentration Act nº 08012.011736/2008-41, which approved
the acquisition of Banco Nossa Caixa by Banco do Brasil on August 4th, 2010; and (3)
VIVES, Xavier. Competition and Stability in Banking: the role of Regulation and compe-
tition policy. Princeton: Princeton University Press, 2016, p. 75-76.
324 CENTRAL BANK OF BRAZIL. Report on Banking Economy 2018. Published in May
2019, p. 80.
on credit markets would be to require credit institutions to share the
information they hold on their consumers325.
Though not exactly a new recommendation, the exchange
of financial information on customers is currently receiving relevant
technology boosts and regulatory incentives all over the world. From a
technological point of view, the ongoing “digitization” of credit markets
generates increasingly more standardized/marketable data, dubbed
hard information (as opposed to soft information, which is information
gained merely through relationship banking). From a regulatory point
of view, financial authorities may determine that incumbents must
share the financial data they have on their clients with competitors
through specific/secure channels, thus allowing credit institutions to
proactively compete for the best customers.
We believe that data sharing among lenders reinforces the
competitive process and generates consumer welfare, since it can foster
(i) erosion of the ex post monopoly of incumbents over their clients’
data, reducing the barrier to entry imposed by information asymmetry;
(ii) competition between the incumbent banks themselves, instigating
them to fight over the best clients they garnered over the decades; and
(iii) a disciplinary effect on credit consumers, allowing for reductions
in the default costs currently embedded in interest rates326. It has
been demonstrated that the competition increases resulting from the sharing
325 Entrants who have access to information sources on borrowers are able to compete
for the best clients of the incumbent banks, in addition to mitigating their portfolio
risks and transaction costs and expanding the credit offer to low-risk borrowers
(including individuals and SMEs) not fully served by the traditional banking
system. In Credit Scoring Knowledge Guide. International Finance Corporation (IFC),
2012, p. 5.
326 See (1) VIVES, Xavier. Competition and Stability in Banking: the role of Regulation
and competition policy. Princeton: Princeton University Press, 2016, p. 19; (2) OR-
NELAS; José Renato Haas, SILVA, Marcos Soares; VAN DOORNIK, Bernardus. Informa-
tional Switching Costs, Bank Competition and the Cost of Finance. Working Paper Series
nº 512. Brasília: Central Bank of Brazil, January 2020, p. 45; and (3) GOLDMAN SACHS.
Future of Finance Fintech’s Brazil Moment. Goldman Sachs Global Investment Research,
2017, p. 28.
of data induce reductions in countries’ interest rates327, not to mention
that, in the wake of the so-called “digital revolution”, data itself has
become one of the world’s most valuable resources328.
In this paper, we defend that the Central Bank of Brazil is the
best-suited authority to compel incumbent institutions to share their
data on financial consumers, thus stimulating competitive pressure
from fintech and other innovative entrants in the Brazilian lending
market.
327 GOLDMAN SACHS. Future of Finance Fintech’s Brazil Moment. Goldman Sachs Glob-
al Investment Research, 2017, p. 28.
328 THE ECONOMIST. The world’s most valuable resource is no longer oil, but data.
The Economist (May 6th, 2017).
329 YAZBEK, Otávio. Regulação do Mercado Financeiro e de Capitais. 1st ed. Rio de Janei-
strengthened in the wake of the financial crisis of 2008330. However,
this potentially flawed/superficial premise has been questioned over
the years by many an author, according to whom a concentrated and
low-competition banking environment is just as toxic from a financial
soundness point of view.
To take sides in this debate, one must first understand why
banking stability is (correctly) such a delicate matter and so worthy
of concern. The financial intermediation activity – i.e., lending
and deposits – generates two relevant impacts on macroeconomics
and monetary policy. First, the “banking multiplier” phenomenon –
banking activity produces an effect equivalent to creating more currency
in the economy, since the banking institution receives a deposit and
generally lends it away (the bank ceases to possess said deposit) while,
at the same time, the depositor still holds a short-term liquid claim
to the deposited value against the bank331. In other words, such an amount
exists in duplicity – both as a liability to the depositor and as an asset
against the borrower. One can see how that might generate a problem
if the depositor decides to withdraw his deposit (as we shall see below).
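A textbook simplification of the banking multiplier (our own illustration, not drawn from the sources cited here): with a reserve ratio r, an initial deposit D_0 can support total deposits of

\[
D_0 + (1-r)\,D_0 + (1-r)^2 D_0 + \dots = \frac{D_0}{r},
\]

so that, for r = 20%, a deposit of R$ 1,000 can ultimately sustain up to R$ 5,000 in deposits (and R$ 4,000 in new loans) across the banking system – each re-deposited loan being simultaneously someone's liquid asset and a bank's outstanding credit.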
The second impact on monetary policy, derived from the first,
is the banks’ capacity for “maturity transformation”. Deposits and loans do
not have matched maturity dates. Deposits generally represent short-
term debt or on-demand debt (like checking accounts), whereas loans
generally represent comparatively long-term debt332. The resulting
maturity mismatch, as it is called, renders depositary institutions
extremely vulnerable to liquidity shocks. As such, banks are fragile
by default and generate severe social costs if they ultimately fail333.
334 VIVES, Xavier. Competition and Stability in Banking: the role of Regulation and
competition policy. Princeton: Princeton University Press, 2016, p. 38.
335 Id, p. 39.
336 Nowadays, bank runs are no longer materialized by long lines of anxious depositors
in front of a bank branch, but rather by the non-renewal of interbank deposits
or the withdrawal of large amounts of funds by institutional investors. In VIVES, Xavier.
Competition and Stability in Banking: the role of Regulation and competition policy, op.
cit., p. 106.
337 CARLETTI, Elena; SMOLENSKA, Agniezka. 10 years on from the Financial Crisis:
cooperation between competition agencies and regulators in the financial sector.
OCDE, Directorate for Financial and Enterprise Affairs, Competition Committee. Par-
is, OECD, 2017, p. 9.
one of these institutions suffers an intervention or liquidation, the
effects of this failure go well beyond the institution’s own private
sphere and reverberate throughout the financial web all the way to the
other end – especially to creditor banks (a contamination effect). Due
to the recent advancement of financial technology, such a problem
of interconnectedness has been exponentially aggravated, since
banks from different regions and countries can now access each other with
progressively fewer operational and informational hurdles. All in all,
technology has elevated the risk of contagion by bank failures, since
now there are more transmission lines between financial institutions
than ever338.
Considering such propensity to failure, it is not surprising
that financial systems around the world have been the stage of
symptomatic and devastating crises. The most recent of these was the
crisis of 2008, which originated in the United States from (i) the
negligent issuance of derivatives with risky underlying mortgages – the
so-called “subprimes”; (ii) rating agencies that classified these assets
as low-risk and high-grade, notwithstanding the elevated risks; (iii)
progressive deregulation of the financial system as a tool to stimulate
economic growth339; and (iv) political incentives for both Democrats and
Republicans to expand real estate credit, without due caution, as a
tool to win over voters340. After being born in the United States, the
crisis of 2008 swept across most of the world, breaking some banks
(the emblematic case of Lehman Brothers) and driving others to be bailed
out with taxpayers’ money.
After the destruction suffered by the global economy in the
crisis’ wake, the role of financial regulation has since returned to the
spotlight, reinvigorating prudential authorities as responsible for
338 See (1) YAZBEK, Otávio. Regulação do Mercado Financeiro e de Capitais, op. cit., p.
175; and (2) VIVES, Xavier. Competition and Stability in Banking: the role of Regulation
and competition policy. Princeton: Princeton University Press, 2016, p. 15.
339 GERDING, Erik F. Bank Regulation and Securitization: how the Law improved trans-
mission lines between real estate and banking crises. Georgia Law Review, vol. 50:1.
Athens: University of Georgia, 2015.
340 VIVES, Xavier. Competition and Stability in Banking: the role of Regulation and
competition policy. Princeton: Princeton University Press, 2016, p. 17.
adjusting market failures in the highly interconnected, technology-driven
financial systems – especially addressing the social cost
generated by failing banks (much like the social costs derived from
environmental damage, the nuclear industry and public health). In
other words, financial authorities have become (re)empowered to
(i) internalize social costs in the financial institutions themselves,
minimizing the risk of spillovers that damage depositors, creditors
and the local economy as a whole341; (ii) align executives’
incentives to drive away excessive risk-taking; (iii) eliminate
the negative feedback loop between banking balance sheets and
economies (i.e., preventing banking crises from becoming economic
crises)342; and, specifically addressing the subprime problems of
2008, (iv) eliminate the transmission lines between bank markets and
real estate markets343.
341 YAZBEK, Otávio. Regulação do Mercado Financeiro e de Capitais, op. cit., p. 176.
342 CARLETTI, Elena; SMOLENSKA, Agniezka. 10 years on from the Financial Crisis:
cooperation between competition agencies and regulators in the financial sector.
OCDE, Directorate for Financial and Enterprise Affairs, Competition Committee. Par-
is, OECD, 2017, p. 7.
343 GERDING, Erik F. Bank Regulation and Securitization: how the Law improved trans-
mission lines between real estate and banking crises. Georgia Law Review, vol. 50:1.
Athens: University of Georgia, 2015.
344 YAZBEK, Otávio. Regulação do Mercado Financeiro e de Capitais. 1st ed. Rio de Janei-
Notwithstanding such opinions, the reality is that high
degrees of concentration and market power in the banking system are
also extremely prone to systemic risk345. Big banks with market power
may (i) become “too big to fail”, generating incentives for excessive risk-taking
due to the explicit and implicit guarantees from the government
(that they will be bailed out in case of failure); (ii) build complex
structures that hinder monitoring and regulation; and (iii) engage in
market-based activities (those not reliant on loans and deposits, such
as trading) on a riskier basis346. As such, the benefits brought about
by more competition in the financial sector should not be sacrificed
in the name of prudential stability itself, since the absence of competition is
just as capable of inviting systemic disaster. On the subject, Professor
Rory Van Loo347:
348 A few examples are in order. First, the Central Bank of Brazil’s own new, pro-com-
petitive stance, which is studied in Chapter 3.2. Second, in the United Kingdom, the
Financial Services and Markets Act, 2000 determines that the Prudential Regulation Au-
thority (PRA) must promote financial competition while performing its main regulato-
ry functions. In practice, this has been translated into policies from the English pru-
dential authority to reduce barriers to entry and design of proportional regimes (i.e.,
a program called “New Bank Startup Unit” designed to ease entrants into the financial
market). Third, reflecting the German central bank’s stance on the subject around
2019, a director from Deutsche Bundesbank publicly defended the benefits of competi-
tion and cooperation between incumbent and incoming banks in Germany, while also
stressing the need to ensure fair competition in the sector. See (1) BASEL COMMITTEE
ON BANKING SUPERVISION. Range of practice in the regulation and supervision of in-
stitutions relevant to financial inclusion. Bank for International Settlements (January
2015), p. 25; (2) BALZ, Burkhard. Fintech and bigtech firms and central banks – conflicting
interests or a common mission? German Embassy in Singapore, November 11th, 2019, p.
1; and (3) England’s Chapter 2, section 2.H, Financial Services and Markets Act, 2000.
349 BASEL COMMITTEE ON BANKING SUPERVISION. Range of practice in the regula-
tion and supervision of institutions relevant to financial inclusion. Bank for International
Settlements (January 2015), p. 25.
3.2. Why The Central Bank Of Brazil Should (continue To)
Promote Competition
350 See (1) JOAQUIM, Gustavo; VAN DOORNIK, Bernardus. Bank Competition, Cost
of Credit and Economic Activity: evidence from Brazil. Working Paper Series nº 508.
Brasília: Central Bank of Brazil, October 2019; and (2) MIAN, Atif; SUFI, Amir; VERN-
ER, Emil. How do Credit Supply Shocks Affect the Real Economy? Evidence from the
United States in the 1980s. National Bureau for Economic Research. Working Paper Nº
23802. Cambridge: NBER, 2017.
– all due to increased competition and innovation351. As a direct result
and example of benefits to consumers, Itaú Unibanco and Santander
have reduced to practically zero the fees charged from businesses for
payments made by customers through credit and debit cards, showing how
the competitive pressure from fintechs and their technologies was
able to generate consumer welfare (even to the point of giving rise to
claims of anti-competitive conduct before Cade, which are quite outside the
scope of this paper)352.
Second, fintechs in the foreign exchange/remittance
(cross-border) sector have a social value essentially linked to the
migration of populations around the globe. The World Bank has
verified that, when a community receives larger volumes of
remittances from its migrants abroad, significant socioeconomic
consequences can follow, such as increases in levels of health,
education, technological advancement, entrepreneurship, financial
inclusion and disaster recovery, as well as reductions in child labor353.
In 2011, with this in mind, the leading G20 countries agreed to
reduce transaction costs for international remittances, tasking the
Global Partnership for Financial Inclusion (GPFI, a G20 group) a few years
later with monitoring countries’ progress in this regard354. In 2016, these
objectives were harmonized with the United Nations Agenda 2030,
which aims to reduce the average cost of international remittances to
3%, with a total cap of 5%355. In Brazil, the Central Bank has been making
efforts to carry out this agenda (i.e., Letter nº 3,914, dated 2018, which
356 As per answered by Central Bank to our request nº 18600000516201979 under the
Public Access to Information Law (the “Lei de Acesso à Informação”), on April 18th,
2019.
357 See the votes that approved the mergers of Bradesco/HSBC and Itaú/Citi by Cen-
tral Bank – both drafted by executive member Sidnei Corrêa Marques, respectively (i)
Vote nº 263/2015-BCB, December 30th, 2015; and (ii) Vote nº 230/2017-BCB, October 26th,
2017.
competition authority – especially since this is not the case in other
countries (e.g., the United Kingdom).
However, we do defend that the Central Bank is better
institutionally positioned than Cade to foster competition in the
lending market. We are not referring to the historical conflict of
competence between both authorities (which has already been solved
through a Memorandum of Understanding signed in 2018)358, but to
an issue of institutional design (of “desenho institucional”). While the
Central Bank has the power to proactively regulate and adjust the
behavior of each economic agent operating in the financial activities
segment, Cade’s role is actually more reactive: (i) to approve or
reject mergers that meet the minimum criteria set out in Law nº
12,529; (ii) to impose certain obligations in merger agreements that
can temporarily improve the market; (iii) to repress anticompetitive
conduct; and (iv) to promote competition advocacy in other areas of
public policy.
When we evaluate the conjunction of these roles, it may be
argued that Cade is not as well institutionally positioned as the Central
Bank to promote competition in the Brazilian credit market. On the
other hand, the Central Bank may contribute to consumer welfare
in areas not fully protected by Cade, given its more reactive stance,
nor by Consumer Law itself, given its incapacity to tackle monopoly/
358 In 2001, a conflict arose between the Central Bank and Cade over which of them
should analyze and approve mergers in the financial system. It was around this time that
Bradesco acquired BCN, and both were fined by Cade for not seeking its approval for the
merger. This fine was suspended by the Superior Court of Justice on the basis that only
the Central Bank was in a position to approve or reject bank mergers, as supported
by an opinion from the Federal Attorney General’s Office. Cade appealed to the Brazilian
Supreme Court (Extraordinary Appeal nº 664,189). The lawsuit waged on until the
dispute was put to rest by a Memorandum of Understanding signed by both
entities on February 28th, 2018 to settle this historical feud, under which
the parties to any future merger in the financial system shall be required to seek prior
approval from both the Central Bank and Cade. The Memorandum of Understanding is
available at: <https://www.bcb.
gov.br/conteudo/home-ptbr/TextosApresentacoes/memorando_cade_bc_28022018.
pdf>.
oligopoly price distortions (which may only be addressed/countered
through competition policy)359.
359 FORGIONI, Paula. Fundamentos do Antitruste. 10ª ed. São Paulo: Revista dos Tribu-
nais, 2018, p. 257.
360 The guiding principle of segmentation is proportionality – applying more flexi-
ble rules for entry and allocation of capital to smaller entrants who do not present
systemic risk. All over the world, financial authorities are employing regulation to
ease entrants into the financial markets, many of which might serve as benchmarks
for the Central Bank in its efforts (some already do). Switzerland, Japan and Sweden
(along with Brazil, as per below) set up categories to fit financial institutions with
ranges based on size (as an example: a bank with revenue between R$ 500 million and R$
700 million falls into category A, while a bank with revenue between R$ 700 million
and R$ 900 million falls into category B). The larger the size, the more minimum re-
quirements the institution needs to fulfill in order to establish itself and remain in
operation. Creating a simplified version of this rule, the United States, the European
Union, and Hong Kong began to relax the rules for allocating capital to institutions
that did not reach a certain size. In 2017, based on the proportionality principle set
forth above, the Central Bank spurred the National Monetary Council to issue Resolution
4,553 to establish five different categories in which local financial conglomerates are
distributed – segments S1, S2, S3, S4 and S5. The more robust the conglomerate is in
terms of equity, systemic relevance and international performance, the closer it is to
the S1 segment (in which the five largest Brazilian banks are). In line with the spirit of
proportionality, the very preamble to Resolution Nº 4,553 states that segmentation is
intended for the “proportional application of prudential regulation”. Even Febraban (an
association commanded by the five largest banks) recognized the potential to gen-
erate more competition from this standard – even though it opposed the new rule,
and regulatory licenses for credit fintechs. All of these measures were
effectively implemented by the Central Bank (and National Monetary
Council) between 2017 and 2018.
This last item was one of the most praised feats from the
Central Bank of Brazil concerning competition. In most countries,
credit lenders must seek a regulatory license before they can start
the actual lending. Both in Brazil and in many countries abroad,
regulators’ legal frameworks are not (or were not) capable of
accommodating fintechs and their digital credit business model, which
does not rely on robust physical/economic structures – a mismatch that
repelled startups from entering financial markets until well
into the “digital revolution”361. If credit fintechs tried to enter
the Brazilian credit market without complying with the minimum
requirements set by the Central Bank, they would be exposed to
several risks362.
364 BAKER, Todd. Simpler is better for fintechs breaking into banking. American Bank-
er (February 20th, 2020).
365 In the meantime, according to fintech class associations (ABCD, ABIPAG and AB-
Fintechs) in a statement to the Central Bank, most credit fintechs still operate in the old
banking correspondent structure. In ABCD; ABIPAG; ABFINTECHS. Comentários ao
Edital de Consulta Pública nº 73/2019 – proposal of open banking implementation (Jan-
uary 31st, 2020), p. 11.
366 CENTRAL BANK OF BRAZIL. Public hearing nº 48/2019 of the Economic Matters
credit fintechs puts Brazil ahead of many other countries that are
still discussing the idea367.
By 2019, the Central Bank updated its objectives in “Agenda
BC+”, while also renaming it to “Agenda BC#”, presumably to reflect the
affinity with technology and innovation (especially data and artificial
intelligence), and overall maintaining an apparently pro-competition
spirit. Additionally, over the last 10 years, it has also actively intervened
in the credit markets on several occasions with the express purpose
of correcting market failures that free competition alone was not able
to correct (e.g., limited rationality in the overdraft market, exclusivity
contracts, elevated switching costs and lock-in effects).
As a result, nowadays, the Central Bank’s proactive stance
towards fostering competition is openly recognized by officers from
Commission of the Federal Senate. Institutional presentation from the Central Bank
of Brazil, p. 43.
367 In the United States, the regulatory measures to give out licenses to fintechs are
quite late and tangled. In 2018, the federal financial agency Office of the Comptroller
of the Currency (“OCC”) declared that it would start accepting applications for fintechs to
become financial institutions without having deposit insurance granted to them by
the Federal Deposit Insurance Corp (“FDIC”). However, the OCC was subject to a law-
suit by a state financial agency, the New York State Department of Financial Services,
on the grounds that no license could be granted to financial institutions that had no
deposit insurance, as per the National Banking Act, 1863 – and the dispute goes on to
this day. In parallel to this regulatory uncertainty suffered by American fintechs, only
two are managing to claw their way out of the gray zone: Varo Money, which has actu-
ally managed to obtain deposit insurance from the FDIC; and LendingClub, which
has acquired a bank for itself (and, thus, a license to call its own). See (1) COWLEY,
Stacy. Online Lenders and Payment Companies get a way to act more like Banks. New
York Times (July 31st, 2018); (2) CARLETTI, Elena; SMOLENSKA, Agniezka. 10 years on
from the Financial Crisis: cooperation between competition agencies and regulators
in the financial sector, op. cit., p. 19; (3) PEDERSEN, Brandon. OCC files appeal in fin-
tech charter case. American Banker (December 19th, 2019); (4) MCCAFFREY, Orla. Varo
Moves Closer to Becoming a Bank: FDIC approves application to provide insured de-
posits. Wall Street Journal (February 10th, 2020); and (5) BAKER, Todd. Simpler is better
for fintechs breaking into banking. American Banker (February 20th, 2020).
Cade368, authorities from abroad369, specialized press370, and many
a public statement by the Central Bank’s executive board. In 2019,
its Officer of Organization of the Financial System and Resolution
publicly pointed out four practical measures necessary to increase
banking competition in Brazil371: (i) legal certainty in the recovery of
collateral and less information asymmetry (i.e., the positive credit registry,
or “cadastro positivo”); (ii) greater vigilance against anticompetitive conduct;
(iii) encouraging the entry of new competitors; and (iv) “ensuring that the market takes
advantage of the huge pro-competitive opportunities that technological
advancement [brings]”.
368 “In fact, considering other banks, it is worth saying that (...) the regulator has been
acting more forcefully, namely, the Central Bank of Brazil (BCB), aiming to resolve the
various market failures” (free translation). In Vote by Cade commissioner Cristiane
Alkmin Junqueira Schmidt in the Concentration Act nº 08700.004431/2017-16, which
voted to forbid the purchase, by Itaú Unibanco, of a minor shareholder position in XP
Investimentos, on March 14th, 2018.
369 “The pro-active approach of stimulating new laws and cooperation with other countries
by the Brazilian Central Bank could potentially help the fintechs to grow and change the
Brazilian market as a result of more competition”. In SAGOENIE, Yashini, SMITS; Petra,
BAKKER, Ernst-Jan. Fintech in Brazil. Ministry of Economic Affairs and Climate Policy
of the Netherlands. Hague: Netherlands Enterprise Agency, February 2019, p. 5.
370 GRAY, Kevin, Brazil’s central bank policies encourage fintech startups. LatinFinan-
ce (March 28, 2019).
371 João Manoel Pinho de Mello defende maior concorrência no setor bancário. Cor-
reio Braziliense (February 26th, 2019).
initiatives to stimulate the sharing of data between financial
institutions.
First, the empowerment of credit scoring bureaus, which is still
how most countries operationalize the sharing of financial hard
information. We refer to agents specialized in storing and sharing
data on financial transactions between consumers and credit market
institutions (and sometimes of other markets, such as public utilities).
Each country has its own microsystem of credit information, hoarded
in databases and sustained by a robust institutional design meant to
maximize efficiency (from both technological and legal point of view).
The very first credit bureaus date back to 19th century England, but
true worldwide consolidation of credit scoring systems would only become
technologically possible in the second half of the 20th century372. In
Brazil, the positive credit score bureaus (the “cadastro positivo”) were
recently reformed by Complementary Law nº 166 and Resolution nº
4,737.
Second, the regulatory design of an open banking initiative
– the sharing and leveraging of customer-permissioned data by
banks with third-party developers and firms to build applications
and services, such as those that provide real-time payments, greater
financial transparency options for account holders, and marketing
and cross-selling opportunities373. According to recent studies by
Ornelas, Silva, and Van Doornik, the use of open banking to stimulate
data sharing may actively improve the local lending market (Working
Paper Series of the Central Bank of Brazil)374:
372 Credit Scoring Knowledge Guide. International Finance Corporation (IFC), 2012, p.
5.
373 BASEL COMMITTEE ON BANKING SUPERVISION. Report on open banking and ap-
plication programming interfaces. Bank for International Settlements, November 2019,
p. 19.
374 ORNELAS; José Renato Haas, SILVA, Marcos Soares; VAN DOORNIK, Bernardus
Ferdinandus Nazar. Informational Switching Costs, Bank Competition and the Cost of Fi-
nance. Working Paper Series nº 512. Brasília: Central Bank of Brazil, January 2020, p.
45.
[Policy] responses related to foster information sharing
may help to decrease switching costs and alleviate the
holdup problem. Open banking initiatives can make
information held by incumbent banks to flow towards
other financial institutions so that firms can get better
interest rates from outside banks, thus enhancing
competition. Another policy initiative is to reduce entry
barriers to new competitors, like the credit fintechs. These
institutions usually have a transactional lending approach,
instead of relationship banking, so that an open banking
initiative can enhance their ability to obtain information
about firms and provide better loan conditions.
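A hypothetical sketch of how an entrant might consume customer-permissioned data through an open-banking style interface and turn it into a “hard information” feature for a transactional lending model. The endpoint, field names and consent flow below are invented for illustration and are not the official specification of the Brazilian open banking system:

import requests

BASE_URL = "https://api.incumbent-bank.example/open-banking/v1"  # hypothetical host

def fetch_transactions(account_id: str, consent_token: str) -> list[dict]:
    """Retrieve the transaction history the customer consented to share."""
    resp = requests.get(
        f"{BASE_URL}/accounts/{account_id}/transactions",
        headers={"Authorization": f"Bearer {consent_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]          # assumed response envelope

def average_monthly_inflow(transactions: list[dict], months: int = 12) -> float:
    """A crude feature a credit model could use instead of relationship data."""
    inflows = [t["amount"] for t in transactions if t["amount"] > 0]
    return sum(inflows) / months if inflows else 0.0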
375 The basis for the English open banking system was the famous Payment Services
Directive 2 (PSD2), the European Union’s regulation that aims to encourage data shar-
ing on the payment sector (and not credit). The United Kingdom adopted the PSD2’s
pro-competition premise and took it one step further – mandating a system that may
stimulate competition across all financial segments. In 2016, the English competition
agency, the Competition and Markets Authority (CMA), began structuring the
guidelines for an open banking system to compel financial institutions to share their
consumers’ registration and transactional data with each other. The duty to build in-
frastructure, publish operational regulations, draft guidelines and the like was fulfil-
led by the Open Banking Implementation Entity Company (constituted by the CMA
for this specific purpose). The company’s official corporate purpose was to “design the
specifications for the APIs; support regulated third party providers and banks to use Open
Banking standards; create security and messaging standards; manage the Open Banking
Directory; produce guidelines for participants in the Open Banking ecosystem; set out the
process for managing disputes and complaints”. By 2018, the English open banking sys-
tem became operational. See (1) FINANCIAL STABILITY BOARD. Fintech and Market
structure in financial services: market developments and potential financial stability
implications (published on February 14th, 2017), p. 8-9; and (2) CARMONA, Alberto; LOM-
BARDO, Agustín; PASTOR, Rafael; QUIRÓS, Carlota; GARCÍA, Juan; MUÑOZ, David;
MARTÍN, Luis. Competition issues in the area of financial technology (FinTech). European
with express intent to impose it. Currently, jurisdictions are divided
on the subject between (i) jurisdictions that require financial
institutions to share their data via open banking, specifically the
European Union, India, Mexico, South Africa, Thailand and, as of
now, Brazil; (ii) jurisdictions that encourage financial institutions to
share their data via open banking, specifically Hong Kong, Korea and
Singapore; (iii) jurisdictions that leave sharing at the sole discretion of
each institution, specifically the United States, China and Argentina;
and (iv) jurisdictions that are still defining open banking rules,
specifically Australia, Russia, Turkey and Canada376.
Considering the relevance of data to financial competition as
studied in Chapter 2.1, and as endorsed by Professors Sérgio Werlang377
and José Scheinkman378, there is little doubt that open banking holds
high potential to improve the competitive process in the Brazilian
lending market and, consequently, reduce the current interest rates
and lending spreads, much as the CMA expects the United Kingdom’s
open banking system to knock 30% off the English banking spread.
Indeed, it was the competition agency CMA that was responsible
for coordinating the implementation of English open banking – an
intuitive arrangement, considering how pro-competitive the initiative is.
However, in Brazil, it is not Cade that is coordinating the
implementation of open banking, but the Central Bank itself – further
confirming the opinion we defend in this paper about how, in the
Parliament. Policy Department for Economic, Scientific and Quality of Life Policies.
Directorate-General for Internal Policies, 2018, p. 73.
376 BASEL COMMITTEE ON BANKING SUPERVISION. Report on open banking and ap-
plication programming interfaces. Bank for International Settlements, November 2019,
p. 19.
377 Lecture by Sérgio Werlang, ex-executive of Economic Policy in Central Bank of
Brazil, ex-general executive in Itaú Unibanco and professor at Fundação Getúlio Var-
gas, in the event “Fintechs e Blockchain: oportunidades para os mercados financeiros”, or-
ganized by FGV EPGE on November 9th, 2019.
378 Lecture by José Scheinkman, professor at Columbia University, at the event “Com-
petição e Inclusão Financeira”, organized by Instituto ProPague on August 14th, 2019.
Brazilian institutional design, the Central Bank is better positioned
than Cade to promote competition in the credit market.
In practice, the construction of Brazilian open banking
started in 2019, when the Central Bank issued Statement nº 33,455. From
the start, this preliminary set of rules already determined that the
largest financial institutions (namely, those in the S1 and S2 categories) shall be
forced to share their data in the open banking systems as soon as they
are implemented. The Central Bank’s statement seems to address the
concern over whether large banks would have an incentive to willingly
share their precious data on customers (thus renouncing their ex post
monopoly on such data).
After Statement nº 33,455, the Central Bank issued Public Consultation nº 73, of 2019, with an initial proposal for the regulatory structure of open banking. After lengthy discussions with the market on how best to implement and regulate it, the final rules were set out in Joint Resolution nº 1, of May 2020, issued jointly by the Central Bank and the National Monetary Council. The final version of the regulation established that Brazilian open banking would mandate financial institutions to share the following categories of data, as sketched schematically after the list below:
I - data on:
a) service channels related to: 1. own facilities; 2. correspondents in the country; 3. electronic channels; and 4. other channels available to customers;
b) products and services related to: 1. demand deposit accounts; 2. savings deposit accounts; 3. prepaid payment accounts; 4. postpaid payment accounts; 5. credit operations; 6. foreign exchange transactions; 7. accreditation services for payment arrangements; 8. time deposit accounts and other products of an investment nature; 9. insurance; and 10. open supplementary pension;
c) registration data of customers and their representatives; and
d) fees charged to customers related to: 1. demand deposit accounts; 2. savings deposit accounts; 3. prepaid payment accounts; 4. postpaid payment accounts; 5. credit operations; 6. the registration and control account referred to in Resolution nº 3,402, of September 6, 2006; 7. foreign exchange transactions; 8. accreditation services for payment arrangements; 9. time deposit accounts and other products of an investment nature; 10. insurance; and 11. open supplementary pension; and
II - services of:
a) initiation of payment transactions; and
b) submission of credit operation proposals.
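To make the scope of the mandated sharing easier to visualize, the sketch below (in Python) shows one possible way a participating institution could represent the catalog of shareable data and services. All class, field and value names are hypothetical illustrations and do not reproduce the official Brazilian open banking API specification.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class OpenBankingCatalog:
    # Item I(a) - service channels
    service_channels: List[str]
    # Item I(b) - products and services offered
    products_and_services: List[str]
    # Item I(c) - customer and representative registration data (schema only)
    registration_fields: List[str]
    # Item I(d) - fees charged to customers, per product or service
    customer_fees: Dict[str, float]
    # Item II - transactional services the institution must expose
    payment_initiation: bool = True
    credit_proposal_submission: bool = True

catalog = OpenBankingCatalog(
    service_channels=["own facilities", "correspondents", "electronic channels"],
    products_and_services=["demand deposit accounts", "credit operations", "insurance"],
    registration_fields=["name", "tax id", "representatives"],
    customer_fees={"credit operations": 25.0},  # illustrative monthly fee in BRL
)
print(f"{len(catalog.products_and_services)} product categories shared")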
Conclusions
References
Introduction379
379 This article was translated with the collaboration of Eric Worbetz.
380 CARDOSO, Renato César... [et. al.]. Livre-arbítrio: uma abordagem interdisciplinar.
Belo Horizonte: Ed. Artesã, 2017. p. 8. Appeals to multi, inter, trans, pluri or post-dis-
ciplinarity have become commonplace in today’s academic discourses. Discussing the
definitions, hierarchies, limits and possibilities of each of these concepts here would
escape the modest purposes of this preface. However, it must be stressed that, in order to really move in the desired direction, it is not enough to gather specialists from different areas around a work table and wait for legitimate non-disciplinarity to arise, as if by spontaneous generation. What is usually seen in such efforts is that, although very well intentioned, they fail to escape the pitfalls they encounter along the way, which usually result from the specific and inalienable scientific training of each of the participants.
to the whole society, new forms of interpretation and regulation of the
law.
The problem raised by Curtis E.A. Karnow, Jason Millar and Ian Kerr381 concerns the unpredictability of the means adopted by algorithms – which Karnow calls autonomous robots – to perform certain tasks, whether through genetic algorithms, neural networks or other types of feedback cycles that generate unpredictable behavior, even when fed data that would not imply such a result. Their point is that civil liability, whether objective or subjective, has the predictability of damage as a common element, which makes its application to self-learning artificial intelligence difficult.
With this article, we intend to present an alternative (solution) for identifying liability for damages resulting from behaviors emerging from autonomous, self-learning robots. In this research, we will try to understand (i) what artificial intelligence and machine learning consist of; (ii) the possibility of emergent, unpredictable behavior when data are fed in or processed; and (iii) whether damages caused by the machine’s emergent, unpredictable behavior can be submitted to civil liability as framed by doctrine and legislation, proposing, if feasible, mechanisms of accountability for such unpredictable damages.
In the first part of this article, we will outline brief considerations on artificial intelligence and machine learning, considering that the first chapters of this work address them in a more technical and didactic way; this allows us to concentrate on the points most relevant to the unpredictability of damages arising from self-learning mechanisms.
Then, we will analyze the general aspects and classification of civil liability with regard to its triggering event (contractual, non-contractual, pre- and post-contractual) and its requirements (unlawfulness
381 CALO, Ryan; FROOMKIN, A. Michael; KERR, Ian. Robot Law. Northhampton: El-
gar, 2016.
and damage suffered). We will refrain from analyzing the grounds that exclude civil liability, since the objective of the present work is to identify how civil liability is configured for unpredictable facts, not how liability is avoided when they occur.
The third part of the article analyzes the application of civil liability to unforeseeable damages arising from self-learning mechanisms, under two questions: does civil liability apply only to foreseeable damages, or can it also be applied (in mitigated form) to unpredictable ones? If the impossibility of civil liability for unforeseeable damages is confirmed, may we think of a resolvable civil liability, subject to a resolutive condition or to the advent of a term, analogous to resolvable property? This will be the fourth part of this work. The result of this investigation will be presented as the conclusion of the article.
382 KAPLAN, Jerry. Artificial Intelligence: What everyone needs to know. Oxford: Oxford
University Press, 2016. p.1.
383 RUSSELL, Stuart J.; NORVIG, Peter. Artificial Intelligence: A Modern Approach. 3.
ed. New Jersey: Prentice-Hall, 2010.
consensus on the definition of AI as agents capable of performing actions384 or generating information, from external data, without human intervention in the processing.
We follow the concept of “acting humanly”, which is at the heart of the Turing Test, in which “natural language processing, knowledge representation, automated reasoning and machine learning” are measured. To these were added “computer vision to perceive objects”385 and “robotics to manipulate objects”. Of the six disciplines proposed by the authors, we will confine ourselves to artificial intelligence embodied as a robot and to machine learning, considering that the object of this study is liability for damages resulting from the emergent (unpredictable) behavior of these robots.
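To make the agent concept invoked here more concrete, a minimal sketch in Python follows of an agent as a function that maps percept sequences to actions, in the sense of the Russell and Norvig passage quoted in the note below; the vacuum-style rule used is purely illustrative and is not taken from the authors’ text.

from typing import List

class SimpleReflexAgent:
    """Toy agent: stores the percept sequence and maps the latest percept to an action."""
    def __init__(self) -> None:
        self.percept_history: List[str] = []

    def act(self, percept: str) -> str:
        self.percept_history.append(percept)              # the agent receives percepts...
        return "clean" if percept == "dirty" else "move"  # ...and performs actions

agent = SimpleReflexAgent()
for percept in ["clean", "dirty", "clean"]:
    print(percept, "->", agent.act(percept))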
Conceptualizing or saying what a robot is, is no less difficult than doing so for artificial intelligence; here, too, the lack of consensus in definitions persists. What cannot be disputed is that both aim to “act humanly”, if not substituting for humans, at least helping with mechanical and cognitive tasks. The attribution of the term robot to machines with these characteristics comes from a Czech theater play by Karel Čapek that deals with artificial human beings who perform slave labor in a factory. A robot is, therefore, an agent or system without biological life that performs physical386 and mental activities,
384 RUSSELL, Stuart J.; NORVIG, Peter. Artificial Intelligence: A Modern Approach.
3. ed. New Jersey: Prentice-Hall, 2010. p. viii. “The main unifying theme is the idea
of an intelligent agent. We define AI as the study of agents that receive percepts from
the environment and perform actions. Each such agent implements a function that
maps percept sequences to actions, and we cover different ways to represent these
functions, such as reactive agents, real-time planners, and decision-theoretic systems.
We explain the role of learning as extending the reach of the designer into unknown
environments, and we show how that role constrains agent design, favoring explicit
knowledge representation and reasoning. We treat robotics and vision not as inde-
pendently defined problems, but as occurring in the service of achieving goals. We
stress the importance of the task environment in determining the appropriate agent
design.”
385 RUSSELL, Stuart J.; NORVIG, Peter. Artificial Intelligence: A Modern Approach. 3.
ed. New Jersey: Prentice-Hall, 2010.
386 CALO, Ryan; FROOMKIN, A. Michael; KERR, Ian. Robot Law. Northampton: Elgar, 2016.
excluding software here, as it does not have this (mechanical) mobility
characteristic.
Robots are classified as autonomous, semi-autonomous and non-autonomous. Autonomous robots still have little use in everyday life, the emblematic example being the Roomba vacuum cleaner which, without any human intervention, leaves its charging base, performs the cleaning task throughout the house and returns to its base to recharge its battery. Semi-autonomous and non-autonomous robots are those with partial or total human intervention in the execution of the tasks for which they were developed, while the cognitive characteristic remains present in them.
This cognition (learning) takes place through data external to the agents (machines), which absorb and process them and generate results, a process referred to in the literature as machine learning. Among the various forms of machine learning, we will focus on unsupervised self-learning (feedback) because, as the name indicates, the agent is free from human intervention in the search for results – even when the goals are pre-established – but with unpredictable processing. This is what is called “Strong AI”387.
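As an illustration of learning without human intervention in the search for results, the toy sketch below (in Python) groups unlabeled data points with a simple one-dimensional k-means procedure: no one tells the program which groups exist, and the grouping emerges from the data alone. The data and the algorithm are illustrative only and are not the techniques used by any specific robot discussed in this chapter.

def kmeans_1d(points, k=2, iterations=10):
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))  # nearest center
            clusters[nearest].append(p)
        # recompute each center as the mean of its cluster (keep old center if empty)
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

points = [1.0, 1.2, 0.8, 9.7, 10.1, 10.4]
centers, clusters = kmeans_1d(points)
print("centers found:", centers)
print("groups found:", clusters)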
In contrast to this description and to the benefits of non-supervision in machine learning, a concern arises from this machine behavior without supervision of the processing means: that of the damages that may result from this autonomy and the consequent liability for such damages. Are the means adopted by autonomous robots unpredictable, or does the unpredictability lie in the damage? This is what we will try to answer in the next topic. We consider that only after settling the question of unpredictability will we be able to address civil liability for the damages resulting from the emergent behavior of autonomous robots.
388 VENOSA, Silvio de Salvo. Direito Civil: responsabilidade civil. 9. ed. – São Paulo:
Atlas, 2009. p. 1. In principle, any activity that causes a loss generates responsibility
or duty to indemnify. There will sometimes be exclusions that prevent compensation,
as we will see. The term liability is used in any situation in which any person, whether
natural or legal, must bear the consequences of a harmful act, fact or business. Under
this notion, all human activity, therefore, can carry the duty to indemnify. According-
ly, the civil liability statute covers the entire set of principles and rules that govern the
obligation to indemnify.
389 CALO, Ryan; FROOMKIN, A. Michael; KERR, Ian. Robot Law. Northampton: Elgar, 2016. p. 53: “Interesting robots, for purposes of this chapter, are those that are not simply autonomous in the sense of not being under real-time control of a human, but autonomous in the sense that the methods selected by the robots to accomplish the human-generated goal are not predictable by the human.”
quickly, so to speak, to solve the assigned task”. Amid the environment and information to which it is submitted, the robot creates mechanisms, options, strategies and responses; that is, it develops the means necessary to achieve a certain purpose. In this behavior lies its autonomy: the greater its ability to reorganize processing and responses, the greater this autonomy will be, which is also equivalent to its “IQ”. The environment in which the robot interacts is a determinant of its processing capacity (IQ): the more unpredictable the environment, the greater the need for intelligent responses390.
In summary, we consider unpredictability in unsupervised self-learning robots a fundamental characteristic, the very reason for their creation: to give them autonomy in decision making. The Roomba was created for autonomous decision making, albeit for a simple cleaning task. Its decision-making autonomy is the reason why consumers acquire it, sparing themselves a domestic chore. The same applies to the adoption of a robot to assist customers in a pharmacy, which performs all tasks without any human intervention391. If such a machine had predictability
390 Karnow in CALO, Ryan; FROOMKIN, A. Michael; KERR, Ian. Robot Law. Northampton: Elgar, 2016. “The line between robot and environment, though, is in the
abstract as vague as the line between one program and another. It is a matter of conve-
nience and convention where we draw the line between a program and its “external”
constraints, between the program, on the one hand, and the sources of inputs and
destination of output, on the other. Modules make up modules, not just in software but
for recombinant modular robots as well. (One man’s program is another’s subroutine.)
In the purely software context, we have a system made up of a group of algorithms.26
In this way we might distinguish the system from “external” sources of input or direc-
tions for output. But it is also true that modules, neurons, and subroutines interact,
each acting as input and output to others. The larger and more complex the system,
the more likely it is to have the tools to solve a problem and the more likely it is that
one might call it “intelligent.”
391 HARARI, Yuval Noah. Homo Deus: Uma breve história do amanhã. Tradução: Paulo Geiger. São Paulo: Companhia das Letras, 2016. p. 225. And what is valid for doctors is doubly valid for pharmacists. In the United States, a pharmacy operated by a single robot, which serves customers by reading their prescriptions, has opened in San Francisco. When a person goes to that pharmacy, in seconds the robot receives all of the customer’s prescriptions, as well as detailed information about other medications he takes and about his allergies. The robot makes sure that the new prescriptions do not cause any adverse reactions if combined with any other medication
of processing and actions as a characteristic, its service would be
restricted to specific customers and products, excluding a large part of
customers in search of medicines.
Unpredictability, in our understanding, therefore lies in the behavior of the machine (robot) and not in the damage resulting from the autonomy conferred on it. Notwithstanding the established association between the duty of reparation imposed by the civil liability institute and the predictability of damages, we consider an analysis restricted to prior knowledge of the damage to be mistaken. This is because civil liability has its regulatory element in the conduct, the damage being a consequence of wrongful conduct (subjective liability) or inherent in a certain activity (objective liability)392.
We also disagree with the claim that the problems (damages) resulting from autonomous robots do not follow the linearity of social and legal relations between humans, that is, that they occur in an innovative way, unknown to the current legal universe, regardless of their origin. Aquilian civil liability, for example, has its origin in Roman law and persists in our legal order, regardless of the current technological standard.
In other words, the concern with the emergent behavior of autonomous robots must be focused not on the predictability of the damages that may come, but on the form of accountability when they occur. Is existing civil liability sufficient to determine who is responsible
or with any allergies, and finally supplies the customer with the required drug. In its first year of operation, the robotic pharmacist filled 2 million prescriptions without making a single mistake. On average, flesh-and-blood pharmacists make mistakes in 1.7% of prescriptions. In the United States alone, this represents more than 50 million prescription errors annually!
392 JUNIOR, Tercio Sampaio Ferraz. Introdução ao estudo do direito: técnica, decisão,
dominação. 5. ed. – São Paulo: Atlas, 2007. p. 163. When it comes to responsibility,
there is an important notion whose interest in law is growing. There are cases in
which it gains a certain independence from the subject of the obligation in the sense
that the subjective bond does not count. That is, someone takes responsibility not be-
cause he is bound by his actions (subjective responsibility), but because of a risk that
emerges from a situation.
for the duty of reparation? When it comes to continuous learning, should the developer or programmer be held jointly or severally liable? This is what we intend to examine in the next section.
Civil Responsibility393
The Brazilian Civil Code did not deal with civil liability systematically; it nevertheless regulates the duty of reparation in several legal relationships, such as non-contractual liability (arts. 186 and 187), liability of legal entities under public law (art. 43), damages arising from contractual arrears (arts. 393 to 401), obligations (arts. 389 to 401), losses and damages (arts. 402 to 405), the exception of unfulfilled contract
393 The structuring of this chapter follows those presented by professors Everaldo
Augusto Cambler in (CAMBLER, 2015) and Silvio de Salvo Venosa (VENOSA, Civil Law:
general part, 2009) due to the didactics and language adopted in the works.
394 CAMBLER, Everaldo Augusto. Responsabilidade Civil na Incorporação Imo-
biliária. 2. ed. – São Paulo: Editora Revista dos Tribunais, 2014. p. 97-100. “So essential to life in society is the principle of responsibility that we find it in the legal order of all politically organized peoples, imposing on those who cause harm to others the duty to repair. Whenever a value recognized by the law (a legal good) is harmed, we resort to the legal order, seeking in it a sufficiently protective mechanism for reacting against the harmful fact. Having practiced an act at odds with the duties inherent to him, the agent is reached by the social reaction against the damage caused.”
(arts. 474 to 477), indemnity (arts. 927 to 943 and 944 to 954), and more. As exceptions to the rule, there are also cases of a duty to indemnify that are not properly linked to damage or violation of the law, likewise provided for in the Civil Code, such as the requirement of the conventional penalty (art. 416), the assumption of risk by the insurer upon policy issuance (art. 773), repetition of undue payment (art. 940), the sanction on the heir who withholds property (CC, art. 1,992) and the fine imposed on the developer who breaches the law by not entering into a contract with buyers (Law 4,591/64, art. 35, § 5), among others395.
The determination of civil liability is conditioned on the losses and damages caused to or suffered by the injured party. It is therefore necessary to assess its origin or, on closer analysis, its classification as to the triggering event: contractual, non-contractual, pre- and post-contractual, the latter two not addressed in this article, since they lie beyond the limits and purposes of this investigation.
Contractual civil liability stems from rules and sanctions pre-established in an agreement, as the name suggests. There is, therefore, a direct legal bond which, once violated, generates the duty of reparation to the injured party. Characteristic of this liability are non-compliance and irregularities in the manner or time of compliance. As it is a contractual obligation, the contract must have its objective requirements present, that is, it must exist, be valid, have been breached and have caused losses. If the contract exists and is valid, the parties must honor it (obligation) and, once it is breached and the other contracting party injured, a new duty is born, that is, a new bond between the parties (reparation)396. The verification of damage
397 VENOSA, Silvio de Salvo. Direito Civil: parte geral. 9. ed. – São Paulo: Atlas, 2009.
p. 523-524.
398 CAMBLER, Everaldo Augusto. Responsabilidade Civil na Incorporação Imo-
biliária. 2. ed. – São Paulo: Editora Revista dos Tribunais, 2014. p. 121-122.
399 VENOSA, Silvio de Salvo. Direito Civil: responsabilidade civil. 12. ed. – São Paulo: Atlas, 2012. p. 26. Civil guilt in the broad sense encompasses not only the intentional act or conduct, the dolus (delict, in its Roman semantic and historical origin), but also acts or conduct marked by negligence, imprudence or malpractice, that is, guilt in the strict sense (quasi-delict). This distinction between intent and guilt was known in Roman law and was thus maintained in the French Code and in many other laws, as delicts and quasi-delicts. This distinction, in modern times, is no longer important in the field of liability. For indemnity purposes, what matters is to check whether the agent acted with civil guilt in the broad sense, since, as a rule, the intensity of the intent or guilt should not scale the amount of the indemnity, although the present Code contains a provision in this sense (art. 944, sole paragraph). The indemnity must be measured by the actual loss.
The nature of the legal relationship adds two dimensions to civil liability. The subjective dimension, the rule in our law, analyzes the damage based on the guilt of the causing agent, upon whom the duty to indemnify then falls. Objective civil liability stems from the duty to indemnify as stated above, dispensing, however, with the verification of the agent’s guilt and of the causal link. It arises from a qualified risk, from the exploitation of a specific activity that can potentially cause damage400, even if certain precautions are taken401.
In general, these are the considerations about civil liability that we deem relevant to the proposal of this article concerning the possibility of liability for damages resulting from the behavior of autonomous, self-learning robots (machine learning). We ask readers (and professors) to forgive such a synthesis of a topic so vast and dear to the science of law402. If civil liability were explored here with the rigor and fidelity due to it, we would not be able to approach artificial intelligence as intended.
Civil liability as it stands in our legal doctrine and order will not find great difficulty in holding agents liable for the damages
400 Civil Code. Article 927 [...] Single paragraph. There will be an obligation to repair
the damage, regardless of fault, in the cases specified by law, or when the activity
normally carried out by the author of the damage implies, by its nature, a risk to the
rights of others.
401 CAMBLER, Everaldo Augusto. Responsabilidade Civil na Incorporação Imo-
biliária. 2. ed. – São Paulo: Editora Revista dos Tribunais, 2014. p. 155-156.
402 VENOSA, Silvio de Salvo. Direito Civil: responsabilidade civil. 12. ed. – São Paulo: Atlas, 2012. p. vii-ix. Professor Venosa dedicates 10 chapters to the study of civil liability. In addition to the historical aspects characteristic of his works and the concepts, classification, characteristics and exclusions, the following types of civil liability are addressed: liability for the acts of others (direct and indirect: of parents for minor children, of tutors and curators, of the employer, of hotel owners and the like, of educational establishments, of those who benefit from the proceeds of crime, of legal entities under public and private law, of the magistrate...).
caused by autonomous robots. The same cannot be said when such damage results from emergent behavior, that is, from the machine acting within its autonomy of learning or self-learning.
Despite the studies of civil liability for autonomous robots such as those explored in the first chapter, the difficulty of accountability, in our opinion, does not lie in the unpredictability of the damage or of the emergent behavior of the robot, but in determining who will be held responsible for such damage. The causal relationship may be affected by the introduction of robots into society, but it is not changed. The damage generated will affect people or things, but will not alter, at least in the current state of the art, the patrimonial, moral, administrative or criminal spheres. We venture to affirm that unpredictable damage does not demand a new form of liability, but a restructuring of the systems to which civil liability is an adjunct.
Liability for such damages will be hindered by the identification of the agent, since the legal relations themselves are still developing and the exploitation of the technology does not conform to the current mode of production and social relations. If the robot causes damage due to its inherent processing autonomy, even though it is a good and not a person, there is no dispute that it is the cause. But if it is the cause of the damage and yet is not a center for the imputation of rights and duties, who is responsible for such damages? Since it is a marketed product, would the manufacturer or developer be responsible? Such reasoning does not seem prudent to us, since self-learning and the consequent decision making are directly linked to the environment to which the robot is exposed, to the people it interacts with and even to other technologies present there.
We believe that subjective or objective civil liability, contractual or non-contractual, and the identification of guilt or intent are sufficient for the damages resulting from this emergent behavior, demanding, however, a displacement of the moment at which responsibility is verified, which we venture to call Resoluble Civil Liability.
Resoluble Civil Liability
403 Civil Code. CHAPTER III. Condition, Term and Charge. Art. 127. If the condition is resolutive, the legal transaction will remain in force as long as the condition is not fulfilled, and the right established by it may be exercised from the moment the transaction is concluded.
404 VENOSA, Silvio de Salvo. Direito Civil: direitos reais. 12. ed. – São Paulo: Atlas,
2012. p. 392. Note that the resolvable owner exercises the powers of full owner: use,
enjoy and even dispose of the thing. Unavailability will only occur if the constitutive
act contains an inalienability clause. Under such aegis, even if the thing is alienated, the fulfillment of the condition or the advent of the term, whose seed was sown at the origin of this modality of ownership, authorizes the claim by the new owner in the exercise of his right to pursue the thing. Thus, third parties who acquire property subject to a term and resolutive condition assume the risk of losing it. As the condition may fail, this possibility of loss is, as can be seen, not inexorable.
405 BRAZIL. Law no. 9,514, of November 20, 1997. Available at http://www.planalto.
gov.br/ccivil_03/Leis/l9514.htm. Accessed on: 30 June 2018. CHAPTER II. On the Fiduciary Alienation of Immovable Property. Art. 22. Fiduciary alienation regulated by this Law is the legal transaction by which the debtor, or fiduciant, for the purpose of guarantee, contracts the transfer to the creditor, or fiduciary, of the resolvable ownership of immovable property.
406 BRAZIL. Law no. 9,514, of November 20, 1997. Available at http://www.planalto.
gov.br/ccivil_03/Leis/l9514.htm. Accessed on: 30 June 2018. Art. 26. If the debt falls due and is not paid, in whole or in part, and the fiduciant is in default, ownership of the property will be consolidated, under the terms of this article, in the name of the fiduciary. § 1 For the purposes of this article, the fiduciant, or his legal representative or duly appointed attorney, shall be summoned, at the fiduciary’s request, by the officer of the competent Property Registry, to satisfy, within fifteen
will only have their ownership consolidated once payment of the debt has been completed407; however, they do not escape the responsibilities arising from damages caused in the exercise of such possession.
Another characteristic of this type of property is the capacity that both the fiduciary and the fiduciant have to dispose of the thing, as long as there is no “inalienability clause” and the agreed conditions, especially the obligations, are maintained408.
Civil liability could thus be made subject to a resolutive condition, or even to the setting of an initial term, a final term, or both. This proposition would not interfere with the verification of its configuring requirements or assumptions, much less with the analysis of guilt, intent, risk or exclusions – that is, with anything that doctrine and legislation have built to date. The suggestion is that the resolutive condition become an element to be verified in relation to the beneficiary of the technology, in this case the autonomous robot.
days, the overdue installments and those falling due up to the payment date, conventional interest, penalties and other contractual charges, legal charges, including taxes, condominium contributions attributable to the property, in addition to collection and summons expenses. Art. 27. Once ownership of the property has been consolidated in its name, the fiduciary, within thirty days from the registration date mentioned in § 7 of the previous article, shall promote a public auction for the sale of the property. [...] § 8 The fiduciant is responsible for the payment of taxes, fees, condominium contributions and any other charges levied or to be levied on the property, whose ownership has been transferred to the fiduciary, under the terms of this article, until the date on which the fiduciary is placed in possession.
407 __________. Law no. 10,406, of January 10, 2002. Available at http://www.planalto.gov.br/ccivil_03/Leis/2002/l10406.htm. Accessed on: 30 June 2018. Art. 1,359. Once ownership has been resolved by the fulfillment of the condition or by the advent of the term, the real rights granted in the meantime are also resolved, and the owner, in whose favor the resolution operates, may claim the thing from whoever possesses or holds it. Art. 1,360. If ownership is resolved by another supervening cause, the possessor who acquired it by title prior to its resolution will be considered the perfect owner, leaving to the person for whose benefit the resolution occurred an action against the one whose ownership was resolved, to recover the thing itself or its value.
408 VENOSA, Silvio de Salvo. Direito Civil: direitos reais. 9. ed. – São Paulo: Atlas, 2012.
p. 392.
Consumer relations, in our opinion, remain intact. The production and commercialization of autonomous robots follow the same rules, with some adjustments regarding guarantee, validity, return and so on. However, once the technology has been applied to the full extent of social life, the robot moves beyond the main characteristics it had before entering circulation. Learning will take place precisely in these social relationships, through countless daily interactions that will directly shape self-learning. Let us look at an example.
A robot marketed to assist and accompany the elderly, analogous to what we commonly call caregivers, has a series of interactions, even if limited, in the residence or place where the person needs care. A gratuitous or onerous assignment of this robot to another family, which needs the same assistance for an elderly person, will imply a sudden change in the environment; that is, it will greatly interfere with learning. This interference generates an emergent behavior, and the robot alters a given medication, weakening the clinical condition of the elderly person and almost causing his death. In addition to the despair and distress caused, the costs of maintaining the assistance increase, including the return of the robot and the hiring of a new (human) caregiver. Will liability for the damages be attributed to the previous owner? Should strict liability be invoked, holding the robot’s vendor, builder or programmer to account?
We believe that Resoluble Civil Liability is currently the most appropriate response to this situation. Exceptions to the rule will be assessed case by case, provided that the liability’s resolutive condition has run its course.
An initial and a final term could be implemented as a resolutive condition within consumer relations, at least initially. The manufacturer would be subject to liability from the time the robot is placed on the market and for a certain period of use, provided that there are rules on changing the environments to which the robots are exposed and on the onerous or gratuitous assignment of the robot to third parties.
Conclusion
Abstract
Introduction
In recent years, artificial intelligence has become a panacea. Like any technology initially misunderstood by the general public, it carries the hope of solving the most varied problems, even when it would be unnecessary if a minimum of institutional organization existed – as in the case of the Judiciary in Brazil, where, instead of producing decisions in a simplified form that facilitates the interpretation of the information they contain, artificial intelligence is expected to do this job in place of legal professionals.
Despite the recent growth of interest in this technology, the dream of artificially reproducing human thought – preferably automatically and quickly – is an old one. Artificial intelligence was thus developed alongside the first computers, and those solutions were important for the development of computer science, although limited to the universe of the machine on which the software ran. However, three factors have led to an increase in the use of artificial intelligence: the expansion of data storage, the capacity for data processing and the connection of computers to the Internet, which has enabled cloud computing. These new features have made artificial intelligence more accurate and more present in people’s daily lives, including in medicine.
The use of artificial intelligence is not limited to the technical possibility of deploying this technology. It involves legal aspects that are essential for defining the limits of what is allowed and what is prohibited, given the potential problems arising from its use, as well as for the analysis of what is legal and what is illegal in order to assign liability in case of damages caused to persons. For methodological reasons, the goal of this paper is to study the legal aspects of the use of artificial intelligence in medicine. This is an area of great challenges for doctors and other health professionals, owing to the restructuring of the profession itself under the impact of these technologies and to the readjustment of the doctor-patient relationship mediated by artificial intelligence in diagnostics, with potential risks of privacy invasion.
1. Artificial Intelligence In Medicine
Medicine is a science and an art. A science, because research has been done to understand diseases. An art, because this knowledge is not an end in itself but is aimed at restoring health and improving quality of life and well-being. The development of medicine through clinical research is fundamental, but some past experiences became a delicate, perhaps traumatic, subject: during World War II, thousands of prisoners were subjected to unauthorized experiments that exposed them to high risks, resulting in their death in almost all cases. For this reason, the 1947 Nuremberg Trials sentenced the doctors who participated in these unethical experiments, and a decalogue was established of principles to be observed in research with human beings, which later unfolded into the principles of bioethics, among them autonomy and non-maleficence. In the 1960s, the World Medical Association launched the Declaration of Helsinki, which regulates research with human beings through soft law. This legal text has been updated several times, the last time in 2013 in the city of Fortaleza, Brazil.409
As a result of all this research, the benefits of a balanced diet and physical activity and the effects of tobacco and alcohol came to be understood. The etiology of many diseases was discovered. Medicines, clinical treatments, surgical techniques and transplants were developed. A huge field was opened for the use of technologies in equipment, instruments, prostheses, orthoses and other material objects inserted into the human body.
409 WMA - The World Medical Association-WMA Declaration of Helsinki - Ethical Prin-
ciples for Medical Research Involving Human Subjects [Internet]. The World Medical
Association. [ cited 2020Jan30]. Available from: https://www.wma.net/policies-post/
wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-hu-
man-subjects/
Medical equipment is a practical application of the laws of physics: acoustics in the stethoscope, hydrostatics in pressure gauges, electricity and electromagnetism in cardiac devices, and radioactivity in radiographic examinations and cancer treatment. In the 1950s, electronics arose with the invention of the transistor and the construction of the first computers, as did the basic concepts related to artificial intelligence. With the development of microelectronics, it became possible to reduce the price and size of computers and of information storage media, such as hard drives.
The use of electronic and computer equipment in the medical field has grown exponentially, because software is capable of simulating various pieces of equipment and processing data collected by health professionals, from a simple blood test to far more complex examinations. An important aspect of the use of information technology in medicine is the storage of data about the patient. Health records are documents in which professionals enter all information concerning the patient’s history, diagnosis and the procedures performed. Traditionally, these documents were produced on paper. Inevitably, however, they have become electronic, both for practicality and for the ease and reduced cost of storage.
The Internet, whose commercial opening took place in the 1990s, represents another paradigmatic leap for medicine. First, the circulation of knowledge through the publication of papers on research with human beings carried out in different parts of the world was transformed by the change in medical journals: instead of being printed on paper, they became virtual and easily available on the Internet. It is well known that knowledge in the medical field – as in any other area – has grown exponentially over short time intervals, making it impossible to keep fully up to date, even if a professional dedicated himself exclusively to reading everything published worldwide. In addition, cloud computing on the Internet has allowed the formation of large databases – big data.
With the expansion of the possibilities for using artificial intelligence, medicine has become a very attractive area for its application. Considering the capacity for artificial reproduction of the human faculties of interpreting, analyzing and judging information, the challenge arises of performing medical acts with equipment that can make decisions which traditionally fall under the responsibility of a professional. Furthermore, artificial intelligence is a tool perfectly suited to clinical research, because knowledge in medicine is essentially empirical, based on statistics and developed through tests that confirm or reject the hypothesis formulated by the researcher. Artificial intelligence can assist professionals in deciding which treatment should be adopted for each patient.
Interestingly, artificial intelligence in medicine has been widely used by the population for years, albeit inappropriately in many cases. When a person notices something unusual about himself, it is very common to search Google for the symptoms presented. The results are displayed, the person reads various pages and reaches a preliminary conclusion. He then goes to the doctor, who often cannot be fully informed about every new treatment and medicine. This situation has unbalanced the power relationship between doctor and patient, giving rise to the idea of a “Doctor Google”. This habit, however, has become an object of interest to data scientists and medical researchers. Investments in prediction by artificial intelligence were made, and one of the main products developed is precisely a piece of software with an artificial intelligence algorithm capable of predicting the date of a person’s death.410
410 Cuthbertson A. This Google AI can predict when you’ll die [Internet]. The In-
dependent. Independent Digital News and Media; 2018 [cited 2020Jan30]. Available
from: https://www.independent.co.uk/life-style/gadgets-and-tech/news/google-ai-pre-
dict-when-die-death-date-medical-brain-deepmind-a8405826.html
In addition, the use of information technology has led to the improvement of equipment, especially in diagnostic imaging, as in the case of computed tomography and magnetic resonance imaging. Diagnostic imaging requires professional interpretation: observation is made not with the naked eye or by camera, but rather by building two-dimensional monochrome images of a three-dimensional structure. This may prevent the identification of the problem or, conversely, generate a “false positive”. The use of artificial intelligence in conducting exams would make them more accurate compared with those in which professionals play a greater role in interpreting the diagnostic images.411
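A brief numerical sketch (in Python) of the “false positive” problem mentioned above: whoever reads the exam, human or algorithm, effectively applies a decision threshold to a degree of suspicion, and moving that threshold trades missed findings for false alarms. The scores, labels and thresholds below are invented purely for illustration.

# Each exam: (suspicion score assigned by the reader, whether disease is truly present).
exams = [(0.95, True), (0.70, True), (0.40, True),
         (0.60, False), (0.30, False), (0.10, False)]

def count_errors(threshold):
    false_positives = sum(1 for score, sick in exams if score >= threshold and not sick)
    false_negatives = sum(1 for score, sick in exams if score < threshold and sick)
    return false_positives, false_negatives

for threshold in (0.5, 0.75):
    fp, fn = count_errors(threshold)
    print(f"threshold {threshold}: {fp} false positive(s), {fn} false negative(s)")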
Medicine and technology therefore go together. Owing to the commercialization of this activity, there is inevitably a market appeal for technologies that contribute to restoring health with greater speed and accuracy, since they promise greater efficiency in doctors’ work and, above all, risk reduction. Recently, the rise of “Digital Health”, “e-Health” and even “m-Health” (mobile Health) led the World Health Organization to launch in 2019 the text entitled “WHO Guideline: recommendations on digital interventions for health system strengthening”, prepared in 2018 following a resolution of its General Assembly, because careful analyses of the benefits and risks of using technologies in health care – including telemedicine, the support of digital tools in medical decision-making, and the storage and use of patient data records – had not been available until then.412
411 Shetty S, M.s. Using AI to improve breast cancer screening [Internet]. Google.
Google; 2020 [cited 2020Jan30]. Available from: https://blog.google/technology/health/
improving-breast-cancer-screening/
412 WHO Guideline: recommendations on digital interventions for health system
strengthening [Internet]. World Health Organization. World Health Organization;
2019 [cited 2020Jan30]. Available from: https://www.who.int/reproductivehealth/pub-
lications/digital-interventions-health-system-strengthening/en/
2. Legal Issues Related To The Use Of Artificial Intelligence
In Medicine
The legal issues related to the use of artificial intelligence, in medicine as in any other area, concern the lawfulness or otherwise of the use of this technology and what its legal consequences should be in tort. For now, there are two main subjects related to artificial intelligence in medicine. The first refers to diagnostic imaging by artificial intelligence and the second to privacy violations generated by data analysis enhanced by artificial intelligence.
2.1. Diagnostic Imaging With Artificial Intelligence
The first situation consists in the use of artificial intelligence in diagnostic imaging. In this field, it is technology that has allowed doctors to prepare diagnostic results over the years, notwithstanding the importance of professional intuition.
The duty not to cause damage to people’s lives and physical integrity is a fundamental principle of law, both in the past and nowadays. Under Criminal Law, the agent is punished in cases of homicide and personal injury. In Private Law, personality rights are guaranteed and, more concretely, there is a general prohibition on causing injury to others, whose violation entails civil liability for the repair of the damages caused, both in Civil Codes and in consumer protection laws. However, as health equipment may actually injure the life and physical integrity of the person, there is an additional control conducted by the State before the equipment is placed on the market. In this sense, in Brazil, article 25 of Law no. 6,360, of 1976, establishes
that “devices, instruments and accessories used in medicine, dentistry and
related activities, as well as those of physical education, beautification or
aesthetic correction, shall only be manufactured, or imported, for delivery
for consumption and exposure to sale, after the Ministry of Health has
decided whether or not registration is required”. Currently, this control is carried out by the National Health Surveillance Agency (ANVISA), under Law no. 9,782, of January 26th, 1999.
As the possibility that diagnostic imaging equipment, with or without artificial intelligence software, will cause direct and immediate death or injury to the patient is highly remote, the hypothesis that raises doubts among jurists is that the artificial intelligence software on the equipment makes a wrong analysis, resulting in a mistaken diagnosis. In Private Law, this situation is addressed by the loss of a chance doctrine, that is, the loss of the unique opportunity to have avoided the occurrence of the damage or to have reduced it.
In this case, the equipment manufacturer is the party primarily liable for the payment of damages. It is a strict liability, arising from the mere placement of the product on the market. This means that the manufacturer must not do so until the risk of mistaken diagnoses is shown to be very low, because good faith in the relationship between manufacturer and users must prevail. This is what follows from art. 931 of the Brazilian Civil Code, according to which “except in other cases provided for by special laws, individual entrepreneurs and companies are liable, regardless of fault, for damages caused by products placed into circulation”.
From the point of view of the patient, the Brazilian Consumer Protection Code (Law n. 8,078/1990) reinforces the idea that products or services may cause damage to consumers. Article 6 establishes the basic consumer rights, the first of which is Article 6, I: “The protection of life, health and safety against the risks caused by practices in the supply of products and services considered dangerous or harmful”. And, in the event of damage caused by products placed on the market, Article 12 of the Consumer Protection Code establishes the manufacturer’s strict liability.
A relevant issue is the use of the development risk doctrine, according to which the manufacturer could exempt himself from liability if it were proven that he was unaware of adverse events regarding the use of this equipment. In Brazil, this doctrine is not accepted, nor is there any legal provision for it.
Another party responsible for repairing damages in the event of a mistaken diagnosis is the health establishment that uses the artificial intelligence equipment to perform diagnostic imaging, because it assumes the risk of the decision to use it in this activity. From the patient’s point of view, even if the examination is performed by equipment, the mistaken diagnosis legally qualifies as damage. It is a hypothesis of strict liability of the service supplier, under Article 14 of the Consumer Protection Code.
The patient may file an action for damages against both the manufacturer of the equipment and the health establishment that used it in the supply of services, under joint and several liability, in the case of a mistaken diagnosis made by the equipment. However, if the patient files an action only against the health establishment, the latter may subsequently demand compensation from the manufacturer based on Article 933 of the Brazilian Civil Code. Nevertheless, third-party notice to the manufacturer is not possible, under the terms of Article 88 of the Consumer Protection Code, if the action is filed before a Small Claims Court.
On the other hand, if the manufacturer alone is sued for the payment of damages, it may file an action for reimbursement against the health establishment. In that case, however, the liability is fault-based, governed by Article 186 of the Brazilian Civil Code, which places on the manufacturer the burden of proving the fault of the service supplier that may have used the equipment wrongly, resulting in the misdiagnosis.
In practice, this type of civil liability will certainly be much less frequent than with traditional methods of performing diagnostic imaging, because the accuracy of the equipment will be greater than that of a professional. Although human intelligence is far superior to artificial intelligence, the latter has a comparative advantage: the machine does not suffer from fatigue, which may affect a professional’s analysis.
Finally, the professional responsible for the equipment is ultimately liable for the damage caused to people by a wrong diagnosis, even one made with artificial intelligence, because, under Article 4, VII, of Law no. 12,842, of July 10th, 2013, which regulates the practice of medicine in Brazil, “endoscopic, imaging, invasive diagnostic procedures and pathological examinations” are activities exclusive to physicians. Even if a professional is not held civilly liable for the damages caused to patients, there is no way to avoid professional liability before the medical association.
2.2. Privacy Violations
The second hypothesis of damages caused by the use of
artificial intelligence in medicine is related to privacy.
The idea of recognizing a right to privacy arose as a way of preventing information about persons from being obtained without consent and used by third parties to create embarrassing situations without just reason. Formulated as the “right to be let alone”, the right to privacy was structured to protect against invasions by journalists, photographers and the State itself. With the assembly of databases, the right to privacy was also invoked so that only authorized persons could have access to certain information, as in the case of consulting credit restriction registries on the market. On the other hand, the intensive use of social networks has prompted a revision of the concept of privacy: in addition to the traditional hypotheses, it is now people themselves who voluntarily disclose everything about their private lives, and the fact that this disclosure is made on the Internet greatly facilitates the process.
The gathering of data in research with human beings can give rise to privacy violations. Thus, every study must avoid the generation of risks to life and health as much as possible, and also ensure that the data will be used exclusively for the stated purpose, without identification of the participants. As provided in the Declaration of Helsinki and, in the Brazilian case, in Resolution CNS n. 446/2012, such guarantees must be given by the researchers to the participants through an Informed Consent Form, by which the object of the research is explained, the risks involved are clarified, the freedom to withdraw from the research at any time is guaranteed, as well as the payment of compensation in case of damage to personality rights. The study is overseen by a Research Ethics Committee. Breach of these rules is considered unethical, and publication of the resulting study is prohibited in any scientific journal.
With the greater speed of data processing and the expansion of data storage capacity, especially through cloud computing, a large amount of information can be gathered and stored for use in one or more studies, or in biobanks formed from materials extracted from the bodies of living or dead people, with or without genetic material. Depending on the research, analysis by traditional methods used to be impossible, given the difficulty of processing such large volumes of data and establishing their relationships in a short time. With artificial intelligence, however, these analyses can be carried out more quickly and without human fatigue. Research on medicines, including antibiotics and vaccines, has been improved, and solutions will be offered faster than with traditional methods.
When data analysis is carried out using artificial intelligence under the supervision of a Research Ethics Committee, there is control over the potential risks arising from the use of this technology; but valuable information can also be obtained indirectly, as happens with social networks, where people voluntarily offer information to interested parties. For example, with the popularization of mail-in DNA tests used to prepare genetic maps of ancestral origins and of the probability of manifestation of genetic diseases, huge databases and biobanks have been created, without ethical control, to be analyzed by artificial intelligence.
Another aspect related to artificial intelligence in research with human beings concerns health records. Although the record belongs to the patient, it is the professional who enters information about the person, who does not know what has been registered, the meaning and relevance of these data, or the impact they can have on his own life. In any healthcare service, the patient’s consent is not required for writing in a health record.
When health records were written on paper forms, gathering this information was practically impossible, because the forms were kept in archives by professionals or hospitals. Currently, the use of information and communication technologies in the health area has created the so-called electronic health record (EHR). In digital format, this information can easily be gathered from several isolated health records, without the patient being aware of it – despite the right to privacy and professional secrecy – forming a health dossier of every time the person was sick, which medicines were used and how the body responded in each situation.
Two types of analysis can be made with the use of artificial intelligence in these cases. The first is statistical, aimed at understanding a specific disease in a population over a certain period or even as a time series. The second, more invasive of privacy although apparently concerned with isolated cases, is one in which it becomes possible to predict when a person will die. While a Research Ethics Committee may control or prohibit clinical research of this kind through its oversight of researchers, no equivalent guarantee exists with regard to medical records.
Considering the economic dimension of this information, one cannot be naive enough to believe that such data will never be used. One need only recall the scandal over data collection from Facebook, in which user data was improperly used to profile people. The risk of misuse of information about the probable moment of a person’s death may inevitably lead to a discreet cost-benefit criterion being taken into account, in violation of the principle of human dignity: refusal of treatment, or an incentive to euthanasia, justified by the supposedly unnecessary cost of treatment already known to be merely palliative whenever the person has only a short time to live. Information of this type is relevant to those who must ensure the economic balance of the health system; medical treatment would then be provided only to those with the financial resources to pay for it in full.
Brazilian Law guarantees the protection of privacy. The
Federal Constitution affirms in Article 5, X, that “intimacy, private life,
honor and people’s image are inviolable, and the right to compensation
for material or moral damages resulting from their violation is
guaranteed”. Likewise, Article 21 of the Brazilian Civil Code establishes
that “The private life of the natural person is inviolable, and the judge,
at the request of the interested party, will adopt the necessary measures
to prevent or terminate an act contrary to this rule”. As can be seen, the traditional rules were sufficient for the protection of privacy in a world without the Internet, in which it was possible to identify the person who caused the injury and force him to cease his invasive conduct.
The World Medical Association, incidentally, has shared this concern about the misuse of patient data in recent years. It has issued the “Declaration of Taipei on Ethical Considerations regarding Health Databases and Biobanks”413, whose latest version dates from October 2016. This normative text contains provisions ensuring confidentiality in the use of data and affirming the right of the person to authorize the storage of his biological
413 WMA - The World Medical Association. WMA Declaration of Taipei on Ethical
Considerations regarding Health Databases and Biobanks [Internet]. The World Medical
Association. [cited 2020 Jan 30]. Available from: https://www.wma.net/policies-post/wma-declaration-of-taipei-on-ethical-considerations-regarding-health-databases-and-biobanks/
material for scientific purposes, as well as to revoke that consent
without any justification. The Declaration also calls on governments
to take measures that place people’s interests above those of
stakeholders, and asks all professionals to ensure that national laws
are not less protective than the Declaration itself in terms of the
dignity, autonomy and privacy of persons.
In the same sense, specific Brazilian regulations on the
protection of medical data include Resolution CNS no. 441, of May 12th,
2011, on biobanks and biorepositories. In the field of health records,
there are Resolution CFM no. 1,821/2007, as amended by Resolution CFM
no. 2,218/2018, as well as Law no. 13,787, of December 27th, 2018, which
establishes rules for the digitization and use of computerized systems
for the safekeeping, storage and handling of the patient’s health
record. In terms of privacy guarantees, the main rule is Article 4 of
this Law, according to which the storage of digital documents shall
protect them from unauthorized access, use, alteration, reproduction and
destruction. This level of protection is very low for today’s world.
Evidently, the traditional guarantees of the protection of the
person’s privacy are insufficient for the reality of the 21st century,
marked by big data and artificial intelligence software. In recent years,
countries have sought to legislate on data protection in a more
comprehensive and detailed manner. In Europe, the General Data
Protection Regulation - GDPR (EU 2016/679) was issued in 2016 and
came into force on May 25th, 2018. European Union member states
have been updating their legislation accordingly, as Portugal did
with Law no. 58/2019 and France with the Law of June 20th, 2018. In
the Americas, it is worth highlighting the Peruvian Law no. 29,733, of July
3rd, 2011, and, more recently, in Brazil, Law no. 13,709, of August 14th,
2018, named the General Data Protection Law - LGPD by Law no. 13,853,
of July 8th, 2019.
Under Articles 7 and 11 of the LGPD, health data are treated
as sensitive data. More specifically, Article 11 establishes that such
data may only be processed with the person’s consent (item I), but
also that they may be processed without consent when the data
controller must comply with a legal or regulatory obligation (item II, a).
Because of its vagueness, this provision may not be adequate as a
legal framework for the protection of health-related data. Another
hypothesis is processing for the execution of public policies (item II, b);
since the nature of these policies is not specified, broad use may put
people at risk. Likewise, item II, c, which allows the processing of
data by research bodies, ensuring privacy whenever possible, and item
II, f, which allows processing for the “protection of health, exclusively,
in a procedure performed by health professionals, health services
or health authority”, are evidently insufficient to protect persons. On
this point, the LGPD is weaker than the regulation drawn up by the
Brazilian National Health Council, as well as the Declarations of
Helsinki and Taipei, which require the person’s consent where the
LGPD waives it. Because the LGPD is superficial here, such data may
be processed by artificial intelligence software, which can further
increase violations of the right to privacy.
On the other hand, to mitigate these risks, Law no. 13,853
later inserted rules on the use of sensitive data, now found in the
current paragraphs 4 and 5 of Article 11 of the LGPD.414 Thus, the Brazilian
414 § 4º Communication or shared use between controllers of sensitive personal data related
to health is prohibited in order to obtain an economic advantage, except in the cases related
to the provision of health services, pharmaceutical assistance and health assistance, provided
that paragraph 5 of this Article is complied with, including auxiliary services for
diagnosis and therapy, for the benefit of the data subjects’ interests, and to allow: (Wording
given by Law n. 13,853, of 2019)
I - data portability when requested by the holder; or (Included by Law n. 13,853, of 2019)
II - financial and administrative transactions resulting from the use and provision of the
services referred to in this paragraph. (Included by Law n. 13,853, of 2019)
Paragraph 5 Operators of private health care plans are prohibited from processing health
data for the practice of selecting risks when hiring any modality, as well as when hiring and
excluding beneficiaries.
Legislative Power recognized the enormous risk arising from the sharing
of sensitive patient data, even prohibiting decisions that result in the
refusal of medical treatment to anyone for economic reasons.
Anyhow, although Article 52 of the LGPD provides for
sanctions, including fines and even the destruction of the database,
these measures may be ineffective in remedying the harm caused
once a person’s privacy has been violated.
Conclusion
Artificial intelligence is a powerful tool for improving the
living conditions of human beings, because it enhances the quality of
a wide range of activities, from the simplest to the most complex,
including the development of more precise equipment and the
evolution of scientific knowledge, as in the medical field through
clinical research.
However, like any technology, its use has positive and negative
aspects, and the law must encourage the production of benefits while
prohibiting conduct aimed at producing harm. The intention is not to
ban the use of software with artificial intelligence algorithms in
medicine, but it is necessary to establish the limits between the legal
and the illegal through the current rules. The precautionary principle
must be remembered: not everything that can be done should be done.
Technologies must be at the service of mankind, which has dignity,
and not the other way around.
Even though the existing legal rules are sufficient for awarding
damages in cases involving diagnostic imaging, the legislation is
weak when it comes to protecting the person’s privacy. A greater
danger lies in the automation of this type of decision based on the
processing of sensitive data collected without previous consent.
Considering that artificial intelligence is not a complete intelligence
like that of the human being, who is endowed with multiple
intelligences, one must reflect on how decision-making in health
matters by software equipped with this resource can be excessively
risky for persons and for society, resulting, in certain cases, in
discrimination against human beings. Just consider the free navigation
applications that use artificial intelligence: a route is suggested without
taking into account whether the driver is more or less able to perform
difficult manoeuvres or sudden changes, or whether the suggested
route is more or less dangerous. Or consider the call center services
provided by robots, in which the only option is to communicate with
the software, without human support. Despite the development of
ever more precise artificial intelligence systems, there remains the
feeling that the system is unable to understand the nuances of human
communication.
References
Introduction
415 Mozetic’s objection, which defends the legal impossibility of using machine-made
judgment algorithms, states: “All this derives from a procedural perspective of the judicial
decision understood by artificial intelligence itself and the Law, in which the legal argument
is understood both as an element of justification of the decision and as
fantastic results in many areas, it will not be able to work out a legal
rationale 416.
However, it can also be argued that this “fear” is related
more to the disruption and change of the status quo brought about by
the paradigm shift than to the possibility of automation itself, since
there is, in fact, no transcendental element that makes legal activity,
including the judging of cases, distinct from any other intellectual
activity produced by humanity. Precisely for this reason, judicial
decisions are indeed subject to automation to a greater or lesser extent,
and this really is not a problem, but a solution.417
an element of explanation as regards the logical relation between the arguments and the pre-
tension. But there is a big problem here: where is hermeneutics? Does ROSS understand the
world? In short, for the Law, it is a unitary process between understanding, interpretation
and application. For this reason, it is opportune to emphasize the Gadamerian affront to the
challenges of a technological mentality related to Law. An intelligent legal system can not
integrate all these elements, which are essential to reach a decision.” (“Tudo isso deriva de
uma perspectiva processual da decisão judicial compreendida pela própria inteligên-
cia artificial e o Direito, em que o argumento legal é entendido tanto como um ele-
mento de justificação da decisão, conforme apontado acima, como um elemento de
explicação no que se refere à relação lógica entre os argumentos e a pretensão. Mas,
há um grande problema aqui: onde está a hermenêutica? ROSS compreende o mundo?
Em suma, para o Direito, é um processo unitário entre a compreensão, interpretação
e aplicação. Por essa razão, é oportuno salientar a afronta gadameriana frente aos
desafios de uma mentalidade tecnológica relacionada ao Direito. Um sistema jurídico
inteligente não pode integrar todos esses elementos, que são essenciais para se che-
gar a uma decisão.” MOZETIC, Vinícius Almada. Os Sistemas Jurídicos Inteligentes
e o caminho perigoso até a E-Ponderação artificial de Robert Alexy. Disponível em
http://emporiododireito.com.br/leitura/os-sistemas-juridicos-inteligentes-e-o-camin-
ho-perigoso-ate-a-e-ponderacao-artificial-de-robert-alexy. Acesso em 01/12/2017.)
416 “However, despite enormous successes in certain areas such as the field of legal infor-
mation retrieval a large portion of legal problem solving resists to be computerized. Judicial
reasoning can be considered a member of the portion. At any time in history in any country
in the world no computerized formalism for judicial reasoning has ever been employed on
a large scale in everyday practice.” (ARASZKIEWICZ, M. (Ed), ŠAVELKA, J. (Ed). Coher-
ence: Insights from philosophy, jurisprudence and artificial intelligence. Law and Phi-
losophy Library 107. Ed Springer Verlag, 2013. p. 204).
417 In this sense, the theories elaborated by Antônio Álvares da Silva: “The machine-made
judgment of repetitive cases is not the debasement of the judiciary. Instead, it means its
modernization to be part of a mass and globalized culture, where there is a proliferation
of data and knowledge of all kinds (...) The decision-making function is possible only
in a ‘modelized’ universe in which premises and consequences are accurate and stable. It is
But this discussion is far from being a new issue.
The theoretical possibility of using computers at least to
aid judgments had already been advanced by twentieth-century
theorists, including the creator of cybernetics, Norbert Wiener,
for whom “legal problems are communicative and cybernetic, that
is, they are problems of the orderly and repeatable regulation of certain
critical situations”.418
Studies in the field of cybernetics were incorporated
by legal theorists, not with the aim of fully automating legal
work, but rather as mechanical procedures to support the jurist’s
creative activity. 419
common to say that law would never reach this universe because of the permanent variety of
decisions, but in fact what happens is exactly the opposite (...) The exhaustive activity of the
judge will be relegated to complex cases, for which he will have time, provided he is free from
small actions. Every effort to renew the judiciary consists in formalizing legal reasoning as
far as possible. The appeals to the ‘concrete case’, ‘irreplaceable attitude of the judge’, ‘im-
possibility of the machine to replace man’ are traditional mentalizations that today are no
longer insurmountable truths” (“O julgamento por computador de casos repetitivos não
é o aviltamento do Judiciário. Pelo contrário, significa sua modernização para fazer
parte de uma cultura de massas e globalizada, em que prolifera excesso de dados e
de conhecimento de toda espécie [...] A função decisória só é possível num univer-
so ‘modelizado’ em que premissas e consequências são precisas e estáveis. É comum
afirmar-se que o Direito não atingiria jamais este universo, em razão da variedade
permanente das decisões, mas, na verdade, o que acontece é exatamente o contrário
[...] A atividade exaustiva do juiz será relegada aos casos complexos, para os quais terá
tempo, desde que se livre das pequenas ações. Todo esforço para a renovação do judi-
ciário consiste na formalização do raciocínio jurídico até onde for possível. Os apelos
ao ’caso concreto’, ‘atitude insubstituível do juiz’, ‘impossibilidade de a máquina subs-
tituir o homem’ são mentalizações tradicionais que hoje não constituem mais verda-
des intransponíveis” (ÁLVARES DA SILVA, Antônio. Informatização do Processo :
Realidade ou Utopia ? In : Cinco Estudos de Direito do Trabalho. São Paulo: LTR, 2009.
p.108-110.)
418 “die Rechtsprobleme sind kommunikativ und kybernetisch, d. h. sie sind die Probleme der
geordneten und wiederholbaren Regelung gewisser kritischer Situationen” (WIENER, Nor-
bert. Mensch und Menschmaschine. Kybernetik und Gesellschaft. Frankfurt: Athenäum
Verlag, 1966. p. 107).
419 As said by Klug: “First of all, one has to eliminate prejudices. In particular, it would
be a mistake to assume that the introduction of electronic automata in law means at-
tempting to construct “judicial automata”. Nor is it a matter of “legislative automata”. Instead,
the correct idea is that machines can take care of certain procedures that are mechanical, so
that the jurist can enjoy greater freedom for more productive work, especially for legal
According to Antônio Álvares da Silva, computers can be
used to perform “mechanical procedures, leaving the judge with free
time for complex judgments, studies and reflections.” 420 The author
also notes that a form of human computation has long been practised
in the courts: “advisors do the research and outline the structure of the
decision, leaving to the judge only the review and
the final touch.” 421
In fact, there are several possibilities and fields of action in
which digital computers can help human work, contributing to its
optimization and training, or even performing human work on their
own422.
creation work”. (“Pero ante todo hay que eliminar prejuicios. Especialmente, sería un
error suponer que la introducción de autómatas electrónicos en el derecho significa el
intento de construir “autómatas judiciales”. Tampoco se trata de “autómatas legislati-
vos”. Antes bien, la idea correcta es que las máquinas se pueden hacer cargo de cier-
tos procedimientos que son mecánicos, con el objeto de que el jurista pueda gozar de
mayor libertad para el trabajo más productivo, sobre todo para el trabajo de creación
jurídica.” (KLUG, Ulrich. Lógica jurídica. Tradução para o espanhol de J.C. Gadella.
Santa Fé de Bogotá, Colômbia: Editorial Temis S.A, 1998. p. 22-226.)
420 “procedimentos mecânicos, deixando ao juiz tempo livre para julgamentos comple-
xos, estudos e reflexões.” (ÁLVARES DA SILVA, Antônio. Informatização do processo:
Realidade ou Utopia ? In: Cinco Estudos de Direito do Trabalho. São Paulo: LTR, 2009.
p.108.)
421 “essa atividade já é feita pela delegação que se faz a assessores para a pesquisa e esboço
da estrutura da decisão, ficando o juiz apenas para a conferência e o toque final.” (ÁLVARES
DA SILVA, op. cit. p. 108.)
422 “[…] as optimizers. There are many opportunities to leverage machine intelligence to
help improve the accuracy and efficiency of human computation algorithms. Machine learn-
ing techniques, such as active learning, can help reduce the cost of human computation by
choosing only informative queries to ask. […] as enablers. As human computation systems
are built to handle increasingly complex tasks done by increasingly larger crowds (e.g., to
generate disaster relief plan), we need to use machine intelligence to coordinate individuals,
and to make sense of, organize and display information to workers. In other words, AI algo-
rithms can be used to make humans compute better. […] as workers. For many tasks, ma-
chines actually outperform humans, both in terms of accuracy and speed. One can imagine
future human computation systems to leverage both AI and humans as workers to perform
different tasks they are better at. An effective human computation system should be able
to interweave machine and human capabilities seamlessly. This idea is not new; many re-
search concepts familiar to the AI community./ [...].” (LAW, Edith; VON AHN, Luis. Human
computation. Synthesis Lectures on Artificial Intelligence and Machine Learning, #13.
2011 by Morgan & Claypool Publishers. p. 3).
There are, however, key aspects of developing machine-made
judgment algorithms to be analyzed.
423 “The task of data retrieval is one of the most basic, pervasive, and important of all the
functions performed by lawyers and judges. This includes the activity which lawyers com-
monly refer to as “legal research,” but also considerably more. It is important to note that
when lawyers use the term “legal research” they mean library searching, whereas scientists
use the term “research” to mean laboratory experimentation. For the sake of both clarity and
generality the term “data retrieval” is more useful in the present context. One of the princi-
pal aspects of data retrieval in the law is that of finding applicable, analogous, or relevant
precedential authority in the reported cases for determination of some current question. In-
deed, a large part of the formal professional education of the lawyer consists of training and
exercise in the analysis of problems, the use of a legal vocabulary, and the use of legal index
systems in order to perform this task.”(LOEVINGER, Lee. Jurimetrics: the methodology
of legal inquiry. New York, Basic Books, 1963. p. 9).
424 “la creazione di una banca di dati giuridici è un obiettivo di interesse generale e come
tale deve essere realizzato dalla mano pubblica.” (PAGANO, Rodolfo. Informatica e diritto.
Milano: Dott. A Giuffrè Editore, 1986. p. 61).
With the building of this database, it will become possible
to treat legal information in accordance with the techniques and
procedures of information science. In this way, a kind of
algorithmization of Law can be envisaged, in the sense that it will be
possible to extract the elementary tasks that lead to legal conclusions
and that are essential to the development of a machine-made judgment
algorithm.
However, the research carried out in the last century ran up
against the limited capacity of the computers of the time, as well
as the difficulty of designing algorithms able to work with a dynamic
database and to incorporate the evolution of the jurisprudence.
Fortunately, the processing power of today’s computers
makes it possible to go beyond Turing’s deterministic systems
through the development of machine learning techniques which, as
defined by Samuel425, can make an impact in several areas, such as the
Law426, with remarkable advantages, as shown by Mackay427. Several
425 “Machine Learning is a field of study that gives computers the ability to learn without
being explicitly programmed (…) Programming computers to learn from experience should
eventually eliminate the need for much of this detailed programming effort” (SAMUEL, Ar-
thur. L. “Some Studies in Machine Learning Using the Game of Checkers.” IBM Journal
of Research and Development, v. 3, p. 210. Issue: 3, 1959. p. 210).
426 “Thus, it seems that any legal problem, as well as any other problem, can be solved if
and only if an adequate collection of information is acquired and processed to the form of a
solution. At this point one can hardly avoid the obvious parallel to the very well established
concept of algorithm that is usually used within the computer science. To understand the
procedure of legal problem solving within the framework of algorithms and computer science
one must at first be able to recognize the information that are—to put it in legal terminolo-
gy—relevant to the given problem. These information can be considered an input to the pro-
cess. In case of algorithms we usually speak of ‘some value or set of values’ that is an input of
an algorithm. Secondly, it is necessary to characterize the information that is to be regarded
as an output of the legal problem-solving process. Since in case of algorithms we once again
speak of ‘some value or set of values’ in case of legal problem-solving procedures we can settle
with the statement that the output of the process is the information relevant to the solution
of the problem. In this sense, both algorithm and legal problem solving procedure can be
understood as a ‘sequence of […] steps that transform the input into the output” (ARASZ-
KIEWICZ, M. (Ed), ŠAVELKA, J. (Ed). Coherence: Insights from Philosophy, Jurispru-
dence and Artificial Intelligence. Law and Philosophy Library 107. Ed Springer Verlag,
2013. p. 204).
427 “Machine learning allows us to tackle tasks that are too difficult to solve with fixed pro-
grams written and designed by human beings. From a scientific and philosophical point
algorithms - designed to perform different tasks - can be submitted
to different machine learning techniques, depending on the expected
result, according to Ayodele.428
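To make the distinction concrete, the sketch below, written in Python, shows supervised learning, one of the techniques commonly distinguished in such taxonomies, applied to a toy set of hand-coded case features. The features, labels and the use of the scikit-learn library are illustrative assumptions only, not a description of any system mentioned in the text.

```python
# Minimal, illustrative sketch of supervised learning on hand-coded case attributes.
# The features and labels below are hypothetical stand-ins.
from sklearn.linear_model import LogisticRegression

# Each row encodes a (hypothetical) past case: [claim_value_band, has_precedent, evidence_strength]
X_train = [
    [1, 1, 2],
    [3, 0, 1],
    [2, 1, 3],
    [1, 0, 1],
]
# 1 = claim granted, 0 = claim denied (labels supplied by human annotators)
y_train = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)          # supervised learning: learn from labelled examples

new_case = [[2, 1, 2]]
print(model.predict(new_case))       # predicted outcome for an unseen case
print(model.predict_proba(new_case)) # associated probability estimate
```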
But it is not necessary for the machine to reinvent the wheel.
“Intelligent” behavior can be built on a previously modelled
knowledge-based system. In this approach, the algorithm seeks
to provide the most appropriate results for a given input by searching
a pre-existing database. This is the so-called case-based reasoning
(CBR) model, a field of AI that uses a large case library for
consultation and problem solving, in which the problems presented
are solved by retrieving and consulting cases already solved and
then adapting the solutions found.429
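A minimal sketch of the retrieve-and-adapt cycle that characterizes CBR is given below in pure Python; the fact tags, the case library and the Jaccard similarity measure are hypothetical choices made only for illustration.

```python
# Illustrative case-based reasoning (CBR) sketch: retrieve the most similar
# solved case from a library and "adapt" its solution to the new problem.

CASE_LIBRARY = [
    {"facts": {"overtime", "unpaid", "night_shift"}, "solution": "award overtime premium"},
    {"facts": {"dismissal", "no_notice"},            "solution": "award notice compensation"},
    {"facts": {"overtime", "unpaid"},                "solution": "award overtime pay"},
]

def similarity(facts_a, facts_b):
    """Jaccard similarity between two sets of fact tags."""
    return len(facts_a & facts_b) / len(facts_a | facts_b)

def retrieve(new_facts):
    """Retrieve the most similar already-solved case from the library."""
    return max(CASE_LIBRARY, key=lambda case: similarity(new_facts, case["facts"]))

def adapt(retrieved, new_facts):
    """Trivially 'adapt' the retrieved solution by flagging unmatched facts for review."""
    unmatched = new_facts - retrieved["facts"]
    note = f" (review unmatched facts: {sorted(unmatched)})" if unmatched else ""
    return retrieved["solution"] + note

new_problem = {"overtime", "unpaid", "weekend"}
best = retrieve(new_problem)
print(adapt(best, new_problem))
```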
Regardless of the discussion of which model of the system
should be developed, as noted by Pagano430, those kinds of systems also
zione del diritto mediante sistemi automatizatti” (PAGANO, Rodolfo. Informatica e diritto.
Milano: Dott. A Giuffrè Editore, 1986. p. 591).
431 Cf. CONSELHO NACIONAL DE JUSTIÇA. Caderno PJe - Processo Judicial Eletrôni-
co. 2016. Disponível em: <http://www.cnj.jus.br/files/conteudo/arquivo/2016/09/551be-
3d5013af4e5013 af4e50be35888f297e2d7.pdf. Acesso em: 22 de setembro de 2016.
cost of developing such an organized database consists of extracting
information from the physical-analog environment and converting it
into digital data.
The development of the PJe has accelerated in recent
years because most of the relevant data and information are no longer
restricted to the analog medium, which used to require a time-consuming
process of feeding the computer with the data contained on
paper; they are now available directly in digital media, which facilitates
the machine’s analysis of the data.
Therefore, with a growing, unified legal database available
online 24/7, a few minutes of searching on the internet are enough to
obtain precedents from the most diverse courts in Brazil.
The database contained in the PJe system is already stored
in digital format and contains not only the sentences and judgments of
the courts, but also the content of most petitions filed by lawyers,
documents and expert reports.
Therefore, perhaps for the first time in human history, we
have the material conditions (a 100% computerized legal database)
and the technological possibility of developing a reliable and cost-
effective machine-made judgment algorithm.
The PJe system already provides mechanisms for filtering,
analyzing, consulting and compiling relevant legal data, which are
essential for legal activity because, given the massive overload of legal
information arriving at all times through the various means of
communication, it is impossible for a human judge to keep up to
date with the most recent decisions. This is the phenomenon of “overdose
of information”, which afflicts all of contemporary society, but especially
legal professionals. 432
432 “Legal professionals, be they judges or lawyers, handle information in order to take
decisions. As such they are vulnerable to the Information Overload phenomenon. Moreover,
increasingly more non-legal professionals have to deal with the Law due to increasing reg-
ulations in for example environmental protection and public security in buildings.” (BEN-
JAMINS, V. Richard et al. Law and the Semantic Web, an Introduction. In: Lecture
The excess of information, while giving the judge greater
resources for rendering decisions, also implies the need to
compile and organize all the information received for the analysis of the
concrete case433.
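The kind of compilation referred to here can be pictured as a relevance ranking over the mass of decisions. The sketch below, in Python, ranks precedents by simple term overlap, one of the most elementary retrieval techniques; the query and the decision texts are invented purely for illustration and do not come from the PJe system.

```python
# Hypothetical sketch: filtering an "overdose" of decisions down to the few
# most relevant to a concrete case, using a simple term-overlap score.

decisions = {
    "Case A": "overtime pay night shift additional compensation",
    "Case B": "tax assessment import duties customs",
    "Case C": "overtime unpaid wages labour claim evidence",
}

def score(query_terms, text):
    """Count how many query terms appear in the decision text."""
    words = set(text.split())
    return sum(1 for term in query_terms if term in words)

query = {"overtime", "unpaid", "wages"}
ranked = sorted(decisions.items(), key=lambda item: score(query, item[1]), reverse=True)

for case_id, text in ranked[:2]:   # keep only the most relevant precedents
    print(case_id, "->", score(query, text), "matching terms")
```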
Given the possibilities of the PJe system, there are already
initiatives that aim to use its database and resources to create new
systems, such as an “assistant for decision-making”434 and “indexing
and retrieval of information”435. The next logical step is the development
438 “A decision tree is a binary tree where each internal node is labelled with a variable,
and each leaf is labeled with 0 or 1. The depth of a decision tree is the length of the longest
path from the root to a leaf. […] An assignment determines a unique path from the root to a
leaf: at each internal node the left (respectively right) edge to a child is taken if the variable
named at that internal node is 0 (respectively 1) in the assignment. The value of the function
at the assignment is the value at the leaf reached” (RIVEST. Ronald L. Learning decision
lists. Machine Learning 2:229-246, 1987.Kluwer Academic Publishers, Boston. p. 233)
439 “The neural network resembles the brain on two points: knowledge is gained through
learning steps and synaptic weights are used to store knowledge. A synapse is the name given
to the existing connection between neurons. In the connections are assigned values, which
are called synaptic weights. This makes it clear that artificial neural networks have in their
constitution a series of artificial (or virtual) neurons that will be connected to each other,
forming a network of processing elements.” (ALECRIM, Emerson. Redes neurais artificiais.
2004. Disponível em: <http://www.infowester.com/redesneurais.php>. Acesso em: 03
de setembro de 2016. No original: A rede neural se assemelha ao cérebro em dois pon-
tos: o conhecimento é obtido através de etapas de aprendizagem e pesos sinápticos
são usados para armazenar o conhecimento. Uma sinapse é o nome dado à conexão
existente entre neurônios. Nas conexões são atribuídos valores, que são chamados de
pesos sinápticos. Isso deixa claro que as redes neurais artificiais têm em sua consti-
tuição uma série de neurônios artificiais (ou virtuais) que serão conectados entre si,
formando uma rede de elementos de processamento).
440 “Numerous technological means exist in artificial intelligence (AI) for the use of legal
knowledge in intelligent information systems. Expert systems provide for the possibility of ef-
fective explanation of reasoning, but they need a prior formalisation in the form of inference
rules. Neural networks avoid the phase of formalisation, but they do require a learning phase
and, moreover, they lack explanation abilities. Categorizations could be activated directly
in line since they are neural structures and not a stored memory.” (BOURCIER, Daniele.
Institutional Pragmatics and Legal Ontology Limits of the Descriptive Approach of Texts. In:
automation than the decision tree system, by allowing the network
inputs themselves to feed the database to be scanned by the algorithm
without the need to build a structured database, they are not
efficient at solving the inscrutability, or “black box”, problem, since
the thousands of calculations in the cognitive procedure used by those
algorithms are not linear and explicit, as Warner Jr points out441.
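The “black box” character just mentioned can be seen even in a single artificial neuron of the kind described in note 439: the output is produced by numeric weights that, taken in isolation, carry no legal meaning. The following Python fragment is only an illustration; the weights are random stand-ins for learned parameters, not the output of any real training process.

```python
# Minimal, hypothetical illustration of neural-network opacity:
# the "reasoning" lives in numeric weights rather than explicit rules.
import math
import random

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(3)]   # stand-ins for learned parameters
bias = random.uniform(-1, 1)

def neuron(inputs):
    """One artificial neuron: weighted sum of the inputs passed through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

print(neuron([1.0, 0.0, 1.0]))   # a number between 0 and 1 ...
print(weights, bias)             # ... produced by weights with no meaning by themselves
```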
It should be emphasized that inscrutability is not a problem
restricted to computers. The decisions made by human judges are also
subject, to some degree, to this problem442.
Inscrutability is a key aspect to be considered in the
development of machine-made judgment algorithms because the
443 “It is a function of the correctness of its implementation (what algorithm designers tend
to focus on) and the correctness of its learned behavior (what lay users care about). As a
recent example, take Microsoft’s AI chatbot, Tay. The algorithms behind Tay were properly
implemented and enabled it to converse in a compellingly human way with Twitter users.
Extensive testing in controlled environments raised no flags. A key feature of its behavior was
the ability to learn and respond to user’s inclinations by ingesting user data. That feature
enabled Twitter users to manipulate Tay’s behavior, causing the chatbot to make a series of
offensive statements. Neither its experience nor its data took novelty in a new context into
account. This type of vulnerability is not unique to this example. Learning algorithms tend
to be vulnerable to characteristics of their training data. This is a feature of these algorithms:
the ability to adapt in the face of changing input. But algorithmic adaptation in response
input data also presents an attack vector for malicious users. This data diet vulnerability
in learning algorithms is a recurring theme. (OSOBA, Osonde; WELSER IV, William. An
intelligence in our image.” The Risks of Bias and Errors in Artificial Intelligence. Santa
Mônica, Rand Coporation Ed. 2017. p. 04)
444 “Algorithms aren’t subjective. Bias comes from people.” (HARDY, Quentin. Determin-
ing Character With Algorithms. New York Times 07/27/2015, page B5 of the New York
human-made decisions and machine-made decisions and, because of
that, bias is more a constant than a problem in the development
of such a system. Thus, it is even possible to argue, as noted by Surden445,
that even inscrutable machine-made decisions can, in fact, be less biased
than those made by human judges.
Therefore, there are two ways to proceed.
Machine-made judgment algorithms can be developed to
avoid the inscrutability problem by using decision-tree based
systems, but those algorithms will be less efficient and will provide
worse output than can be obtained with more advanced and
effective machine learning techniques, which have the downside of
being inscrutable; this path would result in a slower and more
expensive development of the field.
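To see why decision-tree based systems sidestep the inscrutability problem, recall the definition quoted in note 438: every outcome corresponds to an explicit root-to-leaf path that can be read back as a rule. The minimal Python sketch below, with invented variables, is an illustration of that definition only, not of any system discussed in the text.

```python
# A direct rendering of the decision-tree definition in note 438: internal nodes
# are labelled with a variable, leaves with 0 or 1, and an assignment of values
# to the variables determines a unique root-to-leaf path. Tree and variables are hypothetical.

tree = {
    "var": "x1",
    "left":  {"leaf": 0},                       # taken when x1 == 0
    "right": {"var": "x2",
              "left":  {"leaf": 1},             # x1 == 1, x2 == 0
              "right": {"leaf": 0}},            # x1 == 1, x2 == 1
}

def evaluate(node, assignment):
    """Follow the path fixed by the assignment and return the value at the leaf."""
    while "leaf" not in node:
        node = node["left"] if assignment[node["var"]] == 0 else node["right"]
    return node["leaf"]

print(evaluate(tree, {"x1": 1, "x2": 0}))   # -> 1
print(evaluate(tree, {"x1": 0, "x2": 1}))   # -> 0
```

Because each prediction is just such a path, the whole "reasoning" of the model can be printed and audited, which is exactly what an opaque network of weights does not allow.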
However, it can be argued that there is no need for machines
to be designed to avoid inscrutability, since human-made judgments
are already susceptible to the problem of inscrutability and are
nevertheless validated under the rule of law.
In this way, there are two aspects to be considered in
the creation of a “Legal Turing Test”: a first aspect, focused
only on the quality and precision of the output, and a second aspect,
edition.)
445 “Implicit in such a system of written opinions is the following premise: that the judge ac-
tually reached the outcome that she did for the reasons stated in the opinion. In other words,
the justifications that a judge explicitly expresses in a written opinion should generally cor-
respond to that judge’s actual motivations for reaching a given outcome. Correspondingly,
written legal decisions should not commonly and primarily occur for reasons other than
those that were expressly stated and articulated to the public. (...) Since machine learning
algorithms can be very good at detecting hard to observe relationships between data, it may
be possible to detect obscured associations between certain variables in legal cases and partic-
ular legal outcomes. It would be a profound result if machine learning brought forth evidence
suggesting that judges were commonly basing their decisions upon considerations other than
their stated rationales. Dynamically analyzed data could call into question whether certain
legal outcomes were driven by factors different from those that were expressed in the language
of an opinion.” (SURDEN, Harry. Machine Learning and Law. Washington Law Review,
Vol. 89, No. 1, 2014. Disponível em SSRN: https://ssrn.com/abstract=2417415. Acesso
em 02/11/2017. p.108-109).
which is focused on solving the inscrutability problem and demands
that the output also disclose the machine’s decision-making
process.
But, in fact, a machine that passes only the first aspect of
a “Legal Turing Test” is already usable on a large scale.
As already demonstrated, the advent of an electronic lawsuit
system (PJe), which contains virtually all the elements necessary for
data analysis (petitions, documents and sentences) in its database,
allows the algorithm to search for the relevant information and to
construct the corresponding output from a CBR model with the
aid of machine learning mechanisms.
Thus, a text generated by an automated system that meets
the requirements provided by law and presents textual cohesion
sufficient to be indistinguishable from a similar text drafted by a
human being will, if signed and validated by a human magistrate, be
a valid decision that formally fulfills the requirements of the
legislation.
So, if an output meets all of these requirements, it
can be considered that the machine-made judgment algorithm has passed
the first aspect of the “Legal Turing Test” and, once that decision is
validated by a human judge, it will be indistinguishable from a human-
made decision.
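One hedged way to picture this first aspect is a blind evaluation in which human evaluators try to tell machine drafts from human drafts; accuracy close to chance would indicate indistinguishability. The Python sketch below is purely illustrative: the draft texts, their origins and the evaluator's guesses are all invented, and no such test is described in the sources cited here.

```python
# Hypothetical sketch of the "first aspect" of a Legal Turing Test:
# a blind evaluator tries to distinguish machine drafts from human drafts.
import random

drafts = [
    {"text": "Claim granted as to overtime pay...",    "origin": "human"},
    {"text": "Claim dismissed for lack of evidence...", "origin": "machine"},
]

random.shuffle(drafts)               # blind the evaluator to the origin of each draft
guesses = ["human", "human"]         # the evaluator's guesses, collected separately

correct = sum(guess == draft["origin"] for guess, draft in zip(guesses, drafts))
accuracy = correct / len(drafts)

# If accuracy stays near chance (0.5) over many drafts and evaluators, the machine
# output is, for this limited purpose, indistinguishable from human-made text.
print(f"evaluator accuracy: {accuracy:.2f}")
```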
Therefore, the final decision regarding the use and
development of those systems, in a way that makes room for the
upsides of the technology and minimizes the downsides, demands a
human-made decision in terms of regulation446.
446 This author has already examined this point in more detail in another work. Cf. VAL-
ENTINI, Rômulo Soares. Julgamento por computadores? As novas possibilidades da
juscibernética no século XXI e suas implicações para o futuro do direito e do trabalho
dos juristas. Belo Horizonte: UFMG, 2018. (tese de doutoramento)
Conclusions
References