Artificial Intelligence and Lawyers
The defence of rights and the role of the lawyer in the application of
artificial intelligence systems
1. Foreword
In the period between the last G7 meeting in Japan - the 49th summit, held from 19 to 21 May 2023 in the city of Hiroshima - and the one to be held in Italy next June, there have been numerous interventions by public and private institutions to improve knowledge of Artificial Intelligence, to understand its economic potential, to keep pace with its extraordinarily rapid developments, and to attempt to approximate the models for its regulation. The common goal of these interventions was to make the best use of the advantages of this new, or at least recent, technological revolution and to prevent the risks it may create for the whole of humanity.
Not that measures had not already been taken before the Hiroshima meeting. On the contrary, the European Union began to deal with artificial intelligence as early as 2010¹, but it was mainly concerned with the digital market and, on the sidelines, with the protection of personal data, through the approval of the 2016 Regulation (GDPR)². Interest in AI subsequently matured through extensive research, consultation and drafting of legal texts, particularly focused on one segment of the relationships between private individuals that are established through these systems, i.e. civil liability and the consequent compensation for damage caused by self-propelled machines, machine learning, robots and driverless cars³. Obviously
1 See the thorough research in Pajno, Donati, Perrucci (eds.), Artificial intelligence and law: a revolution? I. Fundamental rights, personal data and regulation; II. Administration, responsibility, jurisdiction; III. Intellectual property, society and finance, Bologna, 2022; Alpa (ed.), Diritto e intelligenza artificiale. Profili generali, soggetti, contratti, responsabilità civile, diritto bancario e finanziario, processo civile, Pisa, 2020; Alpa, L'intelligenza artificiale. Il contesto giuridico, Modena, 2021.
2 Zorzi Galgano (ed.), Persona e mercato dei dati. Riflessioni sul GDPR, Padova, 2019.
3 See, among the many references, Di Donna, Artificial Intelligence and Remedies, Padua, 2022; Calabresi and Al Mureden, Driverless cars. Artificial intelligence and the future of mobility, Bologna, 2021.
the legal discourse involved the administration of justice and in particular predictive justice, but the treatment of this aspect would take us too far afield⁴. It is an aspect to which lawyers are very sensitive, as can be seen from the great work done by the CCBE in this regard⁵.
4 V. Colomba, Il futuro delle professioni legali con l'AI: cosa verrà dopo la giustizia predittiva?, in Agenda Digitale, 5 April 2023; Barberis, Giustizia predittiva: ausiliare e sostitutiva. An evolutionary approach, in Milan Law Review, Vol. 3, No. 2, 2022; Carleo (ed.), Legal calculability, Bologna, 2017, with writings by Guido Alpa, Giovanni Canzio, Alessandra Carleo, Massimo De Felice, Giorgio De Nova, Andrea Di Porto, Natalino Irti, Giovanni Legnini, Franco Moriconi, Carlo Mottura, Mario Nuzzo, Valerio Onida, Filippo Patroni Griffi, Alberto Quadrio Curzio, Pietro Rossi; Zaccaria, The responsibility of the judge and the algorithm, Modena, 2023.
5 See the documentation available on the CCBE website with critical comments on the texts proposed by the European Union.
6 We, the Leaders of the Group of Seven (G7), stress the innovative opportunities and transformative potential of advanced Artificial Intelligence (AI) systems, in particular, foundation models and generative AI. We also recognise the need to manage risks and to protect individuals, society, and our shared principles including the rule of law and democratic values, keeping humankind at the centre. We affirm that meeting those challenges requires shaping an inclusive governance for artificial intelligence. Building on the progress made by relevant ministers on the Hiroshima AI Process, including the G7 Digital & Tech Ministers' Statement issued on September 7, 2023, we welcome the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (Link). In order to ensure both documents remain fit for purpose and responsive to this rapidly evolving technology, they will be reviewed and updated as necessary, including through ongoing inclusive multistakeholder consultations. We call on organisations developing advanced AI systems to commit to the application of the International Code of Conduct.
We instruct relevant ministers to accelerate the process toward developing the Hiroshima AI Process Comprehensive Policy Framework, which includes project based cooperation, by the end of this year, in cooperation with the Global Partnership for Artificial Intelligence (GPAI) and the Organisation for Economic Co-operation and Development (OECD), and to conduct multi-stakeholder outreach and consultation, including with governments, academia, civil society, and the private sector, not only those in the G7 but also in the economies beyond, including developing and emerging economies. We also ask relevant ministers to develop a work plan by the end of the year for further advancing the Hiroshima AI Process.
We believe that our joint efforts through the Hiroshima AI Process will foster an open and enabling environment where safe, secure, and trustworthy AI systems are designed, developed, deployed, and used to maximise the benefits of the technology while mitigating its risks, for the common good worldwide, including in developing and emerging economies with a view to closing digital divides and achieving digital inclusion. We also look forward to the UK's AI Safety Summit on November 1 and 2.
Among the many initiatives of this period are the resulting Guiding Principles and the Code of Conduct on Artificial Intelligence, which were highly appreciated in the European context⁷.
Organisations should use, as and when appropriate commensurate to the level of risk, AI systems as intended and monitor for vulnerabilities, incidents, emerging risks and misuse after deployment, and take appropriate action to address these. Organisations are encouraged to consider, for example, facilitating third-party and user discovery and reporting of issues and vulnerabilities after deployment. Organisations are further encouraged to maintain appropriate documentation of reported incidents and to mitigate the identified risks and vulnerabilities, in collaboration with other stakeholders. Mechanisms to report vulnerabilities, where appropriate, should be accessible to a diverse set of stakeholders.
This should include publishing transparency reports containing meaningful information for all new significant releases of advanced AI systems. Organisations should make the information in the transparency reports sufficiently clear and understandable to enable deployers and users as appropriate and relevant to interpret the model/system's output and to enable users to use it appropriately, and that transparency reporting should be supported and informed by robust documentation processes.
4. Work towards responsible information sharing and reporting of incidents among organisations developing advanced AI systems including with industry, governments, civil society, and academia.
This includes responsibly sharing information, as appropriate, including, but not limited to evaluation reports, information on security and safety risks, dangerous, intended or unintended capabilities, and attempts by AI actors to circumvent safeguards across the AI lifecycle.
5. Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach - including privacy policies, and mitigation measures, in particular for organisations developing advanced AI systems.
This includes disclosing where appropriate privacy policies, including for personal data, user prompts and advanced AI system outputs. Organisations are expected to establish and disclose their AI governance policies and organisational mechanisms to implement these policies in accordance with a risk-based approach. This should include accountability and governance processes to evaluate and mitigate risks, where feasible throughout the AI lifecycle.
These may include securing model weights and algorithms, servers, and datasets, such as through operational security measures for information security and appropriate cyber/physical access controls.
7. Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.
This includes, where appropriate and technically feasible, content authentication such as provenance mechanisms for content created with an organisation's advanced AI system. The provenance data should include an identifier of the service or model that created the content, but need not include user information. Organisations should also endeavour to develop tools or APIs to allow users to determine if particular content was created with their advanced AI system, such as via watermarks.
This includes conducting, collaborating on and investing in research that supports the advancement of AI safety, security and trust, and addressing key risks, as well as investing in developing appropriate mitigation tools.
9. Prioritise the development of advanced AI systems to address the world's greatest challenges, notably but not limited to the climate crisis, global health and education.
These efforts are undertaken in support of progress on the United Nations Sustainable Development Goals, and to encourage AI development for global benefit. Organisations should prioritise responsible stewardship of trustworthy and human-centric AI and also support digital literacy initiatives.
10. Advance the development of and, where appropriate, adoption of international technical standards.
This includes contributing to the development and, where appropriate, use of international technical standards and best practices, including for watermarking, and working with Standards Development Organisations (SDOs).
11. Implement appropriate data input measures and protections for personal data and intellectual property.
Organisations are encouraged to take appropriate measures to manage data quality, including training data and data collection, to mitigate against harmful biases. Appropriate transparency of training datasets should also be supported and organisations should comply with applicable legal frameworks.
of the responsibility of operators for the risks introduced into society. It is likely that this issue was considered resolvable within the individual legal systems concerned and, in the European case, that an ad hoc directive or regulation was contemplated. It was debated whether to resort to a supplement to the producer liability regulation or to an ad hoc regulation. The second alternative prevailed, with persuasive reasons, even though the concept of risk, and liability for created risk itself, which recalls the German models of the 1940s, is dogmatically questionable. In order to tighten liability for high risks, it has been considered appropriate to presume a causal link when the circumstances of the case suggest that the damage most likely resulted from the use of AI.
9 Congressional Research Service, Highlights of the 2023 Executive Order on Artificial Intelligence for Congress, Washington, updated April 3, 2024.
protection of privacy, equality of citizens and protection of civil
rights, protection of consumers, clinical patients and students,
protection of workers, proper use of AI by public institutions,
promotion of innovation and competition, and, perhaps for electoral
reasons, promotion of American leadership abroad.
These are not normative statements set out in the form of commands, but recommendations and commitments that the Administration makes to citizens and businesses: this is a significant step, because it was precisely the defence of personal data that was the obstacle to the approval of the Transatlantic Trade and Cooperation Treaty between the US and the EU. It is to be welcomed that the executive order does not privilege the market, even though the major global players in artificial intelligence are based in the United States and, in view of behaviour that is not always in line with European rules, have been the subject of substantial sanctions imposed by the Court of Justice, national antitrust authorities and personal data protection authorities. As is well known, the measure that caused a great stir, and was followed by the data protection authorities of other European countries, is the temporary suspension of ChatGPT's activity imposed by the Italian Garante (measures of 30 March and 11 April 2023).
10 Among the many comments are Donati, Fundamental Rights and Algorithms in the Proposal for a Regulation on Artificial Intelligence, in Il diritto dell'Unione europea, 2021, n. 3-4, p. 453 ss.; Finocchiaro, Intelligenza artificiale. Quali regole?, Bologna, 2024, p. 115 ss.; Balaguer Callejón, La costituzione dell'algoritmo, Milano, 2023.
11 See the position paper of 8 October 2021 available on the CCBE website.
And the draft directive on liability for the use of artificial intelligence
systems.
The Regulation, its lights and shadows, its shortcomings and the
underlying political choices have been much discussed.
The Regulation does not change the rules on personal data. Rather, it is concerned with introducing certain prohibitions on practices that may harm the dignity and rights of the individual, such as the use of subliminal techniques, the manipulation of persons, the exploitation of the impaired capacity of choice of weak and vulnerable persons, facial recognition practices, the detection of emotions, and the use of biometric techniques not required for the investigation of crimes. It then distinguishes between high-risk AI systems, listed in an annex, and moderate-risk systems. For the former, it provides for precautionary principles and appropriate safeguards. It takes care to allow requests for information to ascertain the transparency of supplier and distributor data, to specify the criteria of conduct for importers, and to set out the compliance obligations for systems, with the relevant certifications. It also provides for the drafting of codes of conduct, measures to support innovation, and a complex system of controls with the establishment of special authorities.
It is a complex system with a high degree of bureaucratisation and, except for the prohibitions, it is not particularly detailed, even though the text is extensive. In addition, these are 'horizontal' rules that only consider the risk classification of systems but do not lay down specific rules for the fields of application, such as medicine, communication, transport, and so on.
principles for this new form of globalisation to which we give the
name Artificial Intelligence, so that it may be used for beneficial
purposes and not for purposes of war and destruction.
Lawyers are engaged on several fronts. First of all, they must know the rules in order to be able to apply them; indeed, they are often the authors of the rules themselves, whether as members of the parliaments that approve them, of the governments that propose them, or of the institutions that oversee them. They also perform consultative functions and must therefore prepare research and reports that necessarily involve knowledge of other sciences: computer science, engineering, mathematics, the cognitive sciences and others. They must also suggest the choices with which to regulate these phenomena, for which comparative analysis is particularly enlightening. And they participate in supervisory institutions to verify that Artificial Intelligence systems are correctly applied.
But above all, in their capacity as lawyers, they have the task of protecting the person, to fulfil the mission entrusted to them for so many centuries, because, as our forefathers used to say, "Hominum causa omne ius constitutum est".
Among the many issues that can be considered, and which have already been the subject of extensive analysis, three in particular seem to me to be the tasks to which lawyers are called and which require some further clarification: (i) to identify the models with which to regulate AI, given that the collection and knowledge of the rules already constitute an enormous labyrinth, in view of the many international, European and national institutions dealing with the subject; (ii) to understand the ways in which lawyers can exploit the benefits of AI in the performance of their work; (iii) and, above all, to identify, moment by moment, the rights that must be safeguarded. In short, how to organise our work and set up our noble profession so as to facilitate the rational development of AI, and how to protect the individual from the risks that AI may entail.
4. Normative models
For jurists, this is a great challenge, which is only now universally
perceived thanks to the enormous cultural debate that is taking place,
because of the enormous production of books, essays and statements
that follow one another day after day, but above all because of the
dilemmas that artificial intelligence poses. These are dilemmas of
great moment, which cannot be overcome with maximalist
resolutions: scientific and technological development cannot be halted
except in exceptional cases; the artificial intelligence market is
expanding at great speed and producing high profits and creating new
job opportunities; the professions, and the legal profession in
particular, cannot be overtaken by these phenomena, however
complex they may be.
We have to take into account the interests at stake, which, as has been established in the course of the discussions on the subject, concern the building and development of Artificial Intelligence and its use in all areas of the economy and other human sectors, from communication to transport, medicine and finance. We have to compare these interests and balance them, because each of them is relevant but none exhausts the horizon we have to consider.
Press reports indicated that the European Data Protection Supervisor
expressed his disappointment with the Artificial Intelligence (AI)
treaty negotiated in Strasbourg, stating that it deviated greatly from its
original purpose. The Council of Europe had originally decided to
develop a legally binding international convention to uphold human
rights standards without harming innovation in the development of
artificial intelligence. But the text was considerably watered down
from its original version during the negotiations. It was 'a missed
opportunity to establish a strong and effective legal framework' for
the protection of human rights, especially because of the debated
limitation of the scope of the convention to public bodies only.
Unlike the EU's AI Act, which will create new compliance obligations
for a range of AI actors (such as suppliers, importers, distributors and
retailers), the UK government is developing a principles-based
framework for existing regulators to interpret and apply in their
specific sectoral areas.
agreement was reached on the drafting of codes of conduct, especially
with regard to the coordination between AI development and
copyright protection.
A still different model that has emerged in the period between the two G7 meetings is that of the US, where President Joe Biden's executive order, mentioned above, has received wide support on the Democratic side.
5. Definitions of AI
Curiously enough, the first question that arises is precisely the definition of the subject matter, i.e. what the AI that is the subject of regulation consists of.
With respect to the G7 principles, it seems to me that they are all respected. However, the European model must be considered in its entirety, including the GDPR. We must not forget that the very differences between the regulation of personal data in Europe and in the US were the main cause of the failure of the Transatlantic Trade and Investment Partnership (TTIP).
always talking about civil law matters - can be facilitated by the use of these systems, although a technician is always needed, as is the lawyer's ability to interpret the results of the advice correctly. I do not envisage trial documents written with the help of ChatGPT, because one thing is certain: artificial intelligence gives us millions of data points, but it can hardly find two identical cases from which one can draw solutions to be mechanically applied to the case being studied. What is more, this system is based on the past, on facts that happened in the past, while the cases we have to solve arise in the present, in an environment that may have changed and in a cultural context that may have evolved.