Speech

Addressing justice ministers at the EU Justice and Home Affairs Council meeting

Speaker
Sirpa Rautio
FRA Director Sirpa Rautio addresses EU Justice Ministers during the EU's Justice and Home Affairs Council meeting on 23 July 2024 in Budapest. The Director focused on AI and its use in the digitalisation of justice.

I would like to thank the Hungarian Presidency for having invited FRA to speak on this important subject today.

The use of artificial intelligence, or AI, in the justice sector offers great potential. While other digital solutions and new technologies also offer scope for advancement, the focus – and the real concerns – lie with AI and its uses.

The topic of AI in the justice sector is nothing new. FRA has been working on AI and fundamental rights for several years now. Our analysis has accompanied the major policy developments in this area, including the AI Act and, at the Council of Europe, the ‘Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law’.

With this in mind, and given the evidence that FRA has gathered over the years, I would like to recall some of the potential dangers of the use of AI, from a fundamental rights perspective. I will also outline possible solutions and ways forward, which will allow us to benefit from technological advancements in the justice area.

I think a natural starting point is to first acknowledge how far we have come. The past years have seen significant progress in the availability of AI tools. As of today, the Council of Europe’s ‘Resource Centre on Cyberjustice and AI’ counts 118 AI systems that are used or developed in European countries in the field of justice.

In addition to technological progress, there has been considerable progress on regulation at the EU level – namely, the Artificial Intelligence Act, which the Council adopted in May and which enters into force in August, with its provisions applying progressively.

The benefits of using new AI technologies are obvious. AI tools allow for faster and more efficient processing of information and therefore have the potential to increase the efficiency of justice. Given that efficiency is rightly lauded as one of the core quality components of justice systems, any enhancement of it must be welcomed. In various ways, AI, and digitalisation more broadly, may help to reduce workload, caseload and bottlenecks. AI tools can also overcome obstacles that stand in the way of people’s ability to access justice – for example, by providing targeted and easy-to-access information about rights, obligations and available legal avenues.

AI is very powerful, as it can detect complex patterns. By revealing certain patterns in human behaviour, AI tools – when used correctly, in compliance with the law – can also help uncover fundamental rights abuses where they exist in the justice sector, including, for example, potential bias in decision-making.

However, when it comes to the use of AI, it is indisputable that the protection of fundamental rights must take priority over efficiency wherever efficiency risks rights violations. We must not sacrifice the quality of justice processes for the sake of speed and technology. FRA’s 2020 report, ‘Getting the future right’, highlights that most developers and deployers want to use AI for more efficient procedures. The question remains: while using AI for efficiency, why are we not also using it to design and implement better procedures and higher-quality decisions – that is, decisions that are also fundamental rights compliant?

Here, we must also acknowledge that AI can negatively affect several rights if used without due consideration and control for fundamental rights compliance. As FRA has repeatedly highlighted through its research on AI ‘use cases’:

The use of AI can interfere strongly with people’s right to a private life and data protection. In this regard, the use of AI cannot deviate from applying the EU’s high standards on data protection. But, reassuringly, as FRA’s research has shown, this is an area that developers and deployers are most familiar with when it comes to rights compliance. Still, more guidance on the practical application of data protection when using AI is needed.

The use of AI can interfere with people’s right to non-discrimination. FRA’s report on ‘Bias in algorithms’ has shown that AI tools can easily pick up information on people’s sex or ethnic origin and can overstate such factors, which can in turn lead to discrimination. If not effectively controlled for, the use of AI systems can undermine equality in the application of the law.

The use of unchecked AI also risks interfering with the right to a fair trial and an effective remedy. Without fully transparent use of AI, and systems to hold that use to account, people’s faith in the accountability of the justice system can be undermined. FRA has repeatedly highlighted that:

  • People need to know when AI is used for decisions that affect them.
  • People need to know where and how to complain about AI-supported decisions.
  • And those using AI need to be in a position to explain its use – to the accused, victims and witnesses – to ensure that fair trial principles are met and to ensure good administration.

While I have noted some examples of risks to fundamental rights, it should be remembered that the use of AI may also interfere with many other rights, depending on the context in which it is used.

Therefore, it must be remembered and repeated that the rights and duties enshrined in the Charter of Fundamental Rights and the European Convention on Human Rights do not bend to technological advances. We cannot allow any use of AI that would dilute the hard-won milestones we have achieved in the areas of democracy and the rule of law.

These instruments, alongside the AI Act, also provide us with a roadmap for navigating through technological progress and unknowns. FRA’s experience of working on real AI ‘use cases’ has shown that we must keep repeating the basic messages and continue to encourage a fundamental rights approach in the development and use of AI tools.

So, how do we best protect fundamental rights when using new technologies, such as AI?

Let me highlight two areas. First – supporting the effective implementation of the law, including the available fundamental rights safeguards, in practice. Second – sharing experiences and working together.

First – when it comes to implementing the law, we must look to the AI Act. It prohibits certain uses of AI and lays down rules for high-risk AI systems in predefined areas – such as biometrics, employment, access to essential private and public services, law enforcement, migration and the administration of justice. The latter area, the administration of justice, includes AI systems intended to be used by a judicial authority for interpreting facts and the law, for applying the law, or in alternative dispute resolution. Importantly, the recitals of the Act state that “final decision-making [by judges] must remain a human-driven activity.” The AI Act also introduces a new option to complain about possible infringements of the Act to the relevant ‘market surveillance authority’. However, regardless of how good the rules look on paper, the real test is how they work in practice. Access to effective complaint procedures is an important part of that.

Amongst its projects in this area, FRA is currently focusing on how to support the effective implementation of the AI Act with a view to protecting fundamental rights:

FRA will be one of the five permanent members of the Advisory Forum under the AI Act, which will, together with rotating members from business, academia and civil society, advise the Commission and the AI Board.

FRA is currently running several projects with those developing and using AI, including in high-risk areas, which will inform the implementing guidance of the AI Act.

FRA’s previous research has shown that those developing and using AI often do not know how to implement laws in practice, particularly with respect to fundamental rights impact. Good implementing guidance will be of major importance for the Act’s success, and the Agency will support the AI Office in DG CNECT with its findings.

Second – when it comes to sharing experiences and working together, I very much support the idea, presented by the EU Presidency, of coming together to share expertise and guidance on fundamental rights compliant uses of AI in the justice sector. Through effective cooperation on AI, we can ensure that we have the necessary tools available to embrace and use AI that is fundamental rights compliant.

FRA stands ready to support you in this endeavour. We are currently examining digital tools and systems, including those based on AI, that are already in use in civil, criminal or administrative justice in EU Member States. We will carry out interviews with selected technical experts and justice practitioners in different countries – such as judges, prosecutors, lawyers and law enforcement officers – who have direct experience in developing, deploying or using digital tools, including AI systems. We are particularly interested in how these tools affect individuals – for example, as complainants, defendants or victims – in their interactions with justice systems. On the basis of our findings, we will develop practical guidance on the fundamental rights implications – both positive and negative – that arise when Member States employ digital technologies in the administration and delivery of justice.

To conclude – fundamental rights compliant use of AI does not hamper innovation. On the contrary, it is a competitive advantage, helping us develop better and more sustainable tools that support efficient work and ensure justice for all.

Thank you.
