FRIA Model Guide and Use Cases
Coordinators:
Alessandro Mantelero and Joana Marí
Authors:
Alessandro Mantelero (Polytechnic University of Turin)
Cristina Guzmán (Polytechnic University of Catalonia – BarcelonaTech (UPC))
Esther García (CaixaBank, S.A)
Ruben Ortiz (University of Barcelona)
M. Ascensión Moro (Sant Feliu de Llobregat City Council)
Participants:
Albert Portugal (Consortium of University Services of Catalonia)
Albert Serra (Catalan Data Protection Authority)
Alessandro Mantelero (Polytechnic University of Turin)
Cristina Guzmán (Polytechnic University of Catalonia – BarcelonaTech (UPC))
Esther García (CaixaBank, S.A)
Joana Judas (Department of Research and Universities. Generalitat de Catalunya)
Joana Marí (Catalan Data Protection Authority)
Jordi Escolar (University Quality Assurance Agency)
M. Ascensión Moro (Sant Feliu de Llobregat City Council)
Marc Vives (Pompeu Fabra University)
Maria José Campo (TIC Health and Social Foundation)
Mariona Perramon (Consortium of University Services of Catalonia)
Olga Rierola (Catalan Data Protection Authority)
Patricia Lozano (Open University of Catalonia)
Ruben Ortiz (University of Barcelona)
Sara Hernández (TIC Health and Social Foundation)
Barcelona, 2025
The content of this document is the property of the Catalan Data Protection Authority and is subject to the Creative Commons BY-NC-ND license, https://creativecommons.org/licenses/by-nc-nd/4.0/.
The authorship of the work will be recognized by the inclusion of the following mention: Work owned by the Catalan Data Protection Authority. Licensed under the CC BY-NC-ND license. Notice: When reusing or distributing the work, the terms of the license for this work must be clearly stated.
Disclaimer: The opinions expressed in this document are the responsibility of their authors and do not necessarily reflect the official opinion of the APDCAT. The Catalan Data Protection Authority, the authors and the members of the working group do not accept responsibility for the possible consequences caused to natural or legal persons who act or cease to act as a result of any information contained in this document.
Presentation
In its Strategic Plan 2023-2028, the Catalan Data Protection Authority (APDCAT) has set as one of its
main pillars the objective of promoting the development of training aimed at institutions. In this view, one
of the main lines of action was to "promote the creation and strengthening of the Data Protection Officers’
Learning Community".
On 15 July 2023, the first Data Protection Officers (DPO) Network was launched (available at
https://www.dpdenxarxa.cat/), a pioneering initiative in Catalonia and in Spain. The platform was created
with the aim of contributing to the development of the culture of privacy in Catalonia, driven by Data
Protection Officers (DPOs) who ensure compliance with data protection regulations, promote
cooperation and collaboration among themselves, and share knowledge and expertise.
This platform brings together the DPOs of the more than 1,700 entities that are part of the APDCAT's
scope of action, which includes public administrations such as the Generalitat de Catalunya, city
councils, public and private universities, and professional associations, among others. It is also open to
DPOs of public and private entities that provide services to the Catalan public sector as data processors,
as well as to all DPOs of entities based in Catalonia.
The Network, which currently includes a large number of DPOs in Catalonia, pursues the following main
objectives:
- To be an institution closer to DPOs and organisations.
- To be a space for the exchange of ideas, experiences and knowledge.
- To promote the figure of the DPO as a key player in compliance.
- To identify and promote best practices.
- To create and disseminate a model of relationships and cooperation compatible with useful
and effective supervision.
- To create an environment of open interaction and collaboration services for resource
generation, learning and knowledge management.
- To create a reference space for the DPOs in Catalonia and Europe.
As a result of this space for the exchange of ideas, experiences and knowledge, as well as the creation
of an environment of interaction and open collaboration for the generation of resources, learning and
knowledge management, the Agora section of the Network proposed to set up a working group entitled
"Methodological guidance. Impact assessment. Rights and freedoms"
This group, coordinated by Dr. Alessandro Mantelero 1 and Ms. Joana Marí,2 conducted its work from
February to December 2024 and included the following members of the Network:
- Albert Portugal (Consortium of University Services of Catalonia)
- Albert Serra (Catalan Data Protection Authority)
- Cristina Guzmán (Polytechnic University of Catalonia – BarcelonaTech (UPC))
- Esther García (CaixaBank, S.A.)
- Joana Judas (Department of Research and Universities. Generalitat de Catalunya)
1 Associate Professor of Private Law and Law and Technology at the Polytechnic University of Turin, Jean Monnet Chair of
Mediterranean Digital Societies and Law.
2 Data Protection Officer and Head of Strategic Projects of the Catalan Data Protection Authority.
- Jordi Escolar (University Quality Assurance Agency)
- M. Ascensión Moro (Sant Feliu de Llobregat City Council)
- Marc Vives (Pompeu Fabra University)
- Maria José Campo (TIC Health and Social Foundation)
- Mariona Perramon (Consortium of University Services of Catalonia)
- Olga Rierola (Catalan Data Protection Authority)
- Patricia Lozano (Open University of Catalonia)
- Ruben Ortiz (University of Barcelona)
- Sara Hernández (TIC Health and Social Foundation)
The objective of the Group was to develop a new methodology for Fundamental Rights Impact
Assessments (FRIA) in the use of Artificial Intelligence systems in line with the regulatory framework
established by Regulation (EU) 2024/1689 of the European Parliament and of the Council, of 13 June
2024 (Artificial Intelligence Act) and with the aim of distinguishing this methodology from the Data
Protection Impact Assessments (DPIA) required by Regulation (EU) no. 2016/679 of the European
Parliament and of the Council of 27 April 2016 (General Data Protection Regulation).
This document presents the main results of this Working Group and is divided into two parts, the first
describing the FRIA methodology and the second focusing on its concrete application to specific cases.
The purpose of this work is to provide a useful and practical tool for entities that design, develop or use
AI systems and models and, in particular, for people in charge of carrying out the fundamental rights
impact assessment.
In short, preventing the violation of fundamental rights and freedoms is a common task, in respect of
which the Catalan Data Protection Authority must play a key role.
Part I – The FRIA and the FRIA methodology
1. Introduction
This publication presents the results of a project led by the Catalan Data Protection Authority (APDCAT)
with the aim of providing AI operators, both providers and deployers, with an effective tool to develop
trustworthy and human-centred AI solutions. In this view, as demonstrated by the AI Act and other
national and international initiatives, such as the Council of Europe’s Framework Convention on Artificial
Intelligence and Human Rights, Democracy and the Rule of Law, the design and development of AI
solutions that respect fundamental/human rights3 is at the core of truly human-centred AI.
In the light of the above, the AI Act establishes “ensuring a high level of protection of health, safety,
fundamental rights enshrined in the Charter” as one of the main objectives of this regulation (Art. 1, AI
Act) and, in line with the risk-based approach adopted by the EU legislator, includes the assessment of
the impact of AI on fundamental rights4 in all the risk management procedures established by the
Act. From conformity assessment to the specific fundamental rights impact assessment under Article 27
of the AI Act, including a specific provision for general-purpose AI models with systemic risk, the impact
on fundamental rights must be taken into account in the design, development and deployment
of AI systems and models.
Against this background, the provisions on how to conduct this assessment in the AI Act, but also
in the Council of Europe Framework Convention, give only limited guidance to those who have
to carry out this assessment. On the other hand, the proposed models and the initial debate in the
literature show several shortcomings from a methodological point of view [MANTELERO, 2024]. In line
with an empirical approach to law, it is therefore necessary to move from the abstract elaboration,
in law and in the legal debate, to concrete applications in order to test the feasibility and
effectiveness of the models for carrying out the impact assessment on fundamental rights in the context
of AI.
The Catalan project is the first initiative based on the concrete implementation of a FRIA methodology
applied to real cases and based on an active interaction with public and private entities that apply AI
solutions in their business and activities. The results of this empirical approach are crucial for the
effective implementation of the AI Act, as they show that it is possible to streamline the requirements
of the Act and translate them effectively into a risk analysis and risk management process that is
consistent with both general risk theory and the fundamental rights framework.
In addition, the empirical evidence provided by this publication can contribute to the EU and
international debate on the model template for fundamental/human rights impact assessment by
providing evidence on crucial issues such as (i) the relevant variables to be considered; (ii) the
methodology to assess them and create risk indices; (iii) the role that standard questionnaires can play
in this exercise and their limitations; (iv) the role of expert-based evaluation in this assessment.
The FRIA model applied in our use cases [MANTELERO, 2024; MANTELERO-ESPOSITO 2021], as well as
the use cases themselves, are made publicly available in order to contribute to the EU and international
debate on the protection of fundamental rights in the context of AI, and to serve as a source of
inspiration for the many entities around Europe and in non-EU countries that wish to adopt a
3 See also European Union Agency for Fundamental Rights, https://fra.europa.eu/en/content/what-are-fundamental-rights (“the
term fundamental rights is used in a constitutional context whereas the term ‘human rights’ is used in international law. The two
terms refer largely to the same substance as can be seen, for instance, by the many similarities between the Charter of
Fundamental Rights of the EU and the European Convention on Human Rights.”).
4 Here and in the following pages, references to fundamental rights (or simply rights) include both fundamental rights and freedoms.
fundamental rights-based approach to AI, but do not have a tested reference model and concrete
cases against which to compare their experiences.
With this objective in mind, the following sections will briefly discuss the role of the FRIA in AI regulation,
its interaction with data protection regulation, the model template applied in the use cases, and the case
selection criteria and areas covered.
5 For a categorisation of AI-related risks, see also UNITED NATIONS, AI ADVISORY BOARD 2024; SLATTERY ET AL. 2024.
prevention/mitigation measures, are the different areas of the FRIA to be developed in order to deal with
the aforementioned issues.
In this line, Article 27(1) of the AI Act on the FRIA considers (i) the context of use and the categories of
actors exposed to the risk (Art. 27(1)(a), (b) and (c)), (ii) the potential prejudice to fundamental rights
(Art. 27(1)(d)), and (iii) the prevention/mitigation measures to be adopted (Art. 27(1)(e) and (f)). This
division is reflected in the three phases of the FRIA methodology adopted in this study, namely (i)
planning, scope definition and risk identification, (ii) risk analysis, and (iii) risk mitigation and management.
The first phase (planning, scope definition and risk identification) comprises the description of the
AI system and its context of use, in relation to the potential intrinsic (related to the system itself) and
extrinsic risks (related to the interaction between the system and the socio-technical environment in
which it is implemented). This involves a description of the process in which the high-risk AI system will
be used (art. 27(1) (a)), as well as the period of time within which each system is to be used and the
associated frequency (art. 27(1) (b)). Once these elements have been defined, it is possible to make an
initial identification of the areas of impact of the AI system in terms of the categories of individuals and
groups concerned (art. 27(1) (c)) and the related rights that may be at stake.
This phase also serves as a preliminary assessment in order to exclude from the FRIA those cases
where it is clear that there is no risk of prejudice to the persons concerned. On the other hand, if a
potential harm is identified, it should be examined in the second phase (risk analysis), which goes
beyond a general identification of potential areas of impact and estimates the level of impact for each
right or freedom.
There are several reasons why individualised estimation of the level of impact is essential. First, it is a
feature of all impact assessments, from environmental to cybersecurity assessments: there can be no
proper risk assessment without risk estimation. Second, estimation is the basis for the third phase,
which is the definition of risk prevention/mitigation measures (risk mitigation and management):
if the impact has not been estimated, it is not possible to identify the appropriate and effective measures
to eliminate/reduce the initial impact. Third, the estimation is therefore instrumental to the implementation
of the key principle of accountability: only by defining the level of risk before and after the adoption of
the prevention/mitigation measures is it possible to demonstrate that the risk has been specifically and
effectively addressed. For all these reasons, Article 27(1) (d) and (f) play a central role and form the
basis for the development of the risk assessment methodology.
Against this background, if AI deployers fail to comply with the FRIA obligations under the AI Act, or if
these obligations are not properly enforced by the competent authorities, Data Protection Authorities
(DPAs) may in future play an active role in enforcing the FRIA of AI systems through the
provisions of Article 35 of the GDPR, to the extent that the GDPR is applicable (this is the case in
many situations where AI impacts individuals and groups, given the broad notion of personal data and
data processing under the GDPR and the role of data in AI development, deployment and use).
Given all these different aspects of the interplay between the FRIA in the AI Act and Article 35 GDPR,
and also given the experience of DPAs in dealing with fundamental rights issues [MANTELERO-ESPOSITO
2021], an active role of these Authorities in providing guidance on the FRIA in the context of
personal data-driven AI systems is appropriate and due.
6 See, for example, ISO, Risk management. Guidelines. ISO 31000. https://www.iso.org/standard/65694.html, which identifies
the following three main phases, combined with three complementary tasks (recording & reporting; monitoring & review;
communication & consultation): (i) scope, context and criteria; (ii) risk assessment (risk identification, risk analysis, risk
evaluation); (iii) risk treatment. The same approach can also be seen in UNDP 2024.
(i) a planning and scoping phase, focusing on the main characteristics of the product/service
and the context in which it will be placed;
(ii) a data collection and risk analysis phase, identifying potential risks and assessing their
potential impact on fundamental rights; and
(iii) a risk management phase, in which appropriate measures to prevent or mitigate these risks
are adopted, tested and monitored for effectiveness.
In terms of structure, in accordance with Art. 27 of the AI Act and risk assessment methodologies, the
FRIA is a contextual assessment focused on the specific AI solution being deployed and not a
technology assessment centred on AI technologies in general and their various potential uses: it looks
at a specific AI application and its context of use. In addition, the FRIA is also characterised by an ex
ante approach, which makes it a tool for a fundamental rights-oriented AI design, adopting the by-
design approach already known in data protection.
Finally, the FRIA has a circular iterative structure: like all risk assessments of situations that may
evolve over time, it is not a one-off prior assessment. The main phases of risk management
(planning/scoping, risk analysis, risk prevention /mitigation) are therefore repeated according to a
circular iterative structure, as technological, societal and contextual changes affect some of the relevant
elements of a previous assessment (Art. 27(2), AI Act).
Management Phase (see Section 4.3): (i) the identification of appropriate measures to prevent or
mitigate the risk, (ii) the implementation of such measures, and (iii) the monitoring of the functioning of
the AI system in order to revise the assessment and the measures adopted.
[Figure: circular structure of the risk management phase – analysis of the level of impact, identification of appropriate measures, implementation, monitoring]
With regard to the Data Collection and Risk Analysis Phase, given the nature of the assessment, the
data will relate to the different aspects of the rights potentially at stake, including information on the
context of use and the individuals and groups potentially affected. Despite the variety of these elements
and the specific nature of each right and freedom, the analysis phase can be based on key common
parameters.
These parameters make it possible to operationalise an abstract concept such as impact on rights so
that it can be assessed in a way that also makes it easier to (i) compare the level of impact on different
rights in order to prioritise the risk prevention/mitigation, and (ii) understand how the impact on an
individual right may change if some of the system or contextual elements vary.
7 See Article 3(2) of the AI Act, which states that “ ‘risk’ means the combination of the probability of an occurrence of harm and
the severity of that harm”, and Article 25(1) of the GDPR, which refers to “the risks of varying likelihood and severity for rights
and freedoms of natural persons […]”.
changes are made to it. These ordinal variables can therefore be used to ‘measure’ the impact using a
range-based quantification of risk (low, medium, high, very high).
However, the fundamental rights theory does not allow for the creation of a composite index, as is
common in risk assessment, where all potential impacts are combined to create an overall impact index.
This approach conflicts with the legal approach to fundamental rights where each right must be
considered independently, in terms of its potential prejudice, and the fact that one right is less affected
than another cannot lead to any form of compensation.
As the use cases discussed in Section 5 show, it is possible to assess the impact on the different rights
involved, but not to say that a given AI system has an overall impact on rights that is considered as low,
medium or high. The only possible interaction between different interests is through the balancing test
in the presence of conflicting rights, but this test follows the assessment of the level of impact on each
right. The balancing test does not relate to the level of risk to the affected rights, but to the overriding
importance of one interest over another. It should therefore be considered as an external factor, to be
taken into account only after the impact on individual rights has been assessed, and which may influence
the results of the impact assessment by making an impact on certain rights acceptable because of a
prevailing competing interest. 8
Based on these considerations, a FRIA model will define a risk index for each potentially impacted right
using the dimensions of likelihood and severity. The likelihood is understood as a combination of (i) the
probability of adverse outcomes and (ii) the exposure. The first variable relates to the probability that
adverse consequences of a given risk will occur and the second variable relates to the extent to which
people potentially at risk could be affected. As far as exposure is concerned, it should be noted that the
focus is on those potentially exposed to the use of the AI system (the identified population) and not on
the population as a whole.
The severity of the expected consequences is based on two variables: (i) the gravity of the prejudice in
the exercise of rights and freedoms (gravity), 9 based on their attributes, including taking into account
group-specific impact, vulnerability and dependency situations; and (ii) the effort to overcome it and to
reverse the adverse effects (effort).
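To fix these notions in practice, the short sketch below (written in Python; the class and field names are ours and are not part of the FRIA model template) simply records the four ordinal judgements that feed the risk index of a single right. The example values are those assessed for non-discrimination in the use case presented in Part II.

from dataclasses import dataclass
from enum import IntEnum


class Level(IntEnum):
    """Four-range ordinal scale used for every variable (low ... very high)."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4


@dataclass
class RightAssessment:
    """Ordinal judgements collected for one potentially affected right or freedom."""
    right: str
    probability: Level  # probability that adverse consequences of the risk will occur
    exposure: Level     # extent to which the identified population could be affected
    gravity: Level      # gravity of the prejudice to the exercise of the right or freedom
    effort: Level       # effort needed to overcome the prejudice and reverse adverse effects


# Example judgements for a single right; likelihood, severity and the overall index
# are derived later, and indices are never aggregated across different rights.
example = RightAssessment(
    right="non-discrimination",
    probability=Level.MEDIUM,
    exposure=Level.VERY_HIGH,
    gravity=Level.MEDIUM,
    effort=Level.MEDIUM,
)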
Both likelihood and severity need to be assessed on a contextual basis, and the involvement of relevant
stakeholders can be of help. As is common in risk assessment, the estimation of likelihood is based both
on previous cases, looking at comparable situations, and the use of analytical and simulation
techniques, based on possible scenarios of use. The same approaches are also used to estimate the
level of severity, but in this case with greater emphasis on legal analysis regarding the gravity of
prejudice, which should be assessed with reference to the case law on fundamental rights and the
relevant legal framework.
On the basis of the likelihood and severity values derived from the above variables, a risk index is
determined, which indicates the overall impact for each of the rights and freedoms considered. 10 It is
worth noting that these results must be combined with any elements that justify a limitation of some
rights from a legal perspective, such as the mandatory nature of certain impacting characteristics: in this
case, the potential risk must be considered acceptable to the extent that the AI system complies with
the given legal requirements.
8 See e.g., Part II, Use Case 1, below in the case of the development and use of an advanced learning analytics platform, where
some impact on privacy and data protection rights is considered acceptable in view of the benefits for the right to education.
9 The gravity/seriousness of prejudice to a fundamental/human right is usually assessed according to the following
three elements: (i) its intensity, (ii) the consequences of the violation, and (iii) its duration, where the intensity of the
violation is related to the importance of the protected legal interest violated. See also EUROPEAN COURT OF HUMAN RIGHTS 2022.
10 See the following section for the methodology used to combine the different variables and create the indices.
4.2.2 Variables and construction of the impact index
In many risk-based impact assessment models and standards, risk indices are constructed using
matrices because they are relatively easy to use and explain.11 As risk matrices are graphs that combine
two dimensions, using colours to reflect different levels of risk, they are useful for assessing indices
generated by different variables. For this reason, they can be used in the FRIA to define the level of
impact on each right concerned.
The methodology proposed here uses a risk index for each potentially impacted right, based on a matrix
combining two dimensions (likelihood and severity). Each of these dimensions results from the
combination of two pairs of variables, also constructed using matrices: the probability of adverse
consequences, and exposure, for likelihood; the gravity of prejudice, and the effort to overcome it and
to reverse adverse effects, for severity. There is no single risk matrix model to be used in risk
assessment; practice in this field shows a variety of models. The most common are 3x3, 4x4, 5x5, 5x4
and 6x4 matrices, where the pairs of numbers indicate the number of ranges of the two scales defining
the dimension under consideration. As the matrix refers to two independent variables, they can be
evaluated according to scales that may differ in number of ranges, for example a 6x4 scale where six
different ranges are provided for one variable and only four for the other.
The 4x4 matrix may be the most appropriate in the context of FRIA, as it reduces the risk of average
positioning, gives more attention to the high and very high levels in a way that is consistent with the
focus on high risk in the current regulatory approach to AI, and does not excessively fragment the lower
part of the scale, which is less relevant due to the aforementioned focus.
In matrices, descriptive labels are used for the different combinations of levels in the colour scale, as
follows in this example of a severity matrix:

[Example severity matrix: gravity (columns: low, medium, high, very high) combined with effort (rows), each cell labelled on a colour scale from low to very high]
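As a minimal illustration of the mechanics described in this section, the following Python sketch derives likelihood, severity and the overall risk index for one right from the four variables through 4x4 lookup matrices. The cell values produced by placeholder_matrix are illustrative assumptions only: they do not reproduce the guide's colour-coded tables, which also use intermediate labels such as L/M or M/H.

# Illustrative sketch of the matrix-based construction of the risk index.
# The matrix cells below are placeholders, not the guide's official colour-coded values.

LEVELS = ["low", "medium", "high", "very high"]


def placeholder_matrix() -> dict:
    """Fill a 4x4 matrix with a simple placeholder rule (floor of the average rank)."""
    matrix = {}
    for i, row in enumerate(LEVELS):
        for j, col in enumerate(LEVELS):
            matrix[(row, col)] = LEVELS[(i + j) // 2]
    return matrix


LIKELIHOOD_MATRIX = placeholder_matrix()  # exposure (rows) x probability (columns)
SEVERITY_MATRIX = placeholder_matrix()    # effort (rows) x gravity (columns)
OVERALL_MATRIX = placeholder_matrix()     # likelihood (rows) x severity (columns)


def risk_index(probability: str, exposure: str, gravity: str, effort: str) -> dict:
    """Derive likelihood, severity and the overall index for one right or freedom."""
    likelihood = LIKELIHOOD_MATRIX[(exposure, probability)]
    severity = SEVERITY_MATRIX[(effort, gravity)]
    return {
        "likelihood": likelihood,
        "severity": severity,
        # One index per right: results are never combined into a composite index across rights.
        "overall": OVERALL_MATRIX[(likelihood, severity)],
    }


if __name__ == "__main__":
    print(risk_index(probability="high", exposure="low", gravity="medium", effort="medium"))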
4.3 The Risk Management Phase
Following the risk analysis, which has defined the level of impact of the AI solution on potentially affected
rights, it is necessary to manage the identified risks by adopting appropriate measures. 12 The third phase
of the FRIA is therefore articulated in three steps, as follows:
(i) the identification of appropriate measures to prevent or mitigate the risk, taking into account
their impact on the risk level according to a context-specific scenario analysis;
(ii) the implementation of such measures;
(iii) the monitoring of the functioning of the AI system in order to revise the assessment and the
adopted measures should technological, societal and contextual changes affect the level of risk
or the effectiveness of the adopted measures.
As the FRIA is not a final check of an AI solution, but a design tool to guide the development and
deployment of AI towards a fundamental rights-oriented approach, monitoring the functioning of the AI
system can also be part of the pre-market phase in which different design solutions are tested and
implemented in order to select the most appropriate one. In line with the circular approach to risk
assessment and AI design, it is therefore possible that several rounds of risk assessment,
implementation of mitigation measures and re-assessment may take place until the final version of the
AI product/service reaches an acceptable level of residual risk and can be placed on the market or put
into service.
In addition, changes in the technological and societal scenario or in the specific context of use
may occur after the AI tool has been placed on the market or put into service. These may have an
impact on the level of risk previously assessed with respect to the rights concerned, as well as raise
new concerns with respect to other rights. In such cases, the AI solutions adopted will be re-assessed
and appropriate measures taken. 13
In line with these observations, the FRIA model template (see Section 5) includes a matrix showing the
impact of the risk prevention/mitigation measures adopted on the risk levels identified in the risk analysis
phase and the resulting residual risk.
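A possible way of recording this step is sketched below (again in Python and purely illustrative, not part of the model template): for each right, the measures adopted are listed and the residual risk is re-read from the same likelihood × severity matrix once likelihood and severity have been re-assessed. The residual value shown is an assumed outcome, for illustration only.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class RiskTreatment:
    """Record of the risk management phase for a single right or freedom."""
    right: str
    initial_overall: str                               # index resulting from the risk analysis phase
    measures: List[str] = field(default_factory=list)  # prevention/mitigation measures adopted
    residual_overall: Optional[str] = None             # index re-read from the matrix after the measures


treatment = RiskTreatment(
    right="protection of personal data",
    initial_overall="medium",
    measures=["Restrict access to data: full access to tutors and only aggregated data to teachers"],
)

# After implementation, likelihood and severity are re-assessed and the matrix is applied again;
# monitoring repeats this cycle whenever technological, societal or contextual changes occur.
treatment.residual_overall = "low"  # assumed outcome, for illustration only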
Planning and Scoping Questionnaire

Section D – Stakeholder engagement and due diligence
- Who are the main groups or communities potentially affected by the AI system, including its development?
- Which stakeholders should be involved in addition to the individuals or groups potentially affected by the AI system (e.g. civil society and international organisations, experts, industry associations, journalists)?
- Are there other duty bearers that should be involved in addition to AI providers and deployers (e.g. national authorities, government agencies)?
Risk matrices

Tab. 1 Probability
[Four-level descriptive scale: low, medium, high, very high]

Tab. 2 Exposure
Low – Few or very few of the identified population of rights holders are potentially affected

Tab. 3 Likelihood
[Matrix combining probability (columns: low, medium, high, very high) with exposure (rows) to give the likelihood level]

Tab. 4 Gravity
Low – Affected individuals and groups may encounter only minor prejudices in the exercise of their rights and freedoms.
Medium – Affected individuals and groups may encounter significant prejudices.
High – Affected individuals and groups may encounter serious prejudices.
Very high – Affected individuals and groups may encounter serious or even irreversible prejudices.

Tab. 5 Effort to overcome the prejudice and to reverse adverse effects
Low – Suffered prejudice can be overcome without any problem (e.g. time spent amending information, annoyances, irritations, etc.)
Medium – Suffered prejudice can be overcome despite a few difficulties (e.g. extra costs, fear, lack of understanding, stress, minor physical ailments, etc.)
High – Suffered prejudice can be overcome albeit with serious difficulties (e.g. economic loss, property damage, worsening of health, etc.)
Very high – Suffered prejudice may not be overcome (e.g. long-term psychological or physical ailments, death, etc.)

Tab. 6 Severity
[Matrix combining gravity (columns: low, medium, high, very high) with effort (rows) to give the severity level]

Tab. 7 Overall risk impact
[Matrix combining severity (columns: low, medium, high, very high) with likelihood (rows) to give the overall risk index]
Tab. 2A Risk management (I)
[Template table: rights/freedoms affected – likelihood – severity – overall impact – impact prevention/mitigation measures]
With regard to the selection of the cases, it is worth noting that this is an ongoing project and the
cases presented in this report are the first to have been discussed and in which the FRIA template
has been applied. Other cases are under evaluation and will be published in the future on the
website “DPD en xarxa” (https://www.dpdenxarxa.cat/) and on the official website of the Catalan
Data Protection Authority (https://www.apdcat.cat). Moreover, some cases in which the FRIA
template has been applied, with a relevant impact on the design of AI solutions, have not been
included in this report for reasons of confidentiality, but have been useful for all participants to
better elaborate the practice of the FRIA template.
In terms of the areas covered, the use cases relate to four of the key areas listed in Annex III of
the AI Act, namely education (assessment of learning outcomes and prediction of student
dropout), workers’ management (decision support for human resource management), access to
healthcare (cancer treatment based on medical imaging), and welfare services (voice assistant
for elderly people), which also represent the areas where AI solutions are increasingly being used,
with the greatest impact on individuals and groups. In this sense, the nature of the use cases
discussed will also make them useful to many other public and private entities in other countries
interested in designing AI systems/models that are compliant with fundamental rights in these
core areas.
In line with the aim of this project, the use cases are presented as they were developed by the
participants, rather than as best practice or standardised cases. The project was designed to test
the effectiveness of the model template and associated methodology. In line with DPIA
experience, we have given participants the freedom to develop the different parts of the template
according to their approach, so that some analyses are more extensive and others more concise.
However, the core elements (the questions, the matrices, the assessment methodology) remain
the same.
The main idea is that this report should reflect the exercise as it was actually carried out and show the
results obtained, rather than present the FRIAs as fictional, perfect cases.
The FRIA has been and will be implemented by a variety of actors, in some cases in more detail,
in others with some limitations, but to the extent that it contributes to effective analysis and
prevention/mitigation of the impact on fundamental rights, it will have achieved its main objective.
References
All the websites cited in this document were accessed between September and December
2024.
APDCAT. 2024. Avaluació d’impacte relativa a la protecció de dades, https://apdcat.gencat.cat/web/.content/03-documentacio/Reglament_general_de_proteccio_de_dades/documents/Guia-Practica-avaluacio-impacte-proteccio-de-dades-2019.pdf.
APDCAT. 2020. Artificial Intelligence. Automated Decisions in Catalonia, Informe-IA-Angles-Report-Final.pdf.
Article 29 Data Protection Working Party. 2017. Guidelines on Data Protection Impact
Assessment (DPIA) and determining whether processing is “likely to result in a high risk” for the
purposes of Regulation 2016/679, WP 248 rev.01,
https://ec.europa.eu/newsroom/article29/items/611236.
CNIL. 2018. Privacy Impact Assessment (PIA). Templates, https://www.cnil.fr/en/privacy-impact-
assessment-pia.
European Court of Human Rights. 2022. Guide on Article 8 of the European Convention on
Human Rights. Right to respect for private and family life, home and correspondence,
https://ks.echr.coe.int/web/echr-ks/article-8.
Mantelero, A. 2024. The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots,
legal obligations and key elements for a model template. 54 Computer Law & Security Review
106020, https://doi.org/10.1016/j.clsr.2024.106020 (open access).
Mantelero, A. and Esposito, M.S. 2021. An evidence-based methodology for human rights
impact assessment (HRIA) in the development of AI data-intensive systems. 41 Computer Law
& Security Review 105561, https://doi.org/10.1016/j.clsr.2021.105561 (open access).
Slattery, P. et al. 2024. A systematic evidence review and common frame of reference for the
risks from artificial intelligence. http://doi.org/10.13140/RG.2.2.28850.00968 and
https://airisk.mit.edu/.
United Nations, AI Advisory Board. 2024. Governing AI for Humanity.
https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf.
UNDP. 2024. Visual guide. Business Process for Risk Management,
https://popp.undp.org/sites/g/files/zskgke421/files/2024-
05/Risk_Management_Full_Visual_Guide_13.pdf.
Part II – Use cases

Use case 1: An advanced learning analytics platform

1. The context

The following use case relates to one of the major problems of the higher education system, namely the early drop-out of the 18-24 year old population from education and training. This situation has given rise to concern at European level and is reflected in the European statistical data available in the Eurostat database.14 This is why priority 1 of the strategic framework for European cooperation in education and training for the European Education Area (EEA) and beyond (2021-2030),15 is “improving quality, equity, inclusion and success for all in education and training”. Although early school leaving has been reduced over the last decade, it remains a challenge. In order to avoid limiting young people's access to future socio-economic opportunities, particular attention needs to be paid to groups at risk of low educational attainment and early school leaving.

Higher education institutions need to promote educational strategies that support successful completion of education and training pathways, reduce early drop-out rates, and address the causes of underachievement. It is therefore important to be discerning about the data available, to structure it, to extract the information it provides and to use it for the specific purpose we want to achieve.

In order to identify where each student is at a given point in their studies and to be able to predict what will happen next, the following data may be relevant:
- Historical data on student careers collected in previous years
- Data provided by the previous school
- Data provided by the student himself/herself at the time of enrolment
- Data collected during the course of his/her studies.

The information provided by these datasets can be used to identify patterns in student performance. In addition to having a global view of the situation of the entire student population in the same programme and its future development, it is also possible to have an individualised view of each student’s situation and to anticipate his/her future development. This information helps to promote strategies based on the student’s current situation, offering personalised treatment and taking into account the needs of each student.

The analysis of this data and its interpretation for improvement and progress in the field of education falls under the umbrella of what is known as "learning analytics". At the 1st International Conference on Learning Analytics and

14 Access to Eurostat: https://ec.europa.eu/eurostat
15 Council Resolution on a strategic framework for European cooperation in education and training for the European Education Area and beyond (2021-2030): https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32021G0226(01)
Knowledge (LAK) in 2011, learning analytics was defined16 as the measurement, collection, analysis and reporting of data about learners and their contexts for the purposes of understanding and optimising learning and the environments in which it takes place.

Although learning analytics has long been researched and used to predict students' academic success and risk of dropping out of school, the emergence of new technologies that provide alternative analytical techniques has highlighted the need to address the legal requirements arising from recent regulations in order to make proper use of them.

Given the current situation in which we find ourselves, in which artificial intelligence systems (hereafter referred to as AI systems) are becoming a regular component of our daily tasks, it is essential that institutions identify and understand their risks in order to prevent, minimise and manage them, and make the best use of them as allies in the modernisation and digitisation of the university system.

Some of the difficulties identified, apart from the expertise required to interpret the information obtained from the data, relate to the difficulty for institutions and their staff to adapt easily to new technologies in their teaching approaches and methodologies.

With the resources available in the field of education, it is a fact that higher education institutions will need to introduce AI systems into those processes where human intervention may be limited. We need to be aware of the changes that are occurring and that may occur, such as the advent of generative AI17 in the student learning model. Thus, if we decide to use the available AI systems, appropriate tools could be put in place in advance to protect the rights and freedoms of students from an early stage and to avoid harming their integration into the university system and, therefore, the student population. However, there are still challenges to be addressed, such as data bias and ethical dilemmas that may arise, as well as issues related to AI development, barriers and resistance to the integration of AI systems in some areas that may hinder or slow down the growth in these areas. In our case, this could lead to the obsolescence of the university system and the opposite of the desired effect, which could lead to a setback in the teaching and training of young people.

When the use of AI systems is being considered, it is necessary to analyse and reflect on the different approaches and cases that we may encounter from the very beginning, in order to take all the necessary precautions and to establish the technical and organisational measures to build solid solutions that guarantee the protection of young people, respecting human rights and social values.

The rapid increase in the use of AI systems in education is transforming the way we teach and learn, with a direct impact on institutions, their staff and the students themselves. This change goes hand in hand with the need to equip staff with new skills18 to deal with this new and evolving technological landscape, and therefore to be empowered to use the information provided by AI systems.

The use of AI in education raises fundamental questions, which is why the European Artificial Intelligence Act (AI Act) itself has identified as high-risk AI systems “AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels” (Annex III, section 3(b)).

16 https://www.solaresearch.org/about/what-is-learning-analytics/
17 See the definition of Generative AI and how it works in UNESCO’s “Guidance for generative AI in education and research”, 2024: https://unesdoc.unesco.org/ark:/48223/pf0000386693
18 See UNESCO. 2024. AI competency framework for teachers, https://unesdoc.unesco.org/ark:/48223/pf0000391104.
Although there are sectors that are reluctant to use AI in their processes because of what it might imply, it is undeniable that a good use of AI in education seems to be enriching in terms of bringing learning closer to the new generations of students.

In the absence of integration with the educational institution's own application, there are already learning analytics platforms on the market that compile information obtained from student self-reported data in the institution's system “Student Information System (SIS)” and from data generated by Learning Management Systems (LMS) to identify and/or predict students at risk of dropping out. For example:
- Assessment and Learning in Knowledge Spaces (ALEKS)20
- DreamBox21
- Carnegie Learning22
- Smart Sparrow23
- The IntelliBoard19

Learning analytics, by anticipating the different scenarios that may arise, reduces the effort needed to lower the drop-out rate and facilitates the adoption of more effective policies and measures that focus on the root of the problem. The provision of appropriate and personalised tools for education, tailored to the needs of individual students, can enable young people to continue their personal growth in education, which can then be transferred to their professional lives. For this reason, considering the potential of AI systems to transform the current notion of education, the use case that has been carried out has focused on the framework of learning analytics using a high-risk AI system based on a predictive algorithm.

As you can see in the following sections, the use case that has been presented and analysed has left out the use of automated decision algorithms (ADA). To learn more about the use cases of ADAs, see the report “Artificial Intelligence. Automated Decisions in Catalonia”24 prepared by the Catalan Data Protection Authority (APDCAT).

2. The project

The project aims at the design and development of a new ‘Learning Analytics’ ecosystem for the higher education system, with the creation of an advanced learning analytics platform using an AI system to assess learning outcomes and predict the risk of students dropping out. In particular, the platform will be used to:
- Manage data related to teaching and learning processes;
- Develop dashboards that monitor, analyse and visually display both the results of all students as a whole and the individual results of each student according to pre-defined indicators;
- Design and modulate the methodology used;
- Analyse students' academic performance;
- Identify students at risk of dropping out.

The data used within the framework of the platform aims to detect early, quickly and effectively the risk of students dropping out of their studies. Similarly, the ultimate purpose of using the data is to improve the learning process through psycho-pedagogical counselling of the student population.

19 Real application cases of The IntelliBoard platform: https://intelliboard.net/customers/
20 https://www.aleks.com
21 https://www.dreambox.com
22 https://www.carnegielearning.com
23 https://www.smartsparrow.com
24 https://apdcat.gencat.cat/web/.content/03-documentacio/intelligencia_artificial/documents/Informe-IA-es.pdf
It should not be forgotten that, in relation to the academic training they receive, students have the right to tutoring and counselling by the staff of the institution in order to have appropriate guidance for their learning process.

In order to provide students with personalised and timely information about their learning, the following information is needed:
1. Aggregated historical data generated in learning management systems (LMS), as well as data collected from the institution's internal databases on students from previous years who have passed or dropped out of higher education.
2. Data coming directly from the educational institutions attended by the student before entering the higher education system.
3. Data requested by the higher education institution during the enrolment process and provided by the student (SIS).
4. Data obtained from the student's performance during their careers (academic record information).
5. Data relating to personal circumstances during the student’s career (e.g. if the student combines study with work, if the student has received scholarships, if the student moves to another institution, etc.).

In order to provide a better context for the predictive algorithm, and thus obtain more reliable results, the student population has to be characterised according to the data available at any given time. Before the start of their university studies, students must be grouped according to the information provided by the secondary education institutions before the higher education entrance examination and by the students themselves at the time of enrolment. This initial profiling of the student population will provide a first insight into the potential performance of the student. Depending on the data obtained, the classification of students may vary according to the metrics and indicators defined from the historical data. To ensure compliance with the data requirements of the AI Act, the datasets used to train the AI system for student classification and profiling are anonymised and untraceable.

Although the following section deals with the concrete assessment of the use case, it is worth mentioning some of the different aspects that were considered, in order to better understand the context of the use case. First, the use of certain special categories of personal data for this AI application, such as health-related data providing information on students with special educational needs and disabilities (SEND), was considered in the evaluation of the project, but excluded in this case.

Second, changes were made to the dashboards and the original idea of using different dashboards depending on the data selected and its relevance to those who could access it. A three-layered representation was proposed, based on different colours depending on the student's situation at any given moment: green colour, indicating that the student has a probability of not dropping out of more than 80%; yellow colour, indicating a probability between 20% and 60%; and red colour, indicating that the student has a probability of dropping out of higher education of more than 50%. During the discussion of the use case, the following changes were made:
- It was decided that access to the full range of information from the dashboards should be restricted to tutors, so that they could be aware of the predicted risk of students dropping out and act on it in accordance with their competences and the regulations of the higher education system.
- The information that appeared on the teachers' dashboard was restricted, limiting their access to aggregated data on the performance of the whole student population within their classes. The lack of access to individualised information about each student avoids the side effect of unconsciously stigmatising the small group of underperforming students from the outset.
- The visualisation of the dashboards by the students was also removed in order to protect the mental and emotional health of the
students and not to provoke situations of stress and/or anxiety,
among others, due to the fact that they see a certain colour
indicating their academic performance.
The assessment also showed that there is information about events that
affect students’ performance (e.g. death of a close relative) that is not taken
into account by the system. It is therefore important to treat the potential risk
of dropping out on a case-by-case basis and to collect the related relevant
information that needs to be taken into account, not only at an individual
level, but also to consider its inclusion in the variables that feed into the
predictive AI system.
A Data Protection Impact Assessment (DPIA) was conducted to complement
the FRIA. Although the former assessment is not included in this document,
it led to changes in relation to data protection, applying the principles set out
in the General Data Protection Regulation (GDPR). These changes have
enriched the final version of the FRIA.
3. The FRIA

What are the main objectives of the AI system?
a) To provide indicators of academic performance;
b) To predict the likelihood of dropping out of higher education;
c) To contribute to the improvement of the learning process.

In which countries will it be offered?
Spain. It may be extended to foreign higher education institutions (inside or outside the European Union) with which joint degree/exchange agreements have been formalised.

What types of data are processed (personal, non-personal, special categories)?
· Aggregated historical data on the student population from previous years.

Identification of duty bearers: who is involved in the design, development and deployment of the AI system? What is their role?
Higher education institutions wishing to implement the AI system will be responsible for its design, delivery and development. The management of the data will involve the staff of the institution. Furthermore, the processing of the information provided by the data in the tracking dashboards will be visualised differently according to the user profile: (i) students, (ii) teaching staff, (iii) tutors and (iv) managers.

What fundamental rights are potentially affected by the use of the AI system?
Having analysed all the individual, civil, political, economic and social rights set out in the Charter of Fundamental Rights of the European Union, it has been concluded that the following rights are potentially affected:
☒ Human dignity (Article 1)
☒ Respect for private and family life (Article 7)
☒ Protection of personal data (Article 8)
☒ Non-discrimination (Article 21)

What international/regional legal instruments for the protection of human/fundamental rights have been implemented at the operational level?
The regulations concerning personal data protection and the university system, as well as those concerning the groups potentially affected (e.g. the Statute of University Students).

What are the most relevant fundamental rights courts or bodies in the context of use?
The data protection supervisory authorities of the country/region where the AI system is developed and used, as well as the competent courts of that country/region.

What policies and procedures are in place to assess the potential impact on fundamental rights, including stakeholder participation?
Not applicable (N/A).

Section C – Controls in place

Has an impact assessment been conducted, developed and implemented in relation to specific issues (e.g. data protection) or certain features of the system (e.g. use of biometrics)?
A personal data protection impact assessment (DPIA) has been carried out.

Are there other duty bearers that should be involved in addition to AI providers and deployers (e.g. national authorities, government agencies)?
Data protection supervisory authority; Department of Education/Universities; Spanish AI Supervisory Agency; AI Commission of the Generalitat de Catalunya. Competent unit/body of the institution.

Have business partners, including service providers (e.g. subcontractors for AI systems and datasets), been involved in the assessment process?
No.

Have the AI provider and AI deployer publicly communicated the potential impact of the AI system on fundamental rights?
Publication is not mandatory, but it is advisable to include a summary of the analysis conducted (both of the DPIA and the FRIA), in view of the principle of transparency and trust in the AI system.

Have the AI provider and AI deployer provided training on fundamental rights standards to management and procurement staff dealing with the AI system?
Not applicable (N/A).
Risk matrices

Tab. 1 Probability
[Four-level descriptive scale: low, medium, high, very high]

Tab. 2 Exposure
Low – Few or very few of the identified population of rights holders are potentially affected

Tab. 3 Likelihood
[Matrix combining probability (columns: low, medium, high, very high) with exposure (rows); first row, exposure low: L, L/M, L/H, L/VH]

Tab. 4 Gravity
Low – Affected individuals and groups may encounter only minor prejudices in the exercise of their rights and freedoms
Medium – Affected individuals and groups may encounter significant prejudices
High – Affected individuals and groups may encounter serious prejudices
Very high – Affected individuals and groups may encounter serious or even irreversible prejudices

Tab. 5 Effort to overcome the prejudice and to reverse adverse effects
Low – Suffered prejudice can be overcome without any problem (e.g. time spent amending information, annoyances, irritations, etc.)
Medium – Suffered prejudice can be overcome despite a few difficulties (e.g. extra costs, fear, lack of understanding, stress, minor physical ailments, etc.)
High – Suffered prejudice can be overcome albeit with serious difficulties (e.g. economic loss, property damage, worsening of health, etc.)
Very high – Suffered prejudice may not be overcome (e.g. long-term psychological or physical ailments, death, etc.)

Tab. 6 Severity
[Matrix combining gravity (columns: low, medium, high, very high) with effort (rows); first rows – effort low: L, L/M, L/H, L/VH; effort medium: M/L, M, M/H, M/VH]
Tab. 1A Data collection and risk analysis

Human dignity
Description of the impact: The algorithm takes into account some parameters generated by historical data collected within a given socio-economic context, but not all those that could have a direct influence on current academic performance (e.g. requested teaching improvements/adaptations; people who do not identify with a particular gender; access to new technologies, etc.).
Probability of adverse outcomes: [High]. The likelihood of the risk occurring is high because the information derived from the available data does not include all information on all potentially affected groups.
Exposure: [Low]. The exposure is low as it concerns a limited number of cases of missing information.
Likelihood: [Medium]
Gravity: [Medium]. Although the omission of certain parameters may only concern small groups, the students affected may be significantly biased. The AI algorithm may predict performance for this small group that does not reflect their situation.
Effort: [Medium]. The prejudices suffered can be overcome despite some difficulties. In the case of students requesting teaching improvements, teaching staff and tutors will be informed in advance. As for the other omitted parameters and the associated negative impact on minority groups, they can be taken into account when improving the AI system.
Severity: [Medium]

Respect for private and family life
Description of the impact: Constant monitoring of academic performance; impact on family privacy; impact on ‘decisional privacy’.
Probability of adverse outcomes: [High]. There is a high probability of the risk occurring. Although the different procedures in place in the relevant institution already process the different information separately, the fact that certain information is now collected together for the purposes of this project has an impact on the control of this information.
Exposure: [Very high]. The exposure is very high, as it would affect all students.
Likelihood: [Very high]
Gravity: [Low]. The students concerned may encounter minor prejudices in the exercise of their rights and freedoms, since the information is used within the framework of the institution's educational functions.
Effort: [Low]. Higher education institution staff have duties and obligations to safeguard the rights of students within the framework of their functions. The institution must also train its staff in this area so that they are aware of the applicable regulations and can act in the different situations they may face.
Severity: [Low]

Protection of personal data
Description of the impact: The AI system collects large-scale data and uses new technologies. It also profiles students to assess and predict their risk of dropping out.
Probability of adverse outcomes: [Medium]. There is a risk of inaccurate profiling and prediction. A data protection impact assessment is required. Consideration has been given to whether this is a case where students also have the right not to be profiled.
Exposure: [Very high]. The exposure is very high, as it would affect all students.
Likelihood: [High]
Gravity: [Medium]. Inaccurate profiling negatively impacts on the accurate representation of student performance and expected outcomes.
Effort: [Medium]. Applying the GDPR, appropriate organisational measures must be in place and protect students’ rights in relation to data processing, but the way profiles are generated and used may require some changes in the AI system’s design (e.g. fine-tuning) and use.
Severity: [Medium]

Non-discrimination
Description of the impact: Given that the AI system compares historical data, obtains data from other institutions and collects enrolment data, there may be historical discriminatory biases that may be perpetuated and amplified; failure to take into account certain factors or variables that may be relevant; predictive nature of the evaluation.
Probability of adverse outcomes: [Medium]. The likelihood is medium given the limited weight of the variables in the consideration of the predictive model of the risk of dropout.
Exposure: [Very high]. The exposure is very high, as the impact would potentially affect all students to whom the algorithm would be applied.
Likelihood: [High]
Gravity: [Medium]. The classification of students may be biased and provide misleading information on early indicators of drop-out risk, resulting in unjustified unequal treatment.
Effort: [Medium]. The classification of students is not static, so the initial data will not place them in a particular cluster, but may change as they progress through their studies.
Severity: [Medium]
Tab. 7 Overall risk impact (matrix combining Likelihood, on the rows, with Severity, on the columns; levels: Low, Medium, High, Very high)
Tab. 2A Risk management (I)
(Columns: Rights/freedoms affected | Likelihood | Severity | Overall impact | Impact prevention/mitigation measures)

Human dignity: Likelihood [Medium], Severity [Medium], Overall impact [Medium]
Impact prevention/mitigation measures:
· Use of predictive modelling as a decision support tool rather than an automated decision-making tool; limited use of the results provided by the AI system.
· Not providing students with drop-out risk rates.
· Provide the institution's staff with guidelines for the use of the AI system (usage policy).

Respect for private and family life: Likelihood [Very high], Severity [Low], Overall impact [Medium]
Impact prevention/mitigation measures:
· Design the predictive model in a way that ensures control of the data at all times.
· Limit access to individual profiles. Students should not be able to view other students' profiles.
· The predictive tool must not take into account the interactions and communications that students have with the teaching staff or with each other.

Protection of personal data: Likelihood [High], Severity [Medium], Overall impact [Medium]
Impact prevention/mitigation measures:
· The tool should be used as a support tool for the adoption of educational measures and not as an automated decision-making tool.
· Restrict access to data: full access to tutors and only aggregated data to teachers.

Non-discrimination: Likelihood [High], Severity [Medium], Overall impact [Medium]
Impact prevention/mitigation measures:
· Periodically check that the data entered into the databases does not generate discriminatory profiles.
· Periodically revise the initial profiling criteria as new data is added to the database, so that new data can mitigate potential biases.
· Periodically check that the prediction model is not discriminatory and that the AI design is sensitive to discrimination and potential bias.
Tab. 3A Risk management (II)
4. Comments

Being part of this working group, set up by the Catalan Data Protection Authority (APDCAT), has made it possible to work with other people from other sectors. This has been a great improvement in the analysis of the use case presented, as it has brought different perspectives and sensitivities to the evaluation of each of the fundamental rights.

Conducting the FRIA highlighted the importance of adopting a comprehensive approach to risk identification, covering all fundamental rights and including mitigation measures. One of the most challenging aspects has been the assessment of residual risks, as determining the resulting risk after the envisaged mitigation measures makes scenario analysis in the area of fundamental rights not easy.

The approach to the analysis from the perspective of a Data Protection Officer (DPO) highlights the priority of safeguarding a right (that of personal data protection) that is regulated in detail compared to other fundamental rights, as well as the need to address it together with other closely related rights in order to fulfil the obligations deriving from the entire legal framework.

The exercise also highlighted:
- The need for fundamental rights training for all actors involved.
- The need to avoid checklists for fundamental rights compliance, as they cannot go into depth on the different aspects of fundamental rights.
- The need to raise awareness of the impact of AI systems in education.
- The need to understand the definition of ‘AI system’ in the AI Act (Article 3.1) and the implications of AI systems that are considered high risk in Annex III.
- The need to adopt an evaluation by default and by design.
- The need to raise awareness among those responsible for deploying high-risk AI systems of the importance of conducting and making available the FRIA.
- The need to review and adapt the assessment as the relevant context changes.
- The need to coordinate the FRIA with the DPIA in performing them.
Use case 2: A tool for managing human resources

1. The context

This use case is framed within the people selection process that the entity (a private company) has implemented; it is therefore guided by the principles that the entity has equipped itself with, and it is carried out by its Human Resources department, directly or through suppliers.

In particular, this entity has defined different levers in its people management master plan, in order to: i) promote an exciting team culture, committed to the new project, collaborative and agile, while promoting close, motivating, non-hierarchical leadership with transformative capabilities; ii) promote new ways of working, with respect for diversity, equal opportunities, inclusion and non-discrimination, and incorporating sustainability in Human Resources processes; iii) transform the management of the people development model, making it more proactive in the training of teams, with a focus on critical skills; iv) develop a unique and differential value proposition for the employee; and v) evolve towards a data-driven culture of the people function, through the optimisation of the data structure and the application of artificial intelligence and new technologies to facilitate the analysis of information and make data-based decisions in relation to people.

Within the framework of this last objective, the use case presented below was proposed, analysed and, finally, implemented. Before going into detail, it is worth noting that the entity is a mature organisation in terms of data protection and information security compliance schemes, and advanced in relation to artificial intelligence and its governance, to the extent that it had adopted and implemented the following measures, among others:

1. The creation and implementation of internal methodologies for the development and implementation of artificial intelligence systems, which include the 144 controls established by the Spanish Data Protection Agency (AEPD) in its guide “Audits of data processing activities that include artificial intelligence”. This allows AI systems developed internally to comply, by default and by design, with a wide range of controls, such as their inventory, their relation to the data processing they serve, the evaluation of their need and proportionality, the assessment of the quality of the data (including, but not limited to, the analysis and mitigation of possible biases), and their explainability, transparency and robustness under both the Artificial Intelligence Act and the General Data Protection Regulation, as well as measures in the field of validation and verification of the quality of the system.

2. The analysis of these use cases prior to their implementation within the framework of the Data Protection Impact Assessment, with the extension of its purpose, by the legal teams of Innovation and Privacy Law and Labour Law, the IT/systems team (CDO, Chief Data Officer, where the responsible AI team is included from a technical point of view) and the information security team (CISO, Chief Information Security Officer), which allows a second check on the quality of these systems.

3. The evaluation and sanctioning of these initiatives, where appropriate, by the relevant corporate committees.

2. The project

Development and application of an artificial intelligence system (4 machine learning models) based on the entity's previous experience in the area of personnel selection to fill certain vacancies. In particular, the system performs a very specific and limited task: the prediction of an additional piece of information for each employee, consisting of the probability of his/her suitability for a vacancy, based on data on the employment relationship as well as information on the characteristics of the centre of destination.
Thus, the result generated by the system is integrated as an additional piece
of information in the personnel selection process, which is in any case
managed and led by specialised human resources staff, who can use this
information, together with the rest of the available information and in
accordance with the company’s internal processes, to perform their
selection functions.
In this sense, and for the avoidance of doubt, in the event of a specific vacancy the system allows the aforementioned human resources staff to view the company's employees ranked according to their probability of suitability for that vacancy, whether or not they have applied for it. In
any case, the decision to use this additional information will be made by the
specialised human resources staff. The purpose of the system is therefore
to support and improve the efficiency of the personnel selection process by
providing the human resources department staff with systematised
information that would otherwise have to be collected and structured
manually.
Under no circumstances will the system make a decision.
3. The FRIA
Identification of potential rights holders: who are the individuals or groups likely to be affected by the AI system, including vulnerable individuals or groups?
Staff of the organisation.

Identification of duty bearers: who is involved in the design, development and deployment of the AI system? What is their role?
The human resources department, the legal department in general (including the labour law area), the DPO, the systems department (CDO) and the information security department (CISO). The human resources department has developed and used the tool. The rest are evaluation teams that have accompanied the development and implementation of the system and, where appropriate, have established and implemented the necessary controls, not limited to AI development.

☒ Gender equality

Section B: Fundamental rights context

What international/regional legal instruments for the protection of human/fundamental rights have been implemented at the operational level?
The regulations on the protection of personal data, the regulations on labour relations (e.g. the Workers' Statute and the "Practical guide and tool on the corporations' obligation to provide information on the use of algorithms in the workplace" issued by the Ministry of Labour) and the AI Act.

What are the most relevant fundamental rights courts or bodies in the context of use?
Data Protection Authorities; Ministry of Labour.

What are the most relevant human/fundamental rights decisions and provisions?
Acquis communautaire, both in the area of the fundamental right to data protection and in the area of the right to equality and non-discrimination.
… In addition, the relevant information was shared with the workers' representatives.

Which stakeholders should be involved in addition to the individuals or groups potentially affected by the AI system (e.g. civil society and international organisations, experts, industry associations, journalists)?
Evaluation teams established by the company (DPO, CDO and CISO), as well as the Legal department, including the labour law area.

Section D: Stakeholder engagement and due diligence

Are there other duty bearers that should be involved in addition to AI providers and deployers (e.g. national authorities, government agencies)?
No.

Has the provider promoted fundamental rights standards or audits to ensure respect for fundamental rights among suppliers?
N/A

Have the AI provider and AI deployer publicly communicated the potential impact of the AI system on fundamental rights?
The organisation has communicated the use of the AI system, as well as its purpose, logic and consequences, to its personnel and workers' representatives, in accordance with the provisions of both data protection and labour law regulations.
Risk matrices

Tab. 1 Probability

Tab. 2 Exposure
Low: Few or very few of the identified population of rights holders are potentially affected.

Tab. 3 Likelihood (Exposure on the rows, Probability on the columns)
Probability: Low | Medium | High | Very high
Exposure Low: L | L/M | L/H | L/VH

Tab. 4 Gravity of the prejudice
Low: Affected individuals and groups may encounter only minor damages or inconveniences in the exercise of their rights and freedoms.
Medium: Affected individuals and groups may encounter significant damages or inconveniences.
High: Affected individuals and groups may face serious damages or inconveniences.
Very high: Affected individuals and groups may encounter serious or even irreversible damages or inconveniences.

Tab. 5 Effort to overcome harm and reverse adverse effects
Low: The damages suffered can be overcome without problems (e.g. time spent modifying information, discomfort, irritation, etc.).
Medium: The damages suffered can be overcome despite some difficulties (e.g. additional costs, fear, misunderstanding, stress, small physical ailments, etc.).
High: The damages suffered can be overcome although with serious difficulties (e.g. economic loss, material damage, deterioration of health, etc.).
Very high: The damages suffered may not be overcome (e.g. long-term psychological or physical ailments, death, etc.).

Tab. 6 Severity (matrix combining Effort, on the rows, with Gravity, on the columns; severity levels: Low, Medium, High, Very high)
Tab. 1A Data collection and risk analysis

Rights/freedoms potentially affected: Non-discrimination and gender equality
… include and affect gender equality. … contained in the training data. … are the subject matter experts and, in any case, take the lead in the selection of personnel; the system does not make decisions. … management process itself or subsequently by human resources staff, it is rated as high due to the possibility of it being detected after the vacancy has been filled.

Rights/freedoms potentially affected: Workers' right to information and consultation
Description of the impact: The system is included in the framework of the management of the Entity's personnel selection, so a breach of labour regulations, in particular in relation to the obligations to inform workers or their representatives, could affect this right.
Probability of adverse outcomes: [Low] The Legal Department in general, including the labour law area, has been involved in both the creation and use of the system. The procedures in place ensure that information obligations towards employees and their representatives are met.
Exposure: [Very high] The impact potentially affects everyone to whom the algorithm is applied.
Likelihood: [Medium]
Gravity: [Low] The potential damage consists in the lack of mandatory information to workers' representatives.
Effort: [Low] Mitigating this potential damage can be easily achieved by addressing the lack of information.
Severity: [Low]
Tab. 7 Overall risk impact (matrix combining Likelihood, on the rows, with Severity, on the columns; levels: Low, Medium, High, Very high)
Tab. 2A Risk management (I)
(Columns: Rights/freedoms affected | Likelihood | Severity | Overall impact | Impact prevention/mitigation measures)

Data protection: Likelihood [Medium], Severity [Low], Overall impact [Low]
Impact prevention/mitigation measures: N/A. Measures already taken in the development of the AI algorithm and before its use (e.g. carrying out the DPIA, providing information, etc.).

Non-discrimination and gender equality: Likelihood [Low], Severity [Medium], Overall impact [Low]
Impact prevention/mitigation measures: Train human resources staff to avoid over-reliance on the output of the system.

Workers' right to information and consultation: Likelihood [Medium], Severity [Low], Overall impact [Low]
Impact prevention/mitigation measures: N/A. Measures already taken, both in the development of the algorithm and before its use. Mandatory information has been provided to both the company's staff and the workers' representatives, in accordance with the models established by the Ministry of Labour.
4. Comments
As explained above, the company is equipped with structures, procedures
and controls, by default and by design, that focus on several of the issues
covered by the FRIA. This has enabled the FRIA of the use case to result in
low levels of impact risks. These structures, procedures and controls allow
the system to be developed in a controlled framework, which leads the data
scientist to include, by default, certain measures in the development and
implementation of the AI system that mitigate the risks identified by the
company from the outset.
In addition, the involvement of the various evaluation teams and their support in the development of the system allows risks that were not initially identified to be detected and mitigated at the time of development and implementation.
Use case 3: An AI-powered medical imaging tool for cancer detection

1. The context

In Europe, significant progress is being made in the development of AI tools using cancer images. For example, in 2012 a research team at Universiteit Maastricht proposed the concept of ‘radiomics’, which refers to the method of extracting a large number of features from medical images using data characterisation algorithms. The increasing development of AI systems aimed at using medical images to treat cancer can be illustrated by the number of publications on ‘AI radiomics’ on the PubMed portal (42 results in 2019, 99 in 2020, 165 in 2021, 235 in 2022, 309 in 2023 and 338 in 2024). There is therefore a type of artificial intelligence system that will become increasingly common not only in the academic world, but also in the world of healthcare.

In addition, there are common types of cancer in the world where patients receive a high degree of overtreatment and considerable avoidable effects.

Consequently, the use of AI systems to analyse patients' medical images would provide healthcare professionals with a support tool for predicting the response to therapy and, consequently, adjusting the therapy to be as efficient as possible, i.e. to achieve the target goal with the minimum treatment of the patient. Moreover, these AI systems would provide both healthcare professionals and patients with a prediction of the patient's evolution over the coming years.

2. The project

The project is divided into two phases. The first phase consists of the development of an AI system based on medical images, which is trained with data from 5,000 patients from ten countries in Europe. The training dataset is therefore a multi-centre dataset.

In addition, a second phase of the project will involve the validation of the AI system in eight healthcare centres around the world outside Europe. The aim of this second phase is to test the AI system in one health centre in Asia, one in Africa and one in South America.
3. The FRIA
Identification of potential rights holders: who are the individuals or groups likely to be affected by the AI system, including vulnerable individuals or groups?
People between 18 and 85 years of age. As all the people involved are affected by cancer, they should be considered vulnerable because of their health conditions and the relationship between these conditions and the purpose of the AI system.

Identification of duty bearers: who is involved in the design, development and deployment of the AI system? What is their role?
Hospitals and research centres are involved in the design, the latter only in the design of the AI system and the former also in the related health treatment.

Section B: Fundamental rights context

What international/regional legal instruments for the protection of human/fundamental rights have been implemented at the operational level?
Universal Declaration of Human Rights, EU Charter of Fundamental Rights, applicable data protection regulations.

What are the most relevant fundamental rights courts or bodies in the context of use?
Data protection supervisory authorities in the country/region where AI systems are developed and used, as well as courts, the Court of Justice of the European Union and the European Court of Human Rights.
Section C: Controls in place

What policies and procedures are in place to assess the potential impact on fundamental rights, including stakeholder participation?
A specific ethics committee will be established for the project.

Who are the main groups or communities potentially affected by the AI system, including its development?
Patients with cancer X [anonymised].

Which stakeholders should be involved in addition to the individuals or groups potentially affected by the AI system (e.g. civil society and international organisations, experts, industry associations, journalists)?
Cancer patient associations.

Are there other duty bearers that should be involved in addition to AI providers and deployers (e.g. national authorities, government agencies)?
Data protection supervisory authority, local health department, scientific research ethics committee, AI supervisory authority.
Has the AI provider carried out an assessment of its supply chain to determine whether the activities of suppliers/contractors involved in product/service development may affect fundamental rights?
N/A

Has the provider promoted fundamental rights standards or audits to ensure respect for fundamental rights among suppliers?
N/A
Risk matrices

Tab. 1 Probability

Tab. 2 Exposure
Low: Few or very few of the identified population of rights holders are potentially affected.

Tab. 3 Likelihood (Exposure on the rows, Probability on the columns)
Probability: Low | Medium | High | Very high
Exposure Low: L | L/M | L/H | L/VH

Tab. 4 Gravity of the prejudice
Low: Affected individuals and groups may encounter only minor prejudices in the exercise of their rights and freedoms.
Very high: Affected individuals and groups may encounter serious or even irreversible prejudices.

Tab. 5 Effort to overcome the prejudice and to reverse adverse effects
Low: Suffered prejudice can be overcome without any problem (e.g. time spent amending information, annoyances, irritations, etc.).
Medium: Suffered prejudice can be overcome despite a few difficulties (e.g. extra costs, fear, lack of understanding, stress, minor physical ailments, etc.).
High: Suffered prejudice can be overcome albeit with serious difficulties (e.g. economic loss, property damage, worsening of health, etc.).
Very high: Suffered prejudice may not be overcome (e.g. long-term psychological or physical ailments, death, etc.).

Tab. 6 Severity (matrix combining Effort, on the rows, with Gravity, on the columns; severity levels: Low, Medium, High, Very high)
Tab. 1A Data collection and risk analysis

Rights/freedoms potentially affected: Protection of personal data
… with the applicable personal data protection regulations may affect this right. … privacy of individuals. … information deleted.

Rights/freedoms potentially affected: Non-discrimination
Description of the impact: The algorithm was trained on data from European health centres, so it is possible that discrimination may occur when it is used in the three non-EU health centres.
Probability of adverse outcomes: [High] Ethnicity may cause some differences in medical imaging that may affect diagnostic accuracy.
Exposure: [Very high] All persons in the relevant group (ethnic group) to whom the algorithm applies.
Likelihood: [Very high]
Gravity: [Very high] Negative impact on equal access to healthcare and on the quality of cancer treatment received.
Effort: [High] It would be necessary to adapt or even retrain the algorithm with data that avoids discrimination.
Severity: [Very high]

Rights/freedoms potentially affected: Right to physical and mental health
Description of the impact: Incorrect functioning of the algorithm may result in ineffective and harmful medical treatment for the patient, resulting in a prejudice to the right to health.
Probability of adverse outcomes: [Medium] when used in European patients; [High] when used in non-European patients.
Exposure: [Very high] The impact potentially affects all individuals to whom the algorithm is applied.
Likelihood: [High] when used in European patients; [Very high] when used in non-European patients.
Gravity: [Very high] Incorrect functioning of the algorithm may result in ineffective and harmful health treatment for the patient.
Effort: [Medium] for pathologies where subsequent follow-up can correct the system error; [High] for pathologies where subsequent follow-up cannot correct the system error.
Severity: [High] when subsequent cancer screening can correct the system error; [Very high] when subsequent cancer control cannot correct the system error.
Tab. 7 Overall risk impact (matrix combining Likelihood, on the rows, with Severity, on the columns; levels: Low, Medium, High, Very high)
Tab. 2A Risk management (I)
Tab. 3A Risk management (II)
4. Comments

The main difficulties encountered in the use of the FRIA methodology in this use case are described below.

This methodology requires, at a minimum, identifying the fundamental rights and freedoms that will be impacted by the artificial intelligence system. This therefore requires expert knowledge of the essential content of each of the fundamental rights and freedoms under scrutiny. It can be assumed that Data Protection Officers (DPOs) have such knowledge in relation to the fundamental right to the protection of personal data and the fundamental right to personal and family privacy. However, this expert knowledge is not necessarily required of a DPO in relation to the other fundamental rights and freedoms. Consequently, the first difficulty is that a thorough knowledge of each of the fundamental rights and freedoms is required in order to identify which ones will be affected. Once the fundamental rights and freedoms affected have been identified, the DPO can seek advice from experts in these rights and freedoms.

The above difficulty becomes more complex when it is envisaged to use the artificial intelligence system outside the European Union. While it is possible to define a common framework with regard to the content of fundamental rights and freedoms within the European Union, taking into account the EU Charter of Fundamental Rights and the rulings of the CJEU, without going into the details of the differences established by national courts, it is hardly possible to speak of a common framework when examining the content of fundamental rights and freedoms worldwide. Applying this methodology in the EU and outside the EU with the same level of detail would require a comparative law analysis that the vast majority of organisations could hardly undertake, given the human, time and financial resources it would require. Hence, it would be useful to have guidance on the minimum content of each fundamental right and freedom at the global level, or by region or legal tradition, which would allow the use of the present methodology without requiring resources disproportionate to its purpose, i.e. to have an ex ante analysis that facilitates the design of the artificial intelligence system.

The final difficulty faced was identifying the people who should be involved in carrying out the impact assessment, both in terms of their expertise and their role in the development and implementation of this type of artificial intelligence (e.g. identifying which people with expertise in developing AI systems for healthcare purposes should be involved without risk of breaching confidentiality).
Use case 4: ATENEA: AI at the service of the elderly

1. The context

Public administrations in general are immersed in a process of digital transformation, with the idea of reforming public services by taking advantage of the benefits provided by the exponential evolution of technologies. However, these digital transformation policies must be formulated and implemented with a positive impact in terms of social inclusion, combining the promotion of digitalisation with social policies to minimise, as far as possible, the digital divide that inevitably arises in such disruptive processes of change. It is therefore crucial to put technology at the service of people, with the aim of improving relations with citizens and social care, tackling the inequalities caused by increasing digitalisation, guaranteeing equal opportunities and, in general, improving the living conditions of citizens.

It is in this context and under these conditions and social commitments that the ATENEA project has been promoted. The project (now in its pilot phase) aims to contribute to the digital transformation of the territories, putting the most vulnerable citizens at the centre, with the specific objectives of reducing the existing digital divide and unwanted loneliness, increasing the safety of people, especially at home, and promoting inclusion, well-being, health and, ultimately, the quality of life of people. The project specifically targets people over 65 who live alone and who suffer most from the vulnerabilities caused by the digital divide.

2. The project

ATENEA is a project based on the development of a generative AI system (neural networks), voice assistant, voice biometrics and robotic process automation, combined with mature technologies such as data analytics, cloud computing and smartphones. Using biometric voice recognition, it can respond to the needs of elderly users in different use cases: calls and video calls to a family member, emergency calls (112), calls to the municipality and/or automatic booking of an appointment with the social services, booking of an appointment with the Primary Care Centre, diary reminders, transport route information and, in the future, online shopping, banking and supply management.

The ATENEA solution does not require any digital skills or physical interaction from the user; identification is biometric in order to guarantee exclusive individual use and security. ATENEA is an artificial intelligence in the form of a tablet, without buttons or touch screens; it works only with an interaction as simple as voice. It provides an elderly person, probably in a situation of dependency, with agile responses to their basic needs. This artificial intelligence makes it possible to carry out everyday tasks (e.g. checking your bank statement, making a medical appointment, having a video conference with a family member, calling emergency or social services) through a conversation.

ATENEA has been designed with the co-creation of elderly people, professional carers and family carers from the user's environment, building a bond of trust and support in case of need.

The initiative is led by a strategic alliance between technological and social entities that are responsible for the design of the solution, contact with users and the deployment of socio-digital integrators in the field, cloud services, provision of devices (tablets), speech recognition technology, robotic automation of processes, communication services, security, evaluation of results and impact, and guaranteeing users' rights.

The project is a public-private partnership. The current phase of the project is funded by the Department of Social Rights of the Government of Catalonia as part of the EU-funded Next Generation EU Recovery, Transformation and Resilience Plan. This experience is supported by various municipalities and public administrations, which are using their territory as a pilot for testing the solution and are responsible for identifying potential users.
3. The FRIA
… or processing massive amounts of data. In short, ATENEA makes it possible to carry out everyday tasks with a conversation with the AI.

What types of data are processed (personal, non-personal, special categories)?
· Device configuration
· Language of dialogue (Catalan or Castilian)
· Access information for family doctor appointments
· Municipal social services/OAC appointment telephone number
· Emergency and tele-assistance telephone numbers
· Telephone number and family relationship
· Telephone number of assigned socio-technological operators
· Information about reminders and personalised alerts

Identification of potential rights holders: who are the individuals or groups likely to be affected by the AI system, including vulnerable individuals or groups?
Users over 65 with a sufficient cognitive level to interact with the device. As all the people involved are affected by the digital divide and loneliness at home, they should be considered vulnerable, due to their socio-demographic conditions and the relationships between these conditions and the purpose of the AI system.

Identification of duty bearers: who is involved in the design, development and deployment of the AI system? What is their role?
The initiative is led by a strategic alliance between technological and social entities responsible for the design of the solution, contact with users and the deployment of socio-digital integrators in the field, cloud services, provision of devices (tablets), speech recognition technology, robotic automation of processes, communication services, security, evaluation of results and impact, and guaranteeing users' rights. It is a public-private partnership. The current phase of the project is funded by the Department of Social Rights of the Government of Catalonia as part of the EU-funded Next Generation EU Recovery, Transformation and Resilience Plan. This experience has the support of various municipalities and public administrations, which have made their territory a pilot for testing the solution and are responsible for identifying potential users.

What fundamental rights are potentially affected by the use of the AI system?
☒ The right to data protection
☒ Freedom from discrimination

Section B: Fundamental rights context

What international/regional legal instruments for the protection of human/fundamental rights have been implemented at the operational level?
The project started before the publication of the AI Act, which will also apply to it. At that time, the only applicable regulations were the GDPR and the Spanish Organic Law 3/2018 of 5 December on the protection of personal data and guarantee of digital rights (LOPD-GDD) and complementary regulations.
What are the most relevant fundamental rights courts or bodies in the context of use?
· Constitutional Court
· Supreme Court
· National High Court
· Ordinary courts and tribunals
· Ombudsman and similar institutions in the regions
· Autonomous regional agencies for the defence of rights
· General State Prosecutor's Office
· Data protection supervisory authorities in the country/region

What are the most relevant human/fundamental rights decisions and provisions?
Constitutional Court:
· Judgment 135/2024: this decision dealt with the violation of the right to effective judicial protection in a case of the application of legal provisions that had been declared unconstitutional.
· Organic Law 1/2004, of 28 December, which deals with the prevention, protection and punishment of violence against women.
Section C: Controls in place

What policies and procedures are in place to assess the potential impact on fundamental rights, including stakeholder participation?
ATENEA is a project based on the voluntary collaboration of the users to improve their wellbeing and quality of life, and users have been involved in the development of the project's features since its inception. In order to participate in the project, users are asked to give an initial informed consent, exercising their self-determination on the basis of prior verbal and written information. This initial informed consent is then validated by the users, who are again informed of the risks and benefits associated with using the system.

Has an impact assessment been conducted, developed and implemented in relation to specific issues (e.g. data protection) or certain features of the system (e.g. use of biometrics)?
The project has carried out a personal data protection impact assessment (DPIA) and is compliant with current regulations on security and personal data protection. This is part of the cooperation agreements with the pilot areas.

Who are the main groups or communities potentially affected by the AI system, including its development?
Persons aged 65 and over living alone.

Section D: Stakeholder engagement and due diligence

Which stakeholders should be involved in addition to the individuals or groups potentially affected by the AI system (e.g. civil society and international organisations, experts, industry associations, journalists)?
Public administrations, private companies and third sector entities.
Are there other duty bearers that should be involved in addition to the AI provider and deployers (e.g. national authorities, government agencies)?
Data protection authority, local and regional public administrations, scientific research ethics committee of a public university, AI supervisory authority.

Have business partners, including service providers (e.g. subcontractors for AI systems and datasets), been involved in the assessment process?
The evaluation process has involved the project partners, which are a group of companies not constituted as an autonomous legal entity. From the outset, all the parties involved in the project (a public-private partnership) have been very aware of the need to ensure compliance with existing regulations and to protect the fundamental rights of citizens potentially affected by the implementation of this project. A specific accredited ethics committee has been identified for the project (the bioethics and law committee of a public university in Catalonia), which is responsible for the ethical evaluation of the project. In addition, the agreements signed with the participating public administrations provide for the obligation to carry out impact assessments, continuous monitoring, training, transparency and accountability of this project.

Has the AI provider carried out an assessment of its supply chain to determine whether the activities of suppliers/contractors involved in product/service development may affect fundamental rights?
No (it is not a legal requirement)

Has the provider promoted fundamental rights standards or audits to ensure respect for fundamental rights among suppliers?
No (it is not a legal requirement)

Have the AI provider and deployer publicly communicated the potential impact of the AI system on fundamental rights?
No (it is not a legal requirement)
Have the AI provider and AI deployers provided training on fundamental rights standards to management and procurement staff dealing with the AI system?
Only in relation to data protection.
Risk matrices

Tab. 1 Probability
Low: The risk of harm is improbable or highly improbable.

Tab. 2 Exposure
Low: Few or very few of the identified population of rights holders are potentially affected.

Tab. 3 Likelihood (Exposure on the rows, Probability on the columns)
Probability: Low | Medium | High | Very high
Exposure Low: L | L/M | L/H | L/VH

Tab. 4 Gravity of the prejudice
Low: Affected individuals and groups may encounter only minor prejudices in the exercise of their rights and freedoms.
Very high: Affected individuals and groups may encounter serious or even irreversible prejudice.

Tab. 5 Effort to overcome the prejudice and to reverse adverse effects
Low: The harm suffered can be overcome without problems (e.g. time spent on changing information, inconvenience, irritation, etc.).
Medium: The harm suffered can be overcome despite some difficulties (e.g. additional costs, fear, misunderstanding, stress, minor physical ailments, etc.).
High: The harm suffered can be overcome although with serious difficulties (e.g. financial loss, material damage, deterioration of health, etc.).
Very high: The harm suffered may not be overcome (e.g. long-term psychological or physical ailments, death, etc.).

Tab. 6 Severity (Effort on the rows, Gravity on the columns)
Gravity: Low | Medium | High | Very high
Effort Low: L | L/M | L/H | L/VH
Effort Medium: M/L | M | M/H | M/VH
Tab. 1A Data collection and risk analysis
Rights/freedoms potentially affected: Data protection/privacy
Description of the impact: … conversation is processed and stored to train the algorithm used. All the user's conversations related to a specific service accessed via ATENEA, e.g. emergency calls or other calls, are kept private. Voice biometrics is only used to identify the person and is an option that is activated based on the user's own decision. Identification can be based on voice biometrics or on the traditional combination of username and password. Any processing operation that does not comply with the applicable regulations on personal data protection may affect this right.
Probability of adverse outcomes: … in Spain (ENS, UNE standards, etc.), following the report prepared by the Cybersecurity Agency of Catalonia.
Gravity: … special categories of data related to the vulnerable situation of users, which may be invasive and affect the privacy of individuals.
Rights/freedoms potentially affected: Non-discrimination
Description of the impact: The AI mechanism uses speech recognition technology. Discrimination may occur if the voice assistant does not understand the speaker due to communication issues (e.g. speech impediment or impairment). Discrimination may also occur when users do not speak the language of the system correctly (CAT/ES).
Probability of adverse outcomes: [High] Possible voice and speech problems of users can lead to their exclusion.
Exposure: [Medium] There will be a limited number of people with speech or language problems.
Likelihood: [Medium]
Gravity: [Very high] Negative impact on equal access to the support provided by the system and on quality of service.
Effort: [High] It would be necessary to adapt the voice assistant to integrate cases of speech or language problems. Users must have a certain cognitive level and it is also possible to exclude people with speech or language problems from the project.
Severity: [Very high]
Rights/freedoms potentially affected: Right to health
Description of the impact: Inadequate functioning of the system may result in a failure to adequately guarantee this right.
Probability of adverse outcomes: [Low] The project is subject to continuous evaluation and specific monitoring to detect possible malfunctions. Only in some cases can malfunctioning significantly affect the right to health (e.g. emergency call).
Exposure: [Very high] The impact potentially affects all individuals to whom the algorithm is applied.
Likelihood: [Low]
Gravity: [High] Not being able to access health services when needed can cause serious harm to users, who are potentially more exposed to risky situations.
Effort: [Medium] This system is not the only solution that people can use in case of need, as there are other channels of access to health systems (as well as to other services), taking into account that potential users must have a certain cognitive level.
Severity: [Medium]
Rights/freedoms potentially affected: Right to social assistance
Description of the impact: Inadequate functioning of the system may result in a failure to adequately guarantee this right.
Probability of adverse outcomes: [Low] The project is subject to continuous evaluations and specific monitoring to detect possible malfunctions. Only in certain cases of use can improper operation affect this right.
Exposure: [Very high] The impact potentially affects all individuals to whom the algorithm is applied.
Likelihood: [Low]
Gravity: [High] Not being able to access social assistance services when needed can cause serious harm to users, who are potentially more exposed to risk situations.
Effort: [Low] This system is not the only solution that people can use in case of need, as there are other channels of access to social assistance services, taking into account that potential users must have a certain cognitive level. In addition, these are not emergency services and users have a contact person from social services who monitors their situation.
Severity: [Medium]
Rights/freedoms potentially affected: Right of access to services of general economic interest
Description of the impact: This right could be undermined if not all people can have access to this system due to a lack of resources of public administrations.
Probability of adverse outcomes: [Medium] The project is in a pilot phase with public funding from the Next Generation funds.
Exposure: [Medium] The impact is potentially on all people who are not yet beneficiaries of this project due to lack of resources or information.
Likelihood: [Medium]
Gravity: [Medium] Lack of access to this system, whether due to lack of information or lack of resources, can undermine equal opportunities and social cohesion in the area.
Effort: [Medium] It would be a matter of providing sufficient resources so that all those who are likely to have access to this system can do so.
Severity: [Medium]
Tab. 7 Overall risk impact (matrix combining Likelihood, on the rows, with Severity, on the columns; levels: Low, Medium, High, Very high)
Tab. 2A Risk management (I)

… · Voice biometric identification as an option based on prior consent. This identification will not be used for any other purpose, including identification shared with third parties.
… · Ensure the quality of training, validation and test data through appropriate data governance and management practices.

Health care: Likelihood [Low], Severity [Medium], Overall impact [Medium]
Impact prevention/mitigation measures: Ensure adequate technical assistance and improve the technical quality of the system.

Social assistance: Likelihood [Low], Severity [Medium], Overall impact [Medium]
Impact prevention/mitigation measures: Ensure adequate technical assistance and improve the technical quality of the system.

Access to services of general economic interest: Likelihood [Medium], Severity [Medium], Overall impact [Medium]
Impact prevention/mitigation measures: Ensure sufficient resources and means so that all potential users have access to this service.
Tab. 3A Risk management (II)
Data Protection/Privacy: Likelihood [Low], Severity [Medium], Overall impact [Medium]
The probability of an impact on this right (probability of adverse outcomes) has been reduced to Low because the measures adopted have limited the possibility of negative consequences. The severity of the impact has been reduced to Medium, as the measures adopted have limited the scope of the negative consequences that could occur. The severity level remains Medium, but is lower than the initial level.

Non-discrimination: Likelihood [Low], Severity [Medium], Overall impact [Medium]
The likelihood has been reduced to Low because the probability of adverse effects has been reduced to Medium, due to the decrease in the number of possible malfunctions, and the exposure has been reduced to Low, due to the decrease in the number of people affected (some limitations of use related to language skills have been identified). The severity level has been reduced from Very High to Medium due to the implementation of complementary/alternative measures to access services (reducing the gravity of the impact from Very High to High) and the reduced effort required to react due to the measures already in place (reducing the effort from High to Medium).

Health care: Likelihood [Low], Severity [Medium], Overall impact [Medium]
The severity level has decreased due to complementary/alternative measures to access services (reduction in gravity from Very High to High) and the reduced effort required to react due to the measures already in place (reduction in effort from Medium to Low). The severity remains Medium, but is lower than the initial level.

Social assistance: Likelihood [Low], Severity [Medium], Overall impact [Medium]
The severity has decreased due to complementary/alternative measures to access services (reduction in gravity of the impact from Very High to High) and the reduced effort required to react due to the measures already in place (reduction in effort from Medium to Low). The severity remains Medium, but is lower than the initial level.

Access to services of general economic interest: Likelihood [Medium], Severity [Medium], Overall impact [Medium]
The level of severity is maintained because guaranteeing this right depends on the resources that governments allocate to it.
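The reductions described in Tab. 3A can be traced through the same combination matrices. The short sketch below is only an illustration, under the assumed reading of Tab. 6 used earlier (Effort on the rows, Gravity on the columns), of how the residual severity of the non-discrimination risk follows from the values stated above; resolving the mixed cells to a single level remains the assessment team's judgement, and the helper function is ours, not part of the FRIA model.

# Illustrative recomputation of the residual 'Non-discrimination' severity for ATENEA
# (assumed reading of Tab. 6; ratings taken from Tab. 1A and Tab. 3A of this use case).
ABBREV = {"Low": "L", "Medium": "M", "High": "H", "Very high": "VH"}

def severity_cell(effort: str, gravity: str) -> str:
    # Tab. 6 cell: a single level on the diagonal, otherwise a range for expert judgement.
    return ABBREV[effort] if effort == gravity else f"{ABBREV[effort]}/{ABBREV[gravity]}"

print(severity_cell("High", "Very high"))  # initial assessment: H/VH, resolved to Very high
print(severity_cell("Medium", "High"))     # after mitigation: M/H, resolved to Medium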
4. Comments
The emergence of new technologies is not only causing a revolution in the way we work and deliver public services, but also creating the need to rethink, from the design stage, the impact of these applications on citizens' fundamental rights. In fact, this paradigm shift was already evident with the entry into
force of the General Data Protection Regulation (GDPR) and is now gaining
momentum.
A methodology for carrying out a fundamental rights impact assessment
(FRIA) is an essential tool for identifying the risks and the organisational and
technical measures to be applied, with a transversal approach that must be
integrated into the multidisciplinary work teams responsible for implementing
AI in public administrations. To this end, as already observed with the
application of the GDPR, it is absolutely necessary that data protection
officers are involved, from the outset, in the digital transformation strategies
and projects to be implemented in each organization. In this way, these risks
can be analysed from the outset and by default, and the necessary
organisational and technical measures taken.
On the other hand, it is also advisable to review job descriptions to include
these types of skills and responsibilities, to invest in training and to develop
appropriate professional profiles. Furthermore, at this time of transition to
‘smart administration’ (based on the use of AI), some kind of practical guide
should be developed as an internal instruction, so that everyone in the
organisations is aware of these risks, including guidelines on best practice
in the use of AI in administrative activity.