Guidelines for AI in parliaments
December 2024
Table of contents
Forewords
Introduction
About the Guidelines
Key concepts
Strategy
Appendices
Forewords
I am pleased to present these Guidelines for AI in parliaments, which arrive at a
crucial moment in our democratic journey. We stand at the threshold of a
transformation that is reshaping how parliaments operate and serve their citizens.
Artificial Intelligence presents both extraordinary opportunities and significant
challenges for our institutions of democracy.
These Guidelines emerge from our recognition that parliaments must take a leading
role in governing the use of AI, not only through legislation and oversight but also
through their own adoption and implementation of these technologies. The
Guidelines represent a collaborative effort, drawing on the expertise and experience
of parliamentary staff and technology specialists from across our global community.
These Guidelines are valuable for different audiences inside and outside of
parliament, including for members, especially those serving on modernization
committees, technology committees or committees of the future, as well as senior
managers and technical experts. They offer useful insights for parliamentarians as
they grapple with AI oversight and regulation in society, showing how AI can be
deployed responsibly within their own institutions. They provide a framework for
making informed decisions about AI adoption while ensuring robust democratic
oversight.
Working together, we can ensure that AI serves to strengthen rather than diminish
our democratic institutions, upholding the fundamental values that our parliaments
represent.
Martin Chungong
Secretary General
Inter-Parliamentary Union
Since the Brazilian Chamber of Deputies’ first experience with artificial intelligence
(AI) in 2013, we have been on a journey of continuous learning about this technology
and the extraordinary capabilities that it can offer to parliaments. Despite this, we are
still surprised by the exponential speed of advances in AI, and by its ubiquitous
nature in the daily lives of public organizations around the world.
Among the lessons learned over the years of using AI in the legislative branch, it is
worth highlighting the need for coordination encompassing multiple stakeholders, so
that the technology’s use can be well planned and managed. This observation
inspired us to hold the Parliamentary Data Science Hub meeting in Brasília (in April
2024) to discuss good practices for the use and development of artificial intelligence
in parliaments.
I am proud to note that the excellent work of the experts who participated in that
meeting has resulted in a set of guidelines with a user-friendly approach and the
flexibility needed to meet the different realities faced by parliaments. The guidelines
combine strategic actions and policies with examples of everyday practices, in a way
that recognizes the plurality of decisions involved in the use, development and
outsourcing of AI systems.
Coordinator of the Parliamentary Data Science Hub in the IPU’s Centre for
Innovation in Parliament
Introduction
Artificial intelligence (AI) presents significant opportunities for parliaments to
enhance their operations and to become more efficient and effective, enabling them
to better serve citizens. However, adopting AI introduces new challenges and
presents risks that must be carefully managed.
These Guidelines for AI in parliaments (the “Guidelines”) have been developed for
parliaments by parliamentary staff and the Inter-Parliamentary Union’s (IPU) Centre
for Innovation in Parliament (CIP). They provide comprehensive guidance to support
parliaments on their journey towards understanding and implementing AI responsibly
and effectively. By adopting a well-thought-through, strategic approach to AI,
parliaments can harness the technology’s full potential to drive innovation and
efficiency in the legislative process.
The Guidelines cover key areas, including the potential role of AI in parliaments,
related risks and challenges, suggested governance structures and AI strategy,
ethical principles and risk management, training and capacity-building, and how to
manage a portfolio of AI projects across parliament. They are complemented by a
set of use cases, shared by parliaments, that describe how AI can support specific
parliamentary actions.
Audience
The Guidelines have been written to support a range of parliamentary roles:
• For members of parliament (MPs), the Guidelines offer insights into the
potential impact of AI on legislative processes, constituent engagement and
parliamentary oversight. They provide a clear overview of AI capabilities and
limitations, helping MPs to make informed decisions about AI adoption and
regulation in parliament.
The Guidelines are designed to support parliaments of all sizes and levels of digital
maturity – from large, well-resourced legislatures with advanced digital
infrastructures, to smaller parliaments just beginning their digital transformation
journey.
The Guidelines can be tailored to allow parliaments to focus on areas most relevant
to their current needs and capabilities, and individual parliaments can adapt them to
suit their unique circumstances, culture and resources. While digitally mature
parliaments may be ready to implement more advanced AI applications, those at
earlier stages can use the Guidelines to build foundational governance structures
and develop AI literacy.
List of Guidelines
The Guidelines are grouped into three areas. Each guideline is addressed to one or more audiences: senior parliamentary managers, MPs, and staff involved in AI implementation.
Key concepts
● The role of AI in parliaments
● Risks and challenges for parliaments
● Alignment with national and international AI frameworks and standards
● Inter-parliamentary cooperation for AI
Strategy
● Strategic actions towards AI governance
● Generic risks and biases
● Ethical principles
● Introducing AI applications
● Data and AI literacy
Planning and implementation
● Project portfolio management
● Data governance
● Security management
● Risk management
● Systems development
Acknowledgements
The CIP, the Parliamentary Data Science Hub, the IT Governance Hub and the
editors are grateful for the contributions of the many parliamentary staff who helped
shape, write and review these Guidelines, and for the support provided by the
respective parliaments.
Editors
● Patricia Gomes Rêgo de Almeida, Chamber of Deputies of Brazil
● Ludovic Delépine, European Parliament
● Andy Williamson, Centre for Innovation in Parliament, Inter-Parliamentary
Union
Contributors
• Patricia Rêgo de Almeida, Chamber of Deputies of Brazil
• Francisco Edmundo Andrade, Chamber of Deputies of Brazil
• Javier de Andrés Blasco, Chamber of Deputies of Spain
• Álvaro Carmo, National Assembly of Angola
• Virginia Carmona, Chamber of Deputies of Chile
• Giovanni Ciccone, Chamber of Deputies of Italy
• Ludovic Delépine, European Parliament
• Claudia di Andrea, Chamber of Deputies of Italy
• Michael Evraire, House of Commons of Canada
• Marcio Fonseca, Chamber of Deputies of Brazil
• José Andrés Jiménez Martín, Chamber of Deputies of Spain
• Vinicius de Morais, Chamber of Deputies of Brazil
• Rune Mortensen, Parliament of Norway
• Neemias Muachendo, National Assembly of Angola
• Jurgens Pieterse, Parliament of South Africa
• Manuel Pereira González, Senate of Spain
• Peter Reichstädter, Parliament of Austria
• Frode Rein, Parliament of Norway
• Esteban Sanchez, Chamber of Deputies of Chile
• Luciana Silo, Chamber of Deputies of Italy
• Paul Vaillancourt, House of Commons of Canada
Contact
For more information about this work, please contact [email protected]. We are
always keen to learn how the Guidelines have been used. We welcome all feedback
and suggestions.
The Guidelines for AI in parliaments are published under the Creative Commons Attribution-
NonCommercial-ShareAlike 4.0 International licence. Please see the IPU’s terms of use for more
details.
Suggested attribution: Inter-Parliamentary Union (IPU), Guidelines for AI in parliaments (Geneva: IPU,
2024): www.ipu.org/AIguidelines.
The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. They may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].
Key concepts
The role of AI in parliaments
Audience
This high-level guideline is intended for parliamentary leadership and senior
parliamentary managers, as well as for parliamentary staff and MPs who are
interested in gaining a broad understanding of where AI can impact upon the work of
parliaments.
The first step in the journey is to understand where opportunities lie for a given
parliament. It is important to recognize that every parliament is different and will want
to seize the opportunities on offer differently, based on its own unique culture,
working practices, resource availability and timing constraints. A framework to help
identify opportunities and support the adoption of AI can be found in the following
guidelines: Project portfolio management and Strategic actions towards AI
governance.
and predictive models to generate detailed and easily understandable reports for
MPs.
Improving transparency
AI systems can play a pivotal role in promoting transparency and accountability
within parliaments. Automated transcription and translation tools can generate
accurate and timely transcripts of parliamentary debates and discussions, making
legislative proceedings more accessible to the public, while AI-powered sentiment
analysis tools can gauge public sentiment towards legislative proposals, enabling
MPs to better understand and address their constituents’ concerns.
Automating transcription of parliamentary debates
AI systems can be used to automatically transcribe parliamentary debates in real
time. These accurate, rapidly produced transcripts can then be made available to the
public and to specific users or departments within parliament, allowing citizens and
key officials to access parliamentary proceedings without having to consult complete
audiovisual recordings. Real-time speech-to-text transcription and translation – a
possibility offered by some language models – could also facilitate effective
communication in the context of large multinational events.
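As an illustration only, the following minimal Python sketch shows how a recording of proceedings might be transcribed with the open-source Whisper speech-recognition model. The file name and model size are assumptions, and any real deployment would need accuracy review and secure hosting.

# Minimal sketch: transcribing a debate recording with the open-source
# Whisper model (assumes the openai-whisper package is installed and that
# "debate_session.mp3" is a recording of proceedings).
import whisper

model = whisper.load_model("base")  # larger models improve accuracy at a cost in speed

result = model.transcribe("debate_session.mp3")

# Each segment carries timestamps, useful for linking the transcript
# back to the audiovisual record.
for segment in result["segments"]:
    print(f"[{segment['start']:7.1f}s] {segment['text']}")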
Visualizing legislative data
AI systems can be used to create interactive visualizations of legislative data, such
as an MP’s legislative activity or the progress of a bill through parliament. These
visualizations make it easier for the public to understand and evaluate the work of
parliament.
Accessing legislative information
AI-powered search tools can allow constituents to easily find information about bills,
votes, committees and other aspects of parliamentary work. This promotes
transparency by making legislative information more accessible and understandable
to all.
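By way of illustration, the sketch below shows one common approach to such search: embedding bill summaries and a plain-language query with the sentence-transformers library and ranking by similarity. The model name and sample texts are assumptions.

# Illustrative sketch: semantic search over bill summaries using the
# sentence-transformers library. Model choice and texts are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

bills = [
    "A bill to regulate data protection in public institutions",
    "A bill amending the national budget for road infrastructure",
    "A bill establishing oversight of artificial intelligence systems",
]
bill_embeddings = model.encode(bills, convert_to_tensor=True)

query = "privacy rules for government bodies"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank bills by cosine similarity to the constituent's query.
scores = util.cos_sim(query_embedding, bill_embeddings)[0]
for bill, score in sorted(zip(bills, scores.tolist()), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {bill}")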
Analysing economic data
AI systems can be used to analyse economic data related to parliamentary spending
and the financial interests of MPs, helping to identify potential conflicts of interest,
and promoting transparency and accountability by ensuring that MPs are subject to
public scrutiny.
Producing plain-language summaries
AI systems can be used to summarize bills, reports and transcripts in plain language,
making them easier to understand for ordinary citizens. Making such summaries
available can enhance public participation in the legislative process and foster
communication between MPs and their constituents.
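As a hedged sketch of how such summaries might be generated, the example below uses the Hugging Face transformers summarization pipeline. The model choice and input text are assumptions, and outputs would need human review before publication.

# Sketch: plain-language summarization with a Hugging Face pipeline.
# The model is an assumption; outputs require editorial review.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

bill_text = (
    "An Act to provide for the assessment and registration of artificial "
    "intelligence systems used by public bodies, to assign oversight "
    "functions to an independent authority, and to provide for connected "
    "matters."
)

summary = summarizer(bill_text, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])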
Risks and challenges for parliaments
As AI adoption gains traction and the use of this technology becomes more
commonplace, parliaments must understand the implications and closely examine
the risks and challenges associated with the implementation of AI.
Technologies such as generative AI, with the ability to create content based on vast amounts of data, promise productivity gains and potentially transformational change in parliamentary operations. However, they also introduce new complexities and risks that must be carefully managed.
For a discussion of the more generic risks and biases that AI introduces, refer to the
guideline Generic risks and biases. For a discussion of the potential uses of AI in
parliamentary settings, refer to the guideline The role of AI in parliaments.
Strategic considerations
At the strategic level, parliaments face several key challenges in adopting AI.
Foremost among these is the development of comprehensive AI governance
frameworks that ensure strong ethical principles, transparency and accountability in
AI systems.
Parliaments must also address potential biases in AI algorithms, ensuring that these systems do not inadvertently amplify existing societal inequalities or underrepresent particular groups.
Public trust and perception present another strategic challenge. Parliaments must
effectively communicate their use of AI to constituents, managing expectations and
addressing concerns about the role of AI in democratic processes. This requires a
delicate balance between showcasing the benefits of AI adoption and reassuring the
public that human judgement remains central to parliamentary functions.
Strategic considerations checklist:
● Develop a comprehensive AI governance framework and policies that reflect
parliament’s ethical principles.
● Establish protocols for ensuring AI transparency and accountability.
● Expand the remit of existing data committees or similar bodies to encompass
AI.
● Develop a communication strategy to inform the public about AI use in
parliament.
● Regularly assess and mitigate potential biases in AI systems.
Operational challenges
On the operational front, implementing and integrating AI systems into existing
parliamentary procedures and processes poses significant challenges, especially
since these are often complex and steeped in tradition. Moreover, parliaments must
ensure that AI adoption does not disrupt the essential human elements of political
discourse and decision-making.
Data management and security are also critical concerns. Parliaments handle
sensitive information and are prime targets for cyberattacks. AI systems could
potentially create new vulnerabilities if they are not implemented with robust security
measures.
There is also the potential for job displacement within parliamentary staff,
necessitating careful management of role redefinition and retraining.
Operational challenges checklist:
● Conduct a thorough assessment of existing parliamentary procedures for AI
integration.
● Implement robust cybersecurity measures for AI systems.
● Develop and implement AI literacy and data literacy programmes for staff and
MPs.
● Create a data quality assurance process for AI training data sets.
● Establish a change management plan to address potential job displacement
and role changes.
Mitigation strategies
Given these challenges, a cautious and measured approach to AI adoption is
advisable for most parliaments. The IPU’s Centre for Innovation in Parliament
recommends a step-by-step, risk-based approach.
Creating safe “lab environments” for AI experimentation is a prudent first step. This allows parliaments to explore potential use cases, such as producing summaries of parliamentary documents, in a controlled setting.
AI policies and practices should be regularly reviewed and updated to account for
the rapid pace of technological change. Parliaments must remain agile, continuously
assessing the impact of AI on their operations and adjusting their approaches
accordingly.
Mitigation strategies checklist:
● Create a safe environment for AI experimentation.
● Implement a step-by-step, risk-based approach to AI adoption and monitoring.
● Adopt and, where necessary, adapt these Guidelines to support the safe
introduction of AI in parliament according to its organizational culture.
● Develop partnerships with other parliaments and external experts for
knowledge-sharing.
Conclusion
The adoption of AI in parliaments offers significant potential benefits but also
presents unique challenges. By carefully navigating the strategic, operational and
legislative risks, parliaments can harness the power of AI to enhance their
effectiveness while safeguarding the essential human elements of democratic
representation.
The journey of AI adoption in parliaments is just beginning, and the path forward will
require ongoing dialogue, rigorous oversight and a commitment to preserving the
fundamental values of democratic governance.
Alignment with national and international AI frameworks and standards
It should be noted that, while these Guidelines serve as an important starting point,
parliaments may need to go above and beyond these in order to address their
unique needs and uphold democratic principles. They should be used in conjunction
with national and international standards to create a comprehensive approach to AI
governance in parliament.
For example, the Chamber of Deputies of Italy has adopted a code of conduct for the use of generative AI, which emphasizes that the use of this technology must align with various multilevel strategies and regulations:
technology must align with various multilevel strategies and regulations:
[The Code of Conduct] for the Use of Generative Artificial Intelligence Tools has therefore
been adopted, taking into account the Principles for the Use of AI in Support of
Parliamentary Work, as laid down by the Supervisory Committee on Documentation
Activities of the Chamber of Deputies, and having regard to the recommendations set
forth in the 2024–2026 Three-Year Plan for ICT in the Public Administration, the
Hiroshima Process International Code of Conduct for Advanced AI Systems as agreed
by the G7, as well as the Guidelines for Secure AI System Development, promoted at
the international level by the National Cyber Security Centre and signed on 27 November
2023 by the National Cybersecurity Agency.
Parliaments should regularly check for updates to these frameworks and for new
parliament-specific AI guidance documents as they emerge. The unique role of
parliaments in democratic societies may necessitate the development of more
tailored international guidance in the future.
Inter-parliamentary cooperation for AI
Audience
This high-level guideline is intended for parliamentary leadership and senior
parliamentary managers, as well as for senior IT staff who are interested in the
adoption and application of AI in their parliament.
Future-proofing AI governance
Parliaments should be involved in collaborative foresight and scenario planning
exercises, helping to prepare for potential future developments in AI. This could
include joint research on emerging AI technologies and their implications for
parliamentary work, as well as the development of adaptive governance frameworks
that can evolve alongside AI technology.
The CIP supports this cooperation in several ways:
● Acting as a central hub for the sharing of knowledge and good practices, and supporting regional networks
● Providing a neutral platform for discussing challenges and developing
solutions
● Representing parliamentary interests in global AI governance discussions
● Fostering a community of practice among parliaments, and encouraging
ongoing dialogue and collaboration
● Offering expertise and resources to support parliaments at various stages of
AI adoption
By leveraging its network and resources, the CIP plays a pivotal role in ensuring that
parliaments worldwide are well-equipped to harness the benefits of AI while
mitigating its risks.
Conclusion
As AI continues to transform parliamentary work, collaboration becomes not just
beneficial, but essential. Through shared efforts in areas such as exchanging
knowledge, developing projects and governance frameworks, and addressing
common challenges, parliaments can navigate the complex landscape of AI more
effectively.
The path forward is clear: through collaboration and coordination, parliaments can
lead the way in ethical and effective AI governance, setting a standard for
responsible AI use that extends to other sectors.
Actions
● Join the CIP’s thematic and regional networks to get peer support.
● Document and share AI use cases and implementation case studies.
Strategy
Strategic actions towards AI governance
fair and inclusive practices, upholding the democratic principles that parliaments
embody. By encouraging responsible innovation, it positions parliaments at the
forefront of technological advancement in governance.
As parliaments embark on this journey, they must do so not just for their own benefit,
but also for the benefit of the citizens they serve. By governing AI responsibly, they
can pave the way to a future where technology and democracy work hand in hand,
enhancing governance and improving lives.
Key points:
● Responsible AI governance benefits both parliaments and citizens.
● Parliaments can lead the way in responsible AI use.
Ethical foundations
At the heart of AI governance lies a robust code of ethics. This code serves as a
declaration of values, guiding the use of AI and the management of the associated
risks. It should reflect parliament’s commitment to privacy, transparency,
accountability, fairness and societal well-being. Importantly, this code must align with
existing laws, regulations and parliamentary procedures. It must be adaptable and
responsive to change. It should also include real-world examples, in order to aid
understanding and adoption and to demonstrate how these ethical principles apply in
practice.
Actions:
● Create an AI code of ethics reflecting parliamentary values.
● Ensure alignment with national laws, standards, international good practice
and parliamentary procedures.
Once the decision to adopt AI has been made, building capacity within parliament
becomes crucial for success. This involves developing a robust plan for staff training
and skills development.
Thinking about how to approach the introduction of AI is also important at this stage.
It is often beneficial to start with small, manageable pilot projects, which help to build
confidence and demonstrate the value of AI in a controlled environment.
Encouraging experimentation and cross-functional collaboration can foster
innovation and drive successful AI initiatives.
Actions:
● Develop a comprehensive AI strategy aligned with parliamentary goals.
● Build internal capacity through training and skills development.
● Look for small pilot projects to build confidence and demonstrate value.
Stakeholder engagement
Even the best governance structure and code of ethics will prove ineffective without
proper stakeholder engagement. Identifying and involving stakeholders from various
levels and departments is crucial. Engaging these stakeholders early and often helps
to manage risks, identify potential pitfalls and significantly increase the chances of
successful AI implementation.
Action:
● Engage stakeholders from all relevant departments and levels.
This sub-guideline provides guidance and recommendations for the design and
development of AI governance policies and structures in parliaments.
Background
When a parliament decides to adopt AI-based systems and services, it embarks on a
journey that requires careful planning and a multidisciplinary approach. This journey
begins with the recognition that AI governance is not just an IT issue: it is a matter
that touches every aspect of parliamentary operations.
The first step is to assemble a diverse team. Executive boards, legal departments,
and business and IT units all have crucial roles to play. This team will work together
to create a governance structure that integrates business needs, legal and regulatory
considerations, and technological insights. The exact nature of this structure will vary
from parliament to parliament, reflecting each institution’s unique culture and existing
working methods.
Within this framework, it is crucial to define key bodies and their roles.
While a central group should oversee these roles, day-to-day responsibilities can be
distributed across various areas of parliament. This might involve assigning tasks to
existing functional areas or creating new units specifically to manage AI-related
work. This approach ensures comprehensive governance while allowing for flexibility
in implementation.
By taking these steps, parliaments can create a robust governance structure that
enables them to harness the benefits of AI while effectively managing its risks and
ethical implications.
Actions:
● Assemble a multidisciplinary team from across parliament to lead AI
governance efforts.
● Choose and implement either an integrated or a dedicated AI governance
board structure.
● Define and assign key roles and responsibilities for AI governance, including
policy approval, ethical oversight and project management.
● Establish or adapt an ethics committee to address AI-specific ethical
challenges.
● Develop a clear AI life cycle management process, from project initiation to
system decommissioning.
The policy will outline roles and responsibilities for all stakeholders involved in the AI
life cycle, from data scientists to legal experts. It will also define AI-supported
business processes, prioritizing them while considering risk tolerance and regulatory
requirements. This includes specifying conditions for AI use, prohibited areas, and
approval processes for certain AI applications.
Regarding generative AI, the policy will provide clear usage guidelines and detail
necessary precautions. While recognizing potential risks, it will also encourage
innovative experimentation in a controlled manner, avoiding outright bans that might
lead to unauthorized use on personal devices.
Actions:
● Establish an AI policy working group to lead the development process.
● Outline roles, responsibilities and processes for AI governance and
implementation.
● Create guidelines for AI use, including prohibited areas and approval
processes.
● Define mechanisms for ensuring regulatory compliance.
● Develop a communication strategy to inform all staff and MPs about the AI
policy.
Background
The journey towards effective AI governance begins with an understanding of its
importance. AI governance is about more than simply managing new technology: it is
about creating a framework that maximizes the benefits of AI while minimizing its
risks.
Creating an AI strategy
Once the foundations are established in terms of strong governance and an AI code
of ethics, and once robust engagement with key stakeholders has identified where AI
can add value to the work of parliament, it is time to turn to developing an AI strategy
grounded in this work.
As a high-level document prepared for, and agreed by, senior parliamentary leaders,
an AI strategy should use business language and arguments for business decision
makers. Many senior managers will therefore already be familiar with its format and
structure, and parliament should adopt a structure and approach it already uses, if
appropriate. Alternatively, parliament can follow the sample structure outlined below
and illustrated in Figure 1 (below):
Figure 1: Examples of goals, actions and KPIs in a parliamentary AI strategy
Vision
Formulate a clear vision statement indicating what parliament’s needs are for AI. The
nature of this statement will depend on whether the AI strategy is a stand-alone
document specific to AI, or if it is integrated into a broader parliamentary strategy. In
the latter case, there should be a single, overarching vision.
Goals
Include measurable goals that parliament can achieve using AI systems. These
goals can focus on processes, practices and resources aimed at improving or driving
AI adoption or mitigating AI-related risks.
Managing change
Adopting rigorous change management practices within the iterative development
process helps parliaments to manage resistance and ensure the smooth adoption of
AI technologies. It is essential to develop a clear change management plan and to
transparently communicate the goals of AI adoption, as well as the technology’s
impact on the organization, its staff and members. By understanding and carefully
navigating the traditionally conservative culture of parliament and demonstrating
clear, tangible benefits from the adoption of AI, it is possible to foster innovation and
drive successful AI initiatives.
Promoting innovation
AI, as a new technology with immense potential, is very much about innovation. A
good AI governance regime will include ways to promote innovative practices, taking
a strategic and nuanced approach.
The first step is to build a strong case for AI by clearly articulating its benefits and
focusing on how it can address specific business needs and challenges.
Starting with small, manageable pilot projects that have clear objectives and
measurable outcomes is crucial. This approach builds knowledge and experience,
helps to develop familiarity and trust in AI-based systems, and can demonstrate
potential, serving as a catalyst for further innovation. Of course, because pilots are
also about experimenting and testing ideas, it is important to accept that some will
inevitably fail. In other cases, it may be determined that the pilot is not worth
pursuing. Building a reflective learning process into the innovation cycle will help
parliaments to realize value and learn lessons as they go.
Innovation can be supported through a combination of these approaches: building a strong case for AI, starting with small pilot projects, accepting that some experiments will fail, and embedding reflective learning into the innovation cycle.
Building strong engagement is crucial for creating buy-in within parliament, as well as for identifying risks, challenges and opportunities. Identifying which stakeholders need to be engaged, and then building that engagement early in the process of AI adoption, is vital, helping to build support, knowledge and understanding across parliament.
Since the organizational structures and sizes of parliaments vary greatly, certain roles may not exist in specific parliaments (especially in smaller or less well-resourced legislatures). However, as a general rule, parliament should consider engaging with the relevant bodies, units and teams across the institution.
Generic risks and biases
For a discussion of risks that relate more specifically to the unique work of
parliaments, refer to the guideline Risks and challenges for parliaments.
Categories of risk
The integration of AI introduces new types of risk that may not be familiar to
parliaments. These can include the following:
● Lack of AI literacy
● Bias and discrimination
● Privacy invasion
● Security vulnerabilities
● Lack of accountability
● Job displacement
● Ethical dilemmas
● Shadow AI
● Lack of data sovereignty
● Lack of trust
For further discussion of these categories, refer to the sub-guideline Generic risks
and biases: Categories of risk.
Biases are part of people’s lives. They usually start with habits or unconscious actions (cognitive biases) which, over time, materialize as technical biases (data biases and processing biases). This scenario creates or increases risks that can result in untrustworthy AI systems.
Biases in AI systems arise from human cognitive biases, the characteristics of the
data used or the algorithms themselves. Where AI systems are trained on real-world
data, there is the possibility that models can learn from, or even amplify, existing
biases.
In a statistical context, the error of a predictive system is the difference between the values the model predicts and the true values observed in the sample. When errors occur systematically in one direction, or for a particular subset of the data, this indicates bias in the data treatment.
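A minimal sketch of this idea, with illustrative placeholder numbers, is to compute the signed mean error overall and per subgroup; a subgroup mean far from zero signals systematic bias:

# Sketch: detecting systematic (directional) error in model predictions.
# The arrays are illustrative placeholders.
import numpy as np

y_true = np.array([10, 12, 9, 14, 11, 13])        # observed values
y_pred = np.array([12, 13, 11, 15, 10, 12])       # model predictions
group = np.array(["A", "A", "A", "B", "B", "B"])  # e.g. a demographic attribute

errors = y_pred - y_true
print("overall mean error:", errors.mean())

# A subgroup mean error far from zero indicates bias in the data treatment,
# even when the overall error appears balanced.
for g in np.unique(group):
    print(f"group {g}: mean error = {errors[group == g].mean():.2f}")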
Cognitive biases
Cognitive biases are systematic errors in judgements or decisions common to
human beings owing to cognitive limitations, motivational factors and adaptations
accumulated throughout life. Sometimes, actions that reveal cognitive biases are
unconscious.
For a list of cognitive biases, refer to the sub-guideline Generic risks and biases:
Cognitive bias types.
Data biases
Data biases are a type of error in which certain elements of a data set are more
heavily weighted or represented than others, painting an inaccurate picture of the
population. A biased data set does not accurately represent a model’s use case,
resulting in skewed outcomes, low accuracy levels and analytical errors.
For a list of data biases, refer to the sub-guideline Generic risks and biases: Data bias types.
Processing and validation biases
For a list of processing and validation biases, refer to the sub-guideline Generic risks and biases: Processing and validation bias types.
The following examples illustrate how such biases can arise in practice:
● Systems were built by teams that unconsciously did not involve other
organizational units owing to incorrect judgements regarding their
participation.
● Important stakeholders were not involved in the design of data-entry systems
because they had a different view than project managers.
● System interfaces favoured individual points of view or confirmed
preconceived ideas.
● Irrelevant or incomplete databases were used to train AI systems simply
because they were easy to obtain and avoided the need for negotiation
between managers from different departments.
● AI system projects that revealed decisions based on inappropriate variables
were launched anyway in order to justify the costs already incurred.
● AI system developers were so used to working with certain models that they
used them in situations where they were inappropriate.
Source: Adapted from NIST Special Publication 1270 and the Oxford Catalogue of
Bias
Below are some examples of how cognitive biases can influence and, in some
cases, even compound data or processing biases in parliamentary settings:
● Parliament feeds data sets with information from surveys and questionnaires
completed only by people sharing the same political party ideology. Here,
there is a high likelihood of existing affinity bias. Moreover, if this data set
contains data such as “opinion regarding a specific theme”, and it is used to
train an AI algorithm, there is a high possibility that such biases could be
reproduced in that AI system.
● Parliament uses only data sets from a very small number of committee
meetings to train an AI algorithm. In this case, there is a likelihood of
interpretation biases because some terms may have different meanings or
importance to different committees.
● Parliament has spent its entire innovation budget but the project team has
failed to find the best AI algorithm to solve the original problem. The team
implements an AI system anyway, launching it as a successful innovation, in
an attempt to justify the costs. This is a funding bias that results in an AI
system that is not reliable.
As the examples below show, with generative AI tools, the cognitive biases contained in a vast training data set can combine and be exposed directly to the user:
● A generative AI tool replicates bias against female job applicants when asked
to draft letters of recommendation. Letters for male applicants often use terms
like “expert” and “integrity” while female candidates are described using terms
such as “beauty” and “delight”.
● Male researchers using a generative AI tool to create avatars receive diverse,
empowering images portraying them as astronauts and inventors. However, a
female researcher receives sexualized avatars – including topless versions,
reminiscent of anime or video-game characters – that she did not request or
consent to.
● A generative AI system fails to create appropriate images of people with
disabilities.
Categories of risk
This sub-guideline explores new types of risk arising from the integration of AI that
may not be familiar to parliaments and that, if not addressed effectively, can
undermine democratic processes and public trust in parliamentary institutions.
Lack of AI literacy
AI literacy is an understanding of the basic principles, capabilities and limitations of
AI – something that is crucial for informed decision-making about AI adoption and
oversight in parliaments. It involves the ability to recognize AI applications, grasp
fundamental concepts like machine learning and data analysis, and critically
evaluate AI’s potential impacts. Without adequate AI literacy, users may misinterpret
AI results, fail to recognize discriminatory patterns, become overly reliant on flawed
AI systems, and overlook ethical and legal implications. This can lead to poor
decision-making and potential harm.
Bias and discrimination
AI systems used in parliamentary functions, such as for automated decision-making
or policy analysis, can reflect and reinforce cognitive and other biases present in
their training data. This can result in skewed policy recommendations and
discriminatory legislative outcomes, adversely affecting minority groups, and
undermining the principles of equality and fairness that underpin democratic
institutions.
Privacy invasion
Parliamentary systems often handle sensitive personal and political data. Improper
data-protection measures can lead to privacy infringements when using AI for data
analysis and decision-making. Unauthorized access to, or misuse of, this data can
compromise the privacy of citizens, MPs and other stakeholders, eroding trust in
parliamentary processes.
Security vulnerabilities
AI systems, particularly those used in parliamentary settings, are potential targets for
cyberattacks. These attacks can lead to the manipulation or theft of sensitive
legislative data, but can also disrupt parliamentary operations or compromise the
integrity of legislative processes. This poses significant risks to national security and
public safety.
Lack of accountability
The opaque nature of AI decision-making – often termed the “black box” problem –
presents challenges in parliamentary contexts where transparency and
accountability are paramount. Decisions made or influenced by AI without clear
explanations can lead to difficulties in holding the right entities to account for
legislative outcomes, diminishing public trust in democratic institutions.
Job displacement
While AI can improve efficiency, the automation of administrative tasks within
parliamentary functions can lead to job and task displacement, particularly for
support and administrative staff. As AI becomes increasingly adept at handling
routine tasks such as scheduling, document processing and data analysis, the need
for human involvement in these roles may decrease. This reduction in demand can
lead to workforce downsizing, resulting in unemployment and economic disruption
for those affected.
Aside from the loss of jobs, the nature of remaining roles may change significantly.
Tasks that were once performed by human workers may be automated, leading to a
shift towards more complex, decision-oriented or creative responsibilities that require
a higher level of expertise. This evolution in job tasks can be challenging for
employees who may not have the skills or experience needed to adapt, creating
further risks of job insecurity and potential displacement.
The shift towards AI-driven processes also has the potential to increase job
polarization, where low-skill, routine jobs are automated, leaving a gap that may not
easily be filled by existing employees. This could exacerbate social and economic
inequalities, particularly if the affected workers are unable to transition into new roles
that require different skills.
Ethical dilemmas
AI applications in parliamentary settings raise ethical questions, particularly
regarding the delegation of decision-making authority. Relying on AI for policy
recommendations, legislative drafting or constituent services can lead to ethical
dilemmas, especially if AI decisions conflict with human values or lack the necessary
contextual understanding. Different AI services may report varying values depending
on the country in which the underlying model is defined and trained.
Shadow AI
Shadow AI, which is related to the concept of shadow IT, can be defined as the
unsupervised or unsanctioned use of generative AI tools within an organization or
institution outside of its IT and cybersecurity framework. Shadow AI can expose
organizations to the same risks as shadow IT: data breaches, data loss, non-
compliance with privacy and data protection regulations, lack of oversight from IT
governance, misallocation of resources, and even new risks stemming from a lack of
understanding of the technology, such as the creation of AI models with biased data
that can produce incorrect results.
Lack of trust
The absence of clear information on how these systems respect privacy or
nature of the data used for training further exacerbates distrust. Users may be
concerned that their data could be misused or that the AI system’s decisions are
biased or flawed owing to inadequate or biased training data. This lack of trust can
hinder the effective integration of AI in parliamentary operations, as stakeholders
may be reluctant to rely on systems they do not fully understand or trust.
The overall risk is that without trust, the benefits of AI may not be fully realized, as
users may resist or underutilize these systems, potentially leading to inefficiencies
and a failure to achieve the intended improvements in parliamentary processes.
Cognitive bias types
Automation bias
Automation bias occurs when conclusions drawn from algorithms are valued more
highly than human analyses. For example, people will often blindly follow satellite
navigation systems and arrive at the wrong place, or cross dangerous streets and
put their life at risk.
Implicit bias
Implicit bias refers to the practice by which people unconsciously associate
situations with their own mental model of representing that situation. For example,
people often assume that a younger colleague cannot be experienced enough to be
a good manager, or that an older employee is not able to learn new skills.
In-group favouritism
In-group favouritism occurs when someone acts with partiality towards the existing
aspects of the group to which they belong. For example, people may systematically
recommend someone from their “group” for a job, while sports fans will always view
their team as the best.
Out-group favouritism
Out-group favouritism refers to the favouring of groups outside the group to which a
person belongs. For example, a manager who does not recognize the talent
available in their own team will always turn to someone from another team for advice
or support.
Affinity bias
Affinity bias happens when someone prefers individuals who are similar to them in
terms of ideology, attitudes, appearance or religion. For example, a hiring manager
might prefer a candidate who went to the same university as they did, overlooking
other qualified applicants.
Social bias
Social bias occurs when many individuals in a society or community share the same
bias. The simplest examples are religion and politics. Some people are so closed in
a belief system that they are incapable of seeing both sides of an argument. They
seek only information that supports their belief and negate anything that counters it,
demonstrating their bias in their every action.
Requirement bias
Requirement bias refers to the assumption that all people or situations are capable
of meeting, or meet, the same technical requirements (hardware and/or software). It
is a subset of “rules and systems bias”.
Anchoring bias
Anchoring bias occurs when people rely too much on pre-existing information, or on the first information they find, when making decisions. For example, if someone sees a computer that costs $5,000, and then sees a second one that costs $2,000, they are likely to view the second computer as cheap. This type of bias can impact procurement decisions.
Availability bias
Availability bias is a mental shortcut whereby people tend to ascribe excessive
weight to what is readily “available” – i.e. what comes easily or quickly to mind –
when making judgements and decisions. For example, people remember vivid
events like plane crashes over more common incidents such as car crashes, despite
the latter being much more common. As a result, they often overestimate the
likelihood that a plane will crash and might even choose to drive rather than fly, even
though they are much more likely to be involved in a road traffic accident. This type
of bias can occur when business staff are describing business rules to developers.
Confirmation bias
Confirmation bias refers to the fact that people tend to prefer information that
confirms their existing beliefs. It affects how people design and conduct surveys,
interviews or focus groups, and analyse competition. Essentially, people construct
questions in a way that will produce the answers they want. For example, if someone
types the question “Are dogs better than cats?” into an online search engine, articles
that argue in favour of dogs will appear first. Conversely, the question “Are cats
better than dogs?” will produce results in support of cats. This applies to any two
variables: the search engine “assumes” that the person thinks variable A is better
than variable B, showing them results that agree with their opinion first.
Groupthink
Groupthink refers to the fact that people in a group tend to make non-optimal
decisions based on their desire to conform to the group, or for fear of dissenting. For
example, when the leader of a group tells everyone that they need to ban all
members of a particular ethnic group from joining them, the members of the group
accept that decision without questioning it.
Funding bias
Funding bias occurs when biased results are reported in order to support or satisfy
the organization funding a piece of research. For example, a study published in a
scientific journal found that drinks containing high-fructose corn syrup did not
increase liver fat or ectopic fat deposition in muscles. However, the
“acknowledgements” section shows that one of the researchers received funding
from a major soft-drinks company. The results may therefore have been skewed to
paint the funding organization in a positive light.
Rashomon effect
The Rashomon effect is a term derived from the classic 1950 Japanese film
Rashomon, which explores the concept of subjective reality and the nature of truth
by presenting differing accounts of a single event from the perspectives of multiple
characters. This bias occurs when there are differences in perspective, memory and
recall, interpretation, and reporting on the same event from multiple witnesses. For
example, people who attended a legislative committee meeting might have different
perceptions regarding the debate and, therefore, provide a different summary of the
event.
Streetlight effect
The streetlight effect refers to the fact that people tend to search only where it is
easiest to look, such as when data scientists develop an AI algorithm using only a
small data set (i.e. only the data they have access to) instead of considering
obtaining more complete data from other organizations.
Ranking bias
Ranking bias is a form of anchoring bias. It refers to the fact that, in a list of search
engine results, people believe that the highest-ranked results are the most relevant
and important. They will still tend to click more on the top result than others, even if
the results are ranked randomly.
Ecological fallacy
The ecological fallacy refers to the drawing of conclusions about individuals based on
group-level data. For example, if a specific neighbourhood has a high crime rate,
people might assume that any resident living in that area is more likely to commit a
crime.
Survivorship bias
Survivorship bias is when people focus on the items, observations or people who
“survive” (i.e. make it past a selection process), while overlooking those who do not.
For example, by assessing only “surviving” businesses and mutual funds, analysts
record positively biased financial and investment information – omitting the many
companies that failed despite having characteristics similar to those of the successful ones.
Data bias types
This sub-guideline focuses on data biases – a type of error in which certain elements
of a data set are more heavily weighted or represented than others, painting an
inaccurate picture of the population. A biased data set does not accurately represent
a model’s use case, resulting in skewed outcomes, low accuracy levels and
analytical errors.
Selection bias
Selection bias occurs when the data chosen to build a system does not represent the population to which the system will be applied.
In one example, an AI system for detecting Parkinson’s disease was trained using a
data set containing only 18.6% women. Consequently, the rate of accurate detection
of symptoms was higher among male than female patients even though, in reality,
the symptoms in question are more frequently manifested by female patients.
In another example, an AI system for detecting skin cancer was not able to detect
the disease in people of African descent. Researchers observed that, as rates of skin
cancer were increasing in Australia, the United States and Europe, the data set used
to train the system consisted largely of people of European descent.
Sampling bias
Sampling bias is a form of selection bias in which data is not randomly selected,
resulting in a sample that is not representative of the population. For example, if a
poll for a national presidential election targets only middle-class voters, the sample
will be biased because it will not be diverse enough to represent the entire
electorate.
Coverage bias
Coverage bias is a form of sampling bias that occurs when a selected population
does not match the intended population. For example, general national surveys
conducted online may miss groups with limited internet access, such as the elderly
and lower-income households.
Participation bias
Participation bias is a form of sampling bias that occurs when people from certain
groups decide not to participate in the sample. It occurs when the sample consists of
volunteers, which also creates a bias towards people who are willing and/or available
to participate. The results will therefore only represent people who have strong
opinions about the topic, omitting others.
Popularity bias
Popularity bias is a form of sampling bias that occurs when items that are more
popular gain more exposure, while less popular items are underrepresented. For
example, recommendation systems tend to suggest items that are generally popular
rather than personalized picks. This happens because the algorithms are often
trained to maximize engagement by recommending content that is liked by many
users.
Data inaccuracy
Data inaccuracy results from failures in data entry or capture. For example, if a
system registers temperature automatically and the sensor fails, the resulting data
set cannot be trusted wherever temperature is used as a variable. In addition, some
systems do not enforce strict data entry, accepting values that are non-standard or
erroneous.
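A simple safeguard is to validate data at the point of entry. The following minimal sketch rejects physically implausible sensor readings before they enter the data set; the valid range is an illustrative assumption for an indoor sensor:

```python
# Minimal sketch: reject implausible temperature readings at data entry.
# The valid range is an illustrative assumption, not a general rule.
VALID_RANGE_C = (-10.0, 50.0)

def validate_reading(value_c: float) -> float:
    low, high = VALID_RANGE_C
    if not (low <= value_c <= high):
        raise ValueError(f"Implausible temperature {value_c} C: likely sensor failure")
    return value_c

readings = [21.4, 22.0, -273.0, 23.1]  # -273.0 signals a failed sensor
clean = []
for reading in readings:
    try:
        clean.append(validate_reading(reading))
    except ValueError as error:
        print(f"Rejected: {error}")

print("Accepted readings:", clean)
```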
Obsolete data
Obsolete data is data that is too old to reflect current trends. For example, a system
designed to predict how long a public procurement exercise will take is trained on an
excessively large data set, consisting mostly of procurement exercises that
happened 10 years ago under different legislation. As a result, this system will likely
produce inaccurate predictions.
Temporal bias
Temporal bias occurs when the training data is not representative of the current
context in terms of time. For example, census data – which is only collected once
every 10 years – is used for many predictions. However, if the last available census
data was collected in 2021, i.e. in the middle of the COVID-19 pandemic, then
algorithms that use this data may be biased in a number of ways.
Confounding variable
A confounding variable, in research investigating a potential cause-and-effect
relationship, is an unmeasured third variable that influences both the supposed
cause and the supposed effect. For example, when researching the correlation
between educational attainment and income, geographical location can be a
confounding variable. This is because different regions may have varying economic
opportunities, influencing income levels irrespective of education. Without controlling
for location, it is impossible to determine whether education or location is driving
income.
Simpson’s paradox
Simpson’s paradox is a phenomenon that occurs when subgroups are combined into
one group. The process of aggregating data can cause the apparent direction and
strength of the relationship between two variables to change. For example, a study
shows that, within an organization, male applicants are more successful overall than
female applicants. However, comparing the rates within departments paints a
different picture, with female applicants having a slight advantage over men in most
departments.
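The following illustrative sketch reproduces this reversal with invented figures: women have the higher success rate in each department, yet the aggregated totals suggest the opposite:

```python
# Illustrative (invented) hiring data reproducing Simpson's paradox.
# In each department women have the higher success rate, yet the
# aggregated figures point the other way.
data = {
    "Dept X": {"male": (60, 100), "female": (13, 20)},   # (hired, applicants)
    "Dept Y": {"male": (4, 20),   "female": (25, 100)},
}

totals = {"male": [0, 0], "female": [0, 0]}
for dept, groups in data.items():
    for sex, (hired, applied) in groups.items():
        totals[sex][0] += hired
        totals[sex][1] += applied
        print(f"{dept} {sex:6}: {hired / applied:.0%} success rate")

for sex, (hired, applied) in totals.items():
    print(f"Overall {sex:6}: {hired / applied:.0%} success rate")
```

The per-department rates (women ahead in both) and the overall rates (men ahead) are computed from the same numbers; only the aggregation changes.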
Linguistic bias
Linguistic bias occurs when an AI algorithm favours certain linguistic styles,
vocabularies or cultural references over others. This can result in output that is more
relatable to certain language groups or cultures, while alienating others.
Generic risks and biases:
Processing and validation biases
About this sub-guideline
This sub-guideline is part of the guideline Generic risks and biases. Refer to the main
guideline for context and an overview.
This sub-guideline focuses on processing and validation biases, which arise from
systematic actions and can occur in the absence of prejudice, partiality or
discriminatory intent. In AI systems, these biases are present in algorithmic
processes used in the development of AI applications.
Aggregation bias
Aggregation bias arises when a model assumes a one-size-fits-all approach for
different demographic groups that, in reality, may have different characteristics or
behaviours.
Amplification bias
Amplification bias occurs when several AI systems, each with separate biases
influenced by their training data and programming, interact and mutually reinforce
each other’s biases, leading to a more pronounced and persistent bias than what
any single system might display.
For instance, a system trained on historical hiring data, in which male candidates
have been predominantly selected, unintentionally favours male candidates during
CV screening. Another AI system, tasked with performance evaluation, has been
trained on data where female employees were often given lower scores owing to
latent human biases. As these two systems interact, the hiring AI system may
propose a larger number of male candidates, while the performance-evaluation AI
system continues to judge female employees more harshly.
Deployment bias
Deployment bias – perhaps more of an operational failing than a bias – occurs when
a system that works well in a test environment performs poorly when deployed in the
real world owing to differences between the two environments.
Evaluation bias
Evaluation bias is a type of discrimination in which the methods used to evaluate an
AI system’s performance are biased, leading to incorrect assessments of how well
the system is working.
Optimization bias
Optimization bias occurs when the objective function of an AI system is defined in a
way that leads to unintended consequences or unfair outcomes.
Proxy bias
Proxy bias occurs when variables used as proxies for protected attributes (such as
race or gender) introduce bias into the model.
Temporal bias
Temporal bias occurs when training data becomes outdated and no longer
represents current realities, leading to biased predictions. While this might be
considered a data bias, it is also a processing/validation bias because it often occurs
when systems fail to consider temporal aspects of the data validation process, or
when the process of updating and validating models fails to adequately account for
changes in the underlying data distribution over time.
Ethical principles
Audience
This guideline is intended for senior parliamentary managers and senior IT
professionals in parliaments with responsibility for implementing AI and for its
ongoing governance and management.
This guideline and its sub-guidelines present a range of ethical principles related to
AI. They discuss how AI can be implemented ethically across parliamentary
processes and practice, at all levels of the institution. Ethical principles for AI are
explored across eight areas:
• Privacy
• Transparency
• Accountability
• Fairness and non-discrimination
• Robustness and safety
• Human autonomy and oversight
• Societal and environmental well-being
• Intellectual property
The code of ethics should be explicit about what parliament expects from the
operation of AI systems and from the people involved in their production and use. It
should align with relevant national and international laws, regulations and standards.
It should include recommendations, guidance and limitations for each ethical
principle, which should apply throughout the entire AI system life cycle – from
planning to decommissioning.
The following section presents a model that parliaments can adopt if they wish. In
this model, ethical principles for parliamentary AI use are broken down into eight
areas:
● Privacy: AI systems should respect and uphold privacy rights and data
protection.
● Transparency: People should be able to understand when and how they are
being impacted by AI, through transparency and responsible disclosure.
● Accountability: It should be possible to identify who is responsible for the
different phases of the AI system life cycle.
● Fairness and non-discrimination: AI systems should be inclusive,
accessible and not cause unfair discrimination against individuals,
communities or groups.
● Robustness and safety: AI systems should reliably operate in accordance
with their intended purpose.
● Human autonomy and oversight: AI systems should respect people’s
freedom to express opinions and make decisions.
● Societal and environmental well-being: AI systems should respect and
promote societal well-being and environmental good.
● Intellectual property: AI systems should respect intellectual property rights.
Each of these areas is explored in turn in the remainder of this guideline, and in the
associated sub-guidelines, which describe the specific challenges and
considerations for parliaments, and offer practical guidance, actionable strategies
and recommendations.
Privacy
This sub-guideline explores the principle of privacy in AI governance for parliaments,
with a focus on personal data protection. It outlines specific privacy concerns in
various parliamentary work processes, including legislative, administrative and
citizen interaction contexts. It emphasizes the importance of justifying and limiting
the use of personal data in AI systems, and provides guidance on handling sensitive
information. Special attention is given to the challenges posed by generative AI in
processing personal and sensitive data.
For further guidance on the principle of privacy, refer to the sub-guideline Ethical
principles: Privacy.
Transparency
This sub-guideline explores the principle of transparency in AI governance for
parliaments. It defines transparency as the communication of appropriate information
about AI systems in an understandable and accessible format. The sub-guideline
addresses three key aspects of transparency: traceability, explainability and
communication.
Highlighting the importance of documenting the entire life cycle of AI systems, from
planning to decommissioning, it provides practical recommendations for
implementing transparency. These include risk assessment documentation,
standardized methods for explaining AI decisions, and clear communication about AI
system capabilities and limitations. The sub-guideline also offers specific guidance
on ensuring transparency in generative AI applications, acknowledging the unique
challenges they present.
Accountability
This sub-guideline explores the principle of accountability in AI governance for
parliaments. It emphasizes that while AI systems themselves are not responsible for
their actions, clear accountability structures are essential.
Fairness and non-discrimination
This sub-guideline explores the principle of fairness and non-discrimination in AI
governance for parliaments, focusing on identifying and minimizing the biases that
could lead AI systems to discriminate unfairly against individuals, communities or
groups.
Robustness and safety
The sub-guideline presents the principle of robustness and safety through two
lenses: resilience to failures that could cause damage to people, organizations or the
environment or that could prevent traceability, and resilience to cyberattacks.
Human autonomy and oversight
This sub-guideline explores the principle of human autonomy and oversight in AI
governance for parliaments, including the interaction between AI systems and
humans, as well as the way in which information is stored, transmitted and secured.
It stresses that parliaments, as enablers of a democratic, flourishing and equitable
society, must support the user’s agency and uphold fundamental rights and that, in
an AI context, this requires human oversight.
Intellectual property
This sub-guideline explores the principle of intellectual property in AI governance for
parliaments. It emphasizes that everyone involved in an AI system’s life cycle,
including users, must respect intellectual property in order to protect the investment
of rights-holders in original content. It covers copyrights, accessory rights, and
contractual restrictions on accessing and using content.
Ethical principles:
Privacy
About this sub-guideline
This sub-guideline is part of the guideline Ethical principles. Refer to the main
guideline for context and an overview.
The following sections address specific privacy concerns raised by the diverse work
processes of parliaments.
Use of personal data
It is important to exercise caution and restraint when using personal data, and only to
do so where absolutely necessary. Parliaments should adhere to the following
principles:
• The use of personal data in AI systems should be justified and approved by
parliament’s data protection officer (or equivalent person) and by key decision
makers within the AI governance framework.
• If approval for the use of personal data is given, strict practices must be put in
place to safeguard privacy and to prevent misuse. Such practices must
protect individuals from exposure, even indirectly, especially when dealing
with biometric data or when combining information from multiple sources.
• AI systems must not profile individuals according to their behaviour or use
personal data in ways that could lead to discrimination, the manipulation of
opinions, or any form of harm, whether psychological, physical or financial.
• Explicit authorization should be required for the use of sensitive data, adding
an extra layer of protection and accountability.
• Special conditions may be required for the use of personal data for research
purposes or to support bills going through parliament, especially if parliament
already has internal regulations regarding the use of personal data.
Administrative processes
Where parliament is adopting or developing AI systems, it should identify,
understand and document what data is being used – both internal data, and
externally sourced or hosted data – and identify who the owner of that data is.
Citizens’ data
When interacting with citizens, parliaments must take special care to manage and
protect the personal data they collect, such as through an online digital service or a
manual data-collection process. They must also carefully consider what data is
stored in a system that is exposed to AI, and ensure that only essential data is
retained. More generally, when designing an AI system, parliaments need to
understand the parameters of data privacy, knowing what is admissible for release
into the public domain, what must be anonymized, and what is protected.
Sensitive data and generative AI
Parliaments must exercise extreme caution and appropriate scrutiny when feeding
personal and sensitive data into generative AI systems, as these systems will
process and use any data given to them. The institution should have in place
mechanisms to protect its personal and sensitive data from inadvertent or
inappropriate access by such tools. This is especially important if this data is
processed externally, as is the case with most generative AI systems.
Where a parliament does authorize personal data for use by generative AI systems,
it should actively implement processes to anonymize this data, as well as adopt
other mechanisms established by internal rules, before submitting any personal data
to such tools. This practice minimizes the risk of personal data breaches and misuse.
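As a minimal sketch of such a process, the following example masks some common personal identifiers using regular expressions before any text is submitted to an external tool. The patterns and the submit_to_generative_ai function are illustrative placeholders, not a complete anonymization solution:

```python
import re

# Illustrative patterns only: real anonymization requires a vetted tool and
# rules tailored to parliament's own data (names, IDs, addresses, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d ()-]{7,}\d\b"),
}

def anonymize(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def submit_to_generative_ai(text: str) -> None:
    # Placeholder for the call to an external generative AI service.
    print("Submitting:", text)

raw = "Contact the citizen at [email protected] or +41 22 555 01 23."
submit_to_generative_ai(anonymize(raw))
```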
Practising privacy
In order to ensure that AI systems respect and protect privacy, parliaments should
adopt a comprehensive approach throughout the AI system life cycle.
Ethical principles:
Transparency
About this sub-guideline
This sub-guideline is part of the guideline Ethical principles. Refer to the main
guideline for context and an overview.
Highlighting the importance of documenting the entire life cycle of AI systems, from
planning to decommissioning, it provides practical recommendations for
implementing transparency. These include risk assessment documentation,
standardized methods for explaining AI decisions, and clear communication about AI
system capabilities and limitations. The sub-guideline also offers specific guidance
on ensuring transparency in generative AI applications, acknowledging the unique
challenges they present.
Traceability
Traceability implies the ability to follow and monitor the entire life cycle of an AI
system, from the definition of its purpose, through to planning, development, use and
ultimate decommissioning.
Architects, developers, decision makers and even users involved in the development
and evolution of AI systems are advised to use a combination of tools and
documentation to support traceability.
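What such documentation might capture can be sketched as a simple structured record. The fields below are illustrative examples, not a prescribed schema:

```python
# Illustrative traceability record for one stage of an AI system's life cycle.
# Field names are examples only; parliaments should define their own schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LifecycleEntry:
    system: str
    stage: str                 # e.g. planning, development, use, decommissioning
    recorded_on: date
    responsible: str
    data_sources: list = field(default_factory=list)
    notes: str = ""

entry = LifecycleEntry(
    system="Plenary transcription assistant",
    stage="development",
    recorded_on=date(2024, 12, 1),
    responsible="IT unit / data steward",
    data_sources=["plenary audio archive", "corrected transcripts"],
    notes="Model retrained; accuracy evaluated on held-out sessions.",
)
print(entry)
```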
Explainability
Explainability is the ability for humans to understand and trust each decision,
recommendation or prediction made by an AI system.
Communication
Communication is important for transparency: humans must always know that they
are interacting with an AI system. As such, any AI system that interacts with humans
must identify itself unambiguously. It must be explained to users and practitioners, in
a clear and accessible manner, how the system functions and what its limitations
are.
Ethical principles:
Accountability
About this sub-guideline
This sub-guideline on accountability is part of the guideline on the ethical principles
for the use of AI in parliaments.
These practices are particularly important for parliaments that undergo frequent
audits. By implementing robust accountability measures, parliaments can ensure
that their use of AI remains trustworthy and open to scrutiny.
Establish rigorous internal auditing processes for AI systems. These regular checks
help maintain system integrity and provide ongoing assurance of compliance with
ethical standards and operational requirements.
It is equally important to prepare staff for external audits. Provide thorough training
to equip team members with the knowledge and skills needed to engage confidently
with third-party auditors, ensuring transparency and cooperation.
Ethical principles:
Fairness and non-
discrimination
About this sub-guideline
This sub-guideline is part of the guideline Ethical principles. Refer to the main
guideline for context and an overview.
When planning and developing AI systems, parliaments should:
• Data selection: Establish careful practices for selecting the data that will be
used to train the algorithms. Such practices should consider not only data
biases and processing biases, but also cognitive biases (for further guidance
on this topic, refer to the guideline Generic risks and biases and its
associated sub-guidelines).
• Staff training: Provide staff with training in data ethics, focusing on identifying
and minimizing biases throughout the AI development process.
• Data governance: Implement a data governance process, with a clear
delineation of responsibilities between data owners and data stewards.
• Collaboration: Have IT and business units work closely together. Such
collaboration is vital for predicting, minimizing and monitoring biases
throughout the AI system life cycle.
• Data ethics committee or team: Establish a data ethics committee or a
multi-skilled team capable of analysing potential biases and communicating
them to both managers and IT teams for each AI project.
• Diversity and inclusivity: Prioritize diversity and inclusivity when forming
project teams and data ethics committees. By bringing together individuals of
different ages, genders, ethnicities and skill sets, parliaments can ensure that
a broad range of perspectives are heard, reducing the risk that potential
biases are overlooked and enhancing the overall fairness of AI systems.
When planning and developing AI systems for use in legislative processes,
parliaments should:
● Ensure that the data does not contain biases regarding political-party ideology
and previous value judgements
● Be aware of possible historical biases in data relating to committee meetings
and plenary sessions
● Establish partnerships with public organizations from which they regularly
source external data for AI-powered bill-drafting systems, in order to maintain
data quality
● Be aware of biases in text translation and speech-to-text transcription
● Confirm whether the information produced by generative AI systems is free
from biases before considering using them
● Identify data quality problems in government data and alert the government
agency in charge of the data
● Establish partnerships with government agencies in charge of the data in
order to improve data quality and minimize biases
Similar considerations apply when planning and developing AI systems for use in
citizen interaction processes.
Ethical principles:
Robustness and safety
About this sub-guideline
This sub-guideline is part of the guideline Ethical principles. Refer to the main
guideline for context and an overview.
The sub-guideline presents the principle of robustness and safety through two
lenses: resilience to failures that could cause damage to people, organizations or the
environment or that could prevent traceability, and resilience to cyberattacks.
Resilience to cyberattacks
Cyberattacks against AI systems exploit both algorithmic opacity and the strong
dependence of algorithms on data. Such attacks may be difficult to detect in a timely
manner, requiring security management practices for AI systems that encompass
advanced prevention techniques and focus on restoring the system, and the entire
environment, to normal operating conditions.
Resilience to failure
Failures can occur in AI systems when variables take on unknown or invalid values
that the developer did not anticipate and did not guard against programmatically.
While AI systems are generally expected to be robust, if such failures do occur, there
must be a mechanism to restore the system to its normal state in a timely and
responsible manner, with minimal loss of data or impact on parliament.
By adopting this holistic approach, parliaments can create a resilient framework for
AI systems that can withstand threats, adapt to changes, and continue to serve their
intended purpose effectively and safely.
• Maintain strict control over data access, ensuring that AI systems and tools
only interact with data specifically authorized for their intended purpose. This
approach safeguards sensitive information and maintains the integrity of
parliamentary processes.
• Where data transfer to external cloud services raises security concerns or
presents other risks, explore alternative solutions. One viable option is to
employ open-source generative AI models that can run locally on a
parliament’s own systems. This strategy provides the benefits of generative AI
while offering full control over security, data management and integrity. A
minimal sketch of this pattern is shown below.
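In the sketch, a prompt is sent to a hypothetical locally hosted open-source model over HTTP; the endpoint URL, payload format and model name are assumptions that depend entirely on the serving software a parliament chooses:

```python
import json
import urllib.request

# Hypothetical local inference endpoint: no data leaves parliament's network.
# URL, payload shape and model name vary with the serving software used.
ENDPOINT = "http://localhost:8080/generate"

def generate_locally(prompt: str) -> str:
    payload = json.dumps({"model": "local-open-model", "prompt": prompt}).encode()
    request = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("text", "")

print(generate_locally("Summarize today's committee agenda."))
```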
Ethical principles:
Human autonomy and oversight
About this sub-guideline
This sub-guideline is part of the guideline Ethical principles. Refer to the main
guideline for context and an overview.
In order to preserve human agency, direct interaction between human end users
and an AI system should be established in a way that avoids simulating social
relationships or stimulating potentially negative or addictive behaviour.
Human oversight
In the day-to-day running of organizations, human oversight is exercised through
human supervision of an AI system’s outputs. Users and managers responsible for
AI systems analyse this output to ascertain whether undesirable behaviours have
occurred, whether the rules established at the development stage need to be
modified, or whether there are any data biases that went unnoticed during the
development of the system.
There are three types of human supervision that can be applied to AI systems:
human-in-the-loop (HITL), human-on-the-loop (HOTL) and human-in-command
(HIC). The distinction between these three approaches lies primarily in the level of
autonomy granted to the AI system and the extent of human oversight:
● HITL: A human reviews and approves each individual decision,
recommendation or prediction before it takes effect.
● HOTL: The system operates autonomously, while a human monitors its
outputs and can intervene or override it when necessary.
● HIC: A human retains overall command of the system, deciding when and
how it is used and retaining the ability to override or shut it down.
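The human-in-the-loop pattern can be sketched minimally as follows; the function names and approval flow are illustrative assumptions, not a prescribed design:

```python
# Minimal human-in-the-loop (HITL) sketch: no AI suggestion takes effect
# without explicit human approval. All names are illustrative.
def ai_suggest(document: str) -> str:
    return f"Proposed summary of: {document}"

def human_approves(suggestion: str) -> bool:
    answer = input(f"{suggestion}\nApprove? [y/N] ")
    return answer.strip().lower() == "y"

def process(document: str):
    suggestion = ai_suggest(document)
    if human_approves(suggestion):
        return suggestion        # HITL: only approved output is used
    return None                  # rejected suggestions are discarded

result = process("Committee report on data governance")
print("Published:" if result else "Held back for revision.", result or "")
```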
Ethical principles:
Societal and
environmental well-being
About this sub-guideline
This sub-guideline is part of the guideline Ethical principles. Refer to the main
guideline for context and an overview.
Ethical principles:
Intellectual property
About this sub-guideline
This sub-guideline is part of the guideline Ethical principles. Refer to the main
guideline for context and an overview.
● Before starting to develop an AI system, check for copyrighted data and any
contractual conditions.
● When using generative AI, check whether the data sources it draws on could
lead it to generate copyrighted content.
● Standardize the types of documents for which generative AI cannot be used
to generate content.
● Expressly inform readers if a document’s content has been written using
generative AI.
● Protect parliament’s unpublished or sensitive work by avoiding uploading it
into an online AI system unless there are assurances that the data will not be
reused.
● Train internal staff on the technical and ethical implications of intellectual
property rights.
● Create a specific peer-review process for researchers who are using
generative AI, in order to both maintain high standards of quality and protect
against breaches of intellectual property rights.
Introducing AI
applications
Audience
This high-level guideline is intended for senior parliamentary managers, as well as
for parliamentary staff and MPs who are interested in gaining a broad understanding
of where AI can impact upon the work of parliaments.
Procurement considerations
Increasingly, many off-the-shelf software packages are being augmented with AI
functionalities. This AI integration is often opaque, and it is not immediately apparent
to the user what impact AI is having, or how it works behind the scenes:
● Microsoft 365 and Microsoft Edge products are starting to embed Microsoft
Copilot AI support.
● Google Docs already has new AI features such as “Help me write”, “Smart
compose”, “Summarization” and “Voice typing”.
● Adobe Acrobat has integrated several AI functionalities, including AI Assistant
for generating summaries and creating multi-document insights.
● Photo and video editing software often contains AI augmentation that makes it
easy to render manipulated images.
Checklist:
● Conduct thorough vendor assessments, focusing on AI ethics and data
practices.
Implementation strategy
A measured approach to implementation allows parliaments to assess the impact of
AI functionality and its alignment with existing processes. Starting with a pilot phase
provides an opportunity to develop clear protocols for AI feature usage and establish
monitoring mechanisms.
Checklist:
● Begin with a pilot phase to evaluate AI impact and alignment.
● Develop protocols for enabling or restricting AI features based on task
sensitivity.
● Establish monitoring mechanisms for AI-driven decisions or suggestions.
Staff training and awareness are equally important.
Checklist:
● Develop training programmes highlighting AI benefits and risks (including data
literacy and AI literacy).
● Create user guidelines for the appropriate usage of AI functionalities.
● Educate staff on recognizing and critically evaluating AI-generated content.
Ongoing monitoring and governance are also needed.
Checklist:
● Regularly review and update AI usage policies.
● Maintain open communication channels with vendors.
● Conduct periodic audits of AI feature usage and impact.
● Assess and mitigate potential biases in AI-enhanced features.
● Ensure transparency and maintain human oversight in AI use.
Conclusion
By following these recommendations and checklists, parliaments can harness the
benefits of AI-enhanced off-the-shelf products while mitigating risks and upholding
ethical standards. This approach aligns with the broader AI governance framework
outlined in the Guidelines, ensuring a consistent and responsible approach to AI
adoption across all areas of parliamentary work.
Training for data literacy
and AI literacy
This guideline stresses the importance of understanding AI’s potential benefits and
risks in a parliamentary context, as well as the ethical considerations involved. It also
highlights the need for a multidisciplinary approach and the development of a data-
driven culture within parliaments.
Overall, this guideline provides guidance for parliaments on building capacity and
preparing their staff for the effective and responsible use of AI technologies.
By strategically building staff skills and capabilities, parliaments can maximize the
benefits of AI while effectively managing its risks, avoiding both overestimation of
AI’s impact and underestimation of its challenges.
Given that AI systems rely on data, parliamentary training programmes should cover
both AI literacy and data literacy – since good-quality, well-managed and understood
data is at the heart of successful AI implementations. It is important for parliaments
to start developing these programmes before working with AI.
What is data literacy?
Data literacy is the ability to read, understand, create and communicate data as
information. It involves understanding how to effectively collect, analyse, interpret
and present data in meaningful ways. Data literacy includes knowing where data
comes from, and grasping basic statistical concepts and data presentation and
visualization techniques. Data literacy is vital for critically evaluating data-driven
arguments and conclusions.
Those requiring a more advanced level of data literacy will need to understand a
broader range of topics.
It is important to start a data literacy programme before working with AI, and for
parliaments to have a plan in place to ensure good levels of AI literacy.
In addition, the emergence of generative AI takes the technology to the end user and
gives them the power to harness it in their work. For this reason, it is important for
staff and MPs to understand the basic tenets of data literacy and AI literacy,
including the risks and downsides, before they utilize such tools in parliament.
Data literacy in an AI context
By establishing a training programme and building a solid foundation in data
management, parliaments can harness the full potential of their data assets to
support evidence-based decision-making, foster public trust and uphold democratic
principles.
For further guidance on the training requirements for data literacy when using AI,
refer to the sub-guideline Training for data literacy and AI literacy: Data literacy in an
AI context.
Developing AI literacy
AI literacy is crucial in parliaments because it enables MPs, decision makers and
staff to make informed choices about AI adoption, shape appropriate policies and
regulations, and effectively oversee AI-driven initiatives.
Having a well-trained workforce that is familiar with at least the basic tenets of AI will
help parliament both to leverage the opportunities that AI presents for enhancing
parliamentary functions and to mitigate the potential risks.
Training for data literacy and AI literacy:
Data literacy in an AI context
About this sub-guideline
This sub-guideline is part of the guideline Training for data literacy and AI literacy.
Refer to the main guideline for context and an overview.
In parliaments, data has emerged as a critical asset across all business domains. As
parliaments increasingly rely on data-driven insights to fulfil their mandate, the
importance of robust data management practices cannot be overstated. Becoming
data-literate empowers parliaments to harness the full potential of their data assets,
driving informed decision-making, enhancing transparency, fostering public trust and
strengthening the foundations of democratic governance. This is fundamental to the
adoption of AI.
Data literacy training for MPs and non-technical staff
Users are important actors in a data culture, since they add data into many systems
and are often the people who come face-to-face with the data. It is important for
them to understand basic data principles and to be attuned to possible errors or
problems that can arise.
Increasingly, too, users are extracting, combining and otherwise repurposing data,
often into top-line reports or business dashboards. In all cases, this requires quality
assurance, as well as an understanding of where the data comes from and what it
means.
AI can be a powerful tool for analysing and understanding data and trends. But it can
also be unreliable, which is why data literacy is especially important in this context.
By equipping decision makers with these skills, parliaments can ensure that AI
adoption is guided by informed leadership, aligning with institutional goals while
adhering to ethical standards and best practices.
A targeted data literacy training programme can be designed specifically for senior
leaders and decision makers.
A data literacy training programme for technical staff could be structured in different
ways, depending on the needs of parliament.
By providing comprehensive training in this way, parliaments can ensure that their
technical staff are well-equipped to lead the responsible and effective
implementation of AI technologies, ultimately enhancing the efficiency and
effectiveness of parliamentary operations.
Having a well-trained workforce that is familiar with at least the basic tenets of AI will
help parliament both to leverage the opportunities that AI presents for enhancing
parliamentary functions and to mitigate the potential risks.
By the end of this programme, MPs are equipped with the knowledge to confidently
engage in AI-related policy discussions, make informed decisions about AI adoption
in parliamentary processes, and navigate the increasingly AI-influenced landscape of
modern governance. This comprehensive yet accessible approach ensures that MPs
can harness the benefits of AI while safeguarding democratic values and protecting
the public interest.
For parliaments primarily using generative AI, the programme could be adapted to
emphasize avoiding inaccuracies, hallucinations and biases in AI outputs. It could
also stress the importance of clear guidelines to protect against adversarial prompts
and to maintain information security.
Planning and
implementation
Project portfolio
management
Audience
This guideline is intended for parliamentary staff including senior leaders who may
not have a technical background in AI but who are involved in the management or
oversight of AI projects.
This guideline aims to equip parliaments with the knowledge and tools they need to
navigate the complexities of AI project portfolio management (PPM), ensuring that
AI technologies are implemented successfully and in line with ethical principles.
Given the wide and growing range of AI technologies, including the many large
language models (LLMs) available in the field of generative AI, parliaments must
carefully evaluate and select the most suitable AI technologies to support their
objectives.
Effective AI PPM involves ensuring that AI initiatives are aligned with organizational
goals, while also prioritizing transparency, accountability and ethical considerations.
By adopting a proactive approach to AI PPM, parliaments can harness the full
potential of AI to drive innovation, efficiency and effectiveness in legislative
processes and governance.
At its core, AI PPM involves identifying, evaluating and prioritizing AI projects based
on their potential value, feasibility and alignment with strategic objectives. Each
potential or existing AI project can be evaluated against a consistent set of key
criteria.
Prioritization
Once parliament has identified and evaluated AI projects, the next important step is
to review the institution’s (potential) AI portfolio as a whole and to prioritize the
workstream (a simple weighted-scoring sketch follows the list below):
● Evaluate how each AI project aligns with parliament’s overall strategic goals
and rank them according to potential value and impact.
● Assess resource availability, conflicts and constraints.
● Evaluate the potential benefits and risks of each AI project and prioritize those
with favourable risk-reward ratios.
● Identify projects that are prerequisites for others or that could create synergies
if implemented together, and consider prioritizing those that unlock value in
other projects or create a foundation for future initiatives.
● Understand the time sensitivity of each project, looking to balance quick wins
and long-term strategic value.
● Assess the impact of each project on key stakeholders (both internal and
external) and prioritize those with high levels of stakeholder support.
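One lightweight way to operationalize such a review is a weighted scoring model. The following sketch is illustrative: the criteria, weights and scores are invented for demonstration and should be replaced with parliament's own:

```python
# Illustrative weighted-scoring sketch for prioritizing an AI project portfolio.
# Criteria, weights (summing to 1.0) and 1-5 scores are assumptions.
WEIGHTS = {
    "strategic_alignment": 0.30,
    "value_and_impact": 0.25,
    "risk_reward": 0.20,
    "resource_fit": 0.15,
    "stakeholder_support": 0.10,
}

projects = {
    "Transcription assistant": {"strategic_alignment": 5, "value_and_impact": 4,
                                "risk_reward": 4, "resource_fit": 3,
                                "stakeholder_support": 5},
    "Bill-drafting support":   {"strategic_alignment": 4, "value_and_impact": 5,
                                "risk_reward": 3, "resource_fit": 2,
                                "stakeholder_support": 4},
}

def score(criteria: dict) -> float:
    return sum(WEIGHTS[name] * value for name, value in criteria.items())

# Rank projects from highest to lowest weighted score
for name, criteria in sorted(projects.items(), key=lambda p: score(p[1]), reverse=True):
    print(f"{name}: {score(criteria):.2f}")
```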
Other approaches that can be used for PPM include Balanced Scorecard and
Theory of Constraints.
Parliaments can use their existing methodologies to support AI PPM, or they can
adopt an established external framework that fits well with their culture and working
methods.
Project portfolio
management: The STEP
approach
About this sub-guideline
This sub-guideline is part of the guideline Project portfolio management. The main
guideline should be read first for context and an overview.
Segmentation
Segmentation offers a systematic framework for analysing activities within
parliamentary proceedings by delineating related processes and tasks.
Crucially, this approach emphasizes that parliamentary staff should take the lead in
task segmentation, leveraging their domain expertise to identify suitable tasks for AI
integration. It also underscores the importance of staff experimentation with AI tools
before widespread adoption, enabling them to assess suitability, usability and
effectiveness in real-world contexts.
Parliamentary staff can be enrolled in certifying courses that give them the
knowledge and skills needed to leverage AI solutions effectively in their roles. These
skills could include fine-tuning models on the organization’s own documents and
data, mastering prompt engineering to create effective commands or prompts for AI
systems, and evaluating the validity of predictions made by these systems.
Performance evaluation
The performance evaluation phase acknowledges that the introduction of AI systems
in parliaments represents a shift rather than just a lift in operational dynamics.
Performance evaluation encompasses several aspects.
Summary
The STEP approach provides a structured and adaptable AI PPM method for
parliaments. It helps ensure alignment with organizational goals, optimize resource
allocation and maximize the overall impact of AI investments. By adopting this
approach, parliaments can navigate the complex landscape of AI projects with
confidence, driving innovation, efficiency and effectiveness in legislative processes
and governance.
Data governance
Audience
This guideline is intended for parliamentary staff involved in the oversight and
implementation of AI-based systems, including business managers, chief information
officers, chief technology officers and IT managers. It will also be useful for senior
parliamentary managers responsible for AI governance.
Parliaments hold large volumes of data, and it is crucial to protect this data from
unauthorized access and misuse. Improving data quality and enhancing data
protection are key components of an organization’s AI governance strategy.
Achieving these improvements requires coordinated actions and agreements among
the various stakeholders involved in data-related decisions and processes.
Data governance plays a crucial role here. It involves coordinating and managing the
efforts of all stakeholders to enhance data quality and protect privacy. By effectively
implementing data governance practices, organizations can ensure they are
developing AI systems that are reliable and trustworthy.
Data quality
Data quality refers to certain features of data that make it accessible, useful and
reliable to support effective decision-making. Data must be demonstrably accurate,
complete, consistent, accessible, relevant and secure.
For further discussion of this subject, refer to the sub-guideline Data governance:
Data quality.
For further discussion of this subject, refer to the sub-guideline Data governance:
Personal data protection.
For further discussion of this subject, refer to the sub-guideline Data governance:
Data governance in a parliamentary context.
For further discussion of this subject, refer to the sub-guideline Data governance:
Data management for AI systems.
Data governance:
Data quality
About this sub-guideline
This sub-guideline is part of the guideline Data governance. Refer to the main
guideline for context and an overview.
This sub-guideline explains why data quality matters, explores the dimensions of
data quality, and examines both the benefits of high-quality data and the risks
associated with low-quality data.
The risks associated with low-quality data include the following:
● Data integrity issues: Errors in data entry or processing can compromise the
accuracy and reliability of data.
● Duplicate data: Having multiple records for the same entity can lead to
confusion and errors in reporting and analysis, making it difficult to keep
different versions of data in sync across operations.
● Inconsistent data: Variations in data formats or standards across different
systems can cause integration issues and inaccuracies.
● Outdated data: Using old or obsolete data can lead to decision-making based
on irrelevant information.
● Incomplete data: Missing data fields can lead to gaps in analysis and hinder
comprehensive decision-making.
● Ambiguous data: Data that is unclear or lacks context can be misinterpreted,
leading to incorrect conclusions.
● Misinformed decisions: Inaccurate or incomplete data can lead decision
makers to make the wrong choices.
Data governance:
Personal data protection
About this sub-guideline
This sub-guideline is part of the guideline Data governance. Refer to the main
guideline for context and an overview.
While the exact rules vary between jurisdictions, data protection regulations
generally impose requirements on how personal data may be collected, processed,
stored and shared.
Sensitive data
There may be stronger legal protections for more sensitive information, such as
health and biometric data, or data revealing racial or ethnic origin, political opinions
or religious beliefs.
Data governance:
Parliamentary context
About this sub-guideline
This sub-guideline is part of the guideline Data governance. Refer to the main
guideline for context and an overview.
Data owner
Every piece of data should have a data owner, who is in charge of decisions
regarding data protection, data storage, data classification, data access, data
formats, metadata, and all improvements necessary to make the data useful for
parliament’s needs.
Data steward
Data owners designate a data steward to support them with managing data quality,
ensuring compliance with policies, and facilitating communication between data
owners and users.
Establishing data ownership
Practices for establishing data ownership can include the following:
● Appoint one or more data stewards to support them in their role and to
facilitate the implementation of the data ownership guidelines.
● Allocate resources to information ownership objectives.
● Apply operational guidelines and procedures for information ownership.
● Establish and measure data performance metrics and communicate about
actual data performance.
● Establish future data requirements based on strategies and business trends.
● Position and manage data as a corporate asset.
Responsibilities of data stewards
The responsibilities of data stewards – some from business units, others from the IT
unit – are as follows:
● Implement and maintain the infrastructure needed to deliver data from its
point of capture and storage to a point of need.
● Manage the availability of systems to access, retrieve and manipulate data.
● Ensure the integration and consistency of data across multiple applications
and sources.
● Ensure that backup and recovery procedures are in place to prevent data
loss.
● Secure data access and provide up-to-date solutions to protect against
malicious code.
● Implement an IT helpdesk function for the following purposes:
○ Logging data issues
○ Escalating data issues to the appropriate information owner
○ Connecting users with second-line support to help interpret data fields
and/or information within data fields
○ Granting controlled access to data to authorized users
○ Optimizing operational efficiency and effectiveness
○ Monitoring and reporting data issues
Responsibilities of parliamentary users
Parliamentary users of AI-based systems have responsibilities of their own, such as
entering data accurately and reporting data issues when they arise.
Data governance:
Data management for AI systems
About this sub-guideline
This sub-guideline is part of the guideline Data governance. Refer to the main
guideline for context and an overview.
Introduction
Parliaments that are planning to implement data management for AI systems should
consider a number of prerequisites, which are explained in more detail below.
Useful metadata for parliament’s corporate data assets can include, for example,
the data’s owner, source, meaning, format and classification.
A typical data-quality process includes the following activities (a minimal validation
sketch follows the list):
● Data profiling:
○ Analysing data structure and contents
○ Identifying patterns, inconsistencies and anomalies in data
● Data quality requirements definition:
○ Establishing quality metrics and criteria (precision, completeness,
consistency, uniqueness)
○ Defining business standards and rules to guarantee data compliance
● Data validation:
○ Applying business rules to validate data precision and consistency
○ Verifying whether the data meets the defined requirements
● Data cleansing/correction:
○ Fixing or removing incorrect, incomplete or duplicated data
○ Standardizing data formats
● Data integration:
○ Combining data from different sources and ensuring it remains
consistent and correct
○ Solving data conflicts and eliminating duplications
● Data enrichment:
○ Incorporating additional information in order to increase data
usefulness and completeness
● Data-quality monitoring:
○ Implementing continuous processes to monitor data quality
○ Using dashboards and reports to track data quality rates
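Parts of the validation and monitoring steps above can be automated. The following minimal sketch applies a few illustrative business rules to records and reports violations for cleansing; the field names and rules are assumptions:

```python
# Minimal data-validation sketch: apply illustrative business rules to records
# and report violations for cleansing. Field names and rules are assumptions.
records = [
    {"id": 1, "title": "Bill 123/2024", "status": "tabled",  "year": 2024},
    {"id": 2, "title": "",              "status": "tabled",  "year": 2024},
    {"id": 3, "title": "Bill 88/2019",  "status": "unknown", "year": 1890},
]

RULES = {
    "title must not be empty": lambda r: bool(r["title"].strip()),
    "status must be known":    lambda r: r["status"] in {"tabled", "adopted", "rejected"},
    "year must be plausible":  lambda r: 1990 <= r["year"] <= 2030,
}

for record in records:
    failures = [name for name, rule in RULES.items() if not rule(record)]
    if failures:
        print(f"Record {record['id']} failed: {', '.join(failures)}")
```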
Parliaments should also implement a structured personal data protection process.
Security management
Audience
This guideline is intended for senior technical staff involved in the development and
implementation of AI systems. Some of the material it contains may also be relevant
to senior parliamentary managers looking to gain a better understanding of technical
issues relating to security.
AI systems are being deployed in many different ways, always with the aim of
helping professionals increase their productivity. Parliaments are undoubtedly going
to follow this trend, aiming for faster processes that accelerate democracy without
harming or hurrying debate. These processes are also expected to be safer, since
parliaments are potential targets for national and international interest groups.
The deeper someone's knowledge of AI, the easier it is for that person to find
ways of misleading the system and turning a breakthrough technology into a
weapon for threatening different actors.
Moreover, even organizations that do not use AI models and systems are at risk,
because criminals are already using AI in an attempt to increase the success rate of
their attacks.
Considering the rise in cyberattacks, which surged after the COVID-19 pandemic,
and the increasing use of AI models, which are the new “holy grail” of technology,
overcoming AI threats is an important part of an organization’s cybersecurity plan.
Attacks can occur in any phase, from data preparation through to AI system
development, deployment and operation (for further discussion of this subject, refer
to the guideline Systems development). As a result, the entire AI system life cycle
should be properly supervised in order to minimize unexpected behaviours.
For further discussion of this subject, refer to the sub-guideline Security
management: Threats.
Cybersecurity controls for AI systems can be grouped into four areas:
• Technical controls
• Organizational controls
• Human controls
• Physical controls
Together, measures across these four areas enable parliaments to enhance the
protection of their AI systems.
Security management:
Parliamentary context
About this sub-guideline
This sub-guideline is part of the guideline Security management. Refer to the main
guideline for context and an overview.
Security management:
Threats
About this sub-guideline
This sub-guideline is part of the guideline Security management. It should be read in
conjunction with the sub-guideline Security management: Good practices. Refer to
the main guideline for context and an overview.
Background
AI systems learn from the data they are fed and then apply models to help them
make decisions, generate new content or do anything else they are programmed to
do.
For this reason, it is essential that they are fed with correct, clean, unbiased data (for
further discussion of this subject, see the guideline Generic risks and biases). Any
change to that data, whether intentional or not, may lead to unexpected results
(e.g. in a budget management system) or behaviour (e.g. in an autonomous car).
The end result is akin to teaching improper behaviour, or giving wrong information,
to a child throughout their life.
It is also important to pay attention to the system that will receive user input, send
this input to the AI system and then return the result to users. In some cases,
attackers can exploit this chain of communication.
Attacks can occur in any phase, from data preparation through to AI system
development, deployment and operation (for further discussion of this subject, see
the guideline Systems development). As a result, the entire AI system life cycle
should be properly supervised in order to minimize unexpected behaviours.
Types of attacks
There are nine common types of attacks on AI systems:
● Adversarial attacks
● Evasion attacks
● Transfer attacks
● Data poisoning attacks
● Model inversion attacks
● Membership inference attacks
● Distributed denial of service (DDoS) attacks
● Data manipulation attacks
● Misuse of AI assistants
Adversarial attacks
In an adversarial attack, the attacker makes small, deliberately crafted changes to
the input data – often imperceptible to humans – in order to trick the AI model into
producing an incorrect output.
Evasion attacks
An evasion attack is considered to be a specific type of adversarial attack. In this
case, the attacker intentionally crafts the input data to evade AI detection or
classification. For example, the attacker may change the way an unsolicited email is
written to avoid being detected by the AI anti-spam system. This can lead to a
malicious message getting through to a regular user, who might click a fake link and
allow an attacker to gain access to an organization’s network.
Transfer attacks
A transfer attack occurs when an attacker uses adversarial-type attacks developed
for one model to deceive other models. The consequences are the same as for an
adversarial attack.
Data poisoning attacks
In a data poisoning attack, the attacker adds data to the data set used to train an AI
model. The model will learn from incorrect information, leading it to make wrong
decisions. For instance, a system could wrongly diagnose a healthy patient as
having a deadly cancer – or, worse still, wrongly diagnose a patient with cancer as
being healthy, preventing the person from receiving proper treatment. In a
parliamentary context, a proposal could be forwarded to the wrong committee for
discussion.
Model inversion attacks
In a model inversion attack, the attacker exploits the model's outputs in order to
reconstruct sensitive information contained in its training data. For example, an attacker
could get the result of a specific patient's blood test, or access other, more sensitive
data. In a parliamentary context, if an AI model is trained on secret voting data, an
attacker may be able to obtain information about how an MP voted.
Misuse of AI assistants
In a parliamentary context, a clerk might use an AI assistant (such as the one pre-
installed on their mobile phone) to assist them with daily tasks. However, the results
this AI assistant produces may be biased according to the clerk’s political or other
personal preferences.
The problem may become more serious if the AI assistant is attacked by a group
with particular preferences on a matter that parliament is discussing, especially if
this matter is sensitive. Likewise, if parliament decides to develop an AI assistant to
support citizens on legislative matters, that assistant must be considered a target for
cyberattacks – not least because its audience is often unknown. In this case,
prompts need, at the very least, to be sanitized before they are submitted as input to
the AI model, in the same way as inputs to any other AI-enabled system.
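As an illustration, the following minimal sketch shows what basic prompt sanitization might involve. The length limit and blocked patterns are hypothetical examples; a real deployment would combine several techniques and keep its rules under continuous review.

# A minimal prompt sanitization sketch. The limit and patterns below are
# hypothetical examples, not an exhaustive or recommended rule set.
import re

MAX_PROMPT_LENGTH = 2000
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"<\s*script", re.IGNORECASE),  # embedded markup/code
]

def sanitize_prompt(prompt: str) -> str:
    """Return a cleaned prompt, or raise ValueError if it looks malicious."""
    prompt = prompt.strip()[:MAX_PROMPT_LENGTH]  # cap the input length
    # Drop non-printable characters, which can hide injection payloads.
    prompt = "".join(ch for ch in prompt if ch.isprintable() or ch == "\n")
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt rejected by sanitization rules")
    return prompt

print(sanitize_prompt("Summarize bill 1234 for a citizen audience."))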
Security management:
Good practices
About this sub-guideline
This sub-guideline is part of the guideline Security management. It should be read in
conjunction with the sub-guideline Security management: Threats. Refer to the main
guideline for context and an overview.
Countermeasures are set out below for the following types of attacks:
• Adversarial attacks
• Evasion attacks
• Transfer attacks
• Data poisoning attacks
• Model inversion attacks
• Membership inference attacks
• Distributed denial of service (DDoS) attacks
• Data manipulation attacks
• Misuse of AI assistants
Adversarial attacks
Countermeasures for adversarial attacks, especially those targeting image
recognition systems, include the following:
• Use trickier examples in the training phase. For instance, show the AI model
lots of slightly altered images so that it learns not to be fooled by them.
• Add a little randomness (technically known as “noise”) to the images used in
the training data set. That way, the model will learn to focus on the important
parts of the image, not just on the small details that can be easily changed
(see the sketch after this list).
• Use stronger models: design the model so that it looks at the big picture (like
a person’s overall shape) rather than just focusing on small details.
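The following minimal sketch illustrates the second countermeasure: augmenting the training data with noisy copies of the images. The array shapes, noise scale and function name are assumptions made for this example.

# Noise-based training augmentation, assuming images are NumPy arrays with
# pixel values normalized to [0, 1]. All parameters here are illustrative.
import numpy as np

def augment_with_noise(images: np.ndarray, noise_scale: float = 0.05,
                       copies: int = 3, seed: int = 42) -> np.ndarray:
    """Return the original images plus `copies` noisy versions of each."""
    rng = np.random.default_rng(seed)
    batches = [images]
    for _ in range(copies):
        noise = rng.normal(loc=0.0, scale=noise_scale, size=images.shape)
        # Keep pixel values in the valid [0, 1] range after perturbation.
        batches.append(np.clip(images + noise, 0.0, 1.0))
    return np.concatenate(batches, axis=0)

# Usage: 100 greyscale images of 28x28 pixels
train_images = np.random.default_rng(0).random((100, 28, 28))
augmented = augment_with_noise(train_images)
print(augmented.shape)  # (400, 28, 28): originals plus three noisy copies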
Evasion attacks
Countermeasures for evasion attacks include the following:
• Choose strong models that are less likely to be fooled by slightly altered
inputs.
• Check or validate the input data system to ensure that it is clean and as
expected. This can help to catch any abnormal or malicious inputs before they
cause harm.
• Train the model using examples of these tricky inputs so that it learns to
recognize and handle them properly.
• Regularly monitor how the model performs and update it to handle new types
of attacks as they are discovered.
• If possible, and provided that the benefits outweigh the increased cost, use
multiple models in combination so that if one model is fooled, the other
models can still catch the problem.
Transfer attacks
Countermeasures for transfer attacks include the following:
• Train the model on a wide variety of data. This reduces the chances that an
attack crafted on another model will work on the models that parliament is
using.
• As with evasion attacks, use multiple models to make decisions – provided
that the benefits outweigh the increased cost.
• During training, expose the model to adversarial examples (small, intentionally
crafted changes in input designed to fool the model). This helps the model
learn to recognize and defend against such attacks.
• If feasible, frequently update and retrain the model with new data. This can
help to close any vulnerabilities that might be exploited in transfer attacks.
• Use techniques designed to make the model more resistant to attacks, such
as smoothing or noise injection during training.
Data poisoning attacks
Countermeasures for data poisoning attacks include the following:
• Take care over who has access (physical or logical) to the training data set,
enforcing robust user permissions.
• Carefully check the data before using it to train the model, including ensuring
that the labels and data make sense. Proper sanitization is important for
getting rid of data that may negatively impact the learning process.
• Evaluate the machine learning algorithms and check if they are designed to
be less sensitive to corrupted data.
• Monitor the model’s performance after deployment in order to detect any
unusual behaviour that might suggest it was trained on poisoned data.
Distributed denial of service (DDoS) attacks
Countermeasures for DDoS attacks include the following:
• Use a content delivery network (CDN) to distribute the service across servers
in different locations.
• Install a web application firewall to detect and block malicious traffic, including
a DDoS attack, before it reaches the servers running the AI system.
• Increase server capacity. While this will not solve the problem itself, it will
make it more difficult for the attacker to crash the entire system.
• Use externally sourced DDoS protection services to detect and mitigate DDoS
attacks. These services can automatically identify and block malicious traffic,
keeping a system running smoothly.
• Limit the number of requests a single user can make within a given time
frame (a minimal rate-limiting sketch follows).
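The following minimal sketch shows one way to implement this last countermeasure, using a token-bucket rate limiter. The rate and burst capacity are hypothetical values.

# A token-bucket rate limiter. One bucket would be kept per user; the rate
# and capacity values below are illustrative assumptions.
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow_request(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=10)
if bucket.allow_request():
    pass  # forward the request to the AI system
else:
    pass  # reject, e.g. with an HTTP 429 "Too Many Requests" response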
Security management:
Implementing
cybersecurity controls
About this sub-guideline
This sub-guideline is part of the guideline Security management. Refer to the main
guideline for context and an overview.
This sub-guideline covers four areas of cybersecurity controls:
• Technical controls
• Organizational controls
• Human controls
• Physical controls
Together, measures across these four areas – which are discussed in turn below –
enable parliaments to enhance the protection of their AI systems.
Technical controls
Technical controls are measures and processes designed to protect AI systems,
data and algorithms from unauthorized access, tampering and exploitation.
Network security
• Use firewalls to segment the network into different zones based on security
requirements, implementing strict access controls between zones.
• Consider deploying intrusion prevention systems (IPS) to detect and block
malicious activities in real time.
• Ensure that all communication channels – including data transfers, model
updates and application programming interface (API) calls – are encrypted in
order to protect data in transit.
• Use robust encryption protocols such as transport layer security (TLS) and
secure sockets layer (SSL).
• Use virtual private networks (VPNs) to secure remote access to AI systems
and data.
System security
• Regularly update and patch all software, operating systems and AI algorithms
to protect against known vulnerabilities.
• Be careful to test patches in a controlled environment before deploying them
to production systems to ensure they do not introduce new vulnerabilities or
cause system instability.
• Deploy robust and regularly updated antivirus and anti-malware solutions on
all endpoints, including servers, workstations and mobile devices.
• Enable real-time protection features to detect and block malware and other
threats as they occur.
Data security
• Make sure training data sets are reliable and keep these data sets secure, as
they are one of the most important assets of the AI system.
• Use data from reliable and verified sources to ensure the authenticity and
accuracy of the information.
• If using third-party data, ensure that the data provider has undergone rigorous
security audits.
• Remove personally identifiable information from data sets to ensure privacy.
• If this is not possible, replace sensitive data with pseudonyms that can be
traced back to the original data only through secure means (a minimal
pseudonymization sketch follows this list).
• Encrypt data stored in databases, and in cloud-storage and backup systems,
using strong encryption algorithms.
• Use encryption protocols such as TLS or SSL to protect data when it is
transmitted between systems or users.
• Pre-process the data to apply sanitization using a variety of methods such as
data anonymization, pseudonymization and data masking (for further
discussion of this subject, see the guideline Data management).
• Where necessary, establish data-sharing agreements and protocols with
trusted partners (such as other parliaments) to ensure the integrity and
security of shared data sets, and use secure communication channels when
sharing data or collaborating with external parties.
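As an illustration of the pseudonymization point above, the following minimal sketch derives stable pseudonyms with a keyed hash (HMAC). The field names are hypothetical, and in practice the secret key would be held in a secure vault rather than in code.

# Keyed pseudonymization with HMAC. The key below is a placeholder; a real
# deployment would load it from a secure vault. Re-identification requires
# a separately secured mapping table, not the pseudonym alone.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-a-secure-vault"

def pseudonymize(value: str) -> str:
    """Map `value` to a stable pseudonym: the same input always yields the
    same output, so records can still be joined across data sets."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Citizen", "national_id": "123-45-6789"}
record["national_id"] = pseudonymize(record["national_id"])
record.pop("name")  # direct identifiers are removed rather than pseudonymized
print(record)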
Application security
• Implement systems development best practices to resolve known
vulnerabilities and be ready for unknown ones (for further discussion of this
subject, see the guideline Systems development).
Organizational controls
Organizational controls focus on internal policies, procedures and practices.
Incident response
• Establish well-defined procedures at a time when the system is not under any
real threat. That way, the team can think, discuss and come up with a
response plan that is not rushed by the imminent danger.
Human controls
Humans are one of the weakest links in the chain of an AI system, or indeed of any
system. Human controls focus on managing this risk through a range of different
measures and procedures.
Access management
• Develop a strict role-based access model, implementing the principle of least
privilege (PoLP) in order to minimize the risk of unauthorized access and data
breaches (a minimal sketch follows).
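A minimal sketch of a role-based access check implementing least privilege is shown below. The role names and permissions are hypothetical examples.

# Role-based access control with least privilege: a permission is granted
# only if explicitly assigned to the role. Roles below are illustrative.
ROLE_PERMISSIONS = {
    "data_scientist": {"read_training_data", "run_experiments"},
    "ml_engineer": {"read_training_data", "deploy_model"},
    "auditor": {"read_audit_logs"},
}

def is_allowed(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())

# Usage: an auditor can read audit logs but cannot deploy models.
assert is_allowed("auditor", "read_audit_logs")
assert not is_allowed("auditor", "deploy_model")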
Accountability
• Monitor security incidents and suspicious activities, and implement clear
channels for reporting such incidents and activities (for further discussion of
this subject, see the guidelines Ethical principles and Systems development).
Physical controls
Physical controls focus on protecting physical assets and infrastructure that support
AI systems from unauthorized access, damage or interference.
Facility security
• Ensure that only authorized people have physical access to the AI system.
• Implement at least two access-control mechanisms for the computer room,
and apply proper visitor management to sensitive areas.
Environmental controls
• Ensure that facilities hosting IT hardware and staff are protected against fire.
• Install fire detection systems and have an evacuation plan in place.
• If possible, use a climate control system to keep all computers at appropriate
temperature and humidity levels.
Risk management
Audience
This guideline is intended for senior parliamentary managers and parliamentary staff
involved in the development and implementation of AI-based systems. It also
provides a more detailed technical discussion for those involved in developing and
implementing AI-related projects.
Managing the risks associated with AI is essential to ensure that AI systems are
safe, ethical, fair, private, trustworthy, transparent and compliant with regulations, as
well as to ensure respect for human autonomy and intellectual property rights. In
summary, effective AI risk management practices help parliaments to achieve all of
these aims.
The AI risk management process comprises the following stages, which are
discussed below:
• AI risk assessment
• AI risk analysis
• AI risk treatment
• AI risk communication
• AI risk monitoring and review
AI risk assessment
This assessment exercise will typically be carried out using questionnaires for key
stakeholders – which, in itself, can be a useful mitigation method because it exposes
potential risks and increases awareness of them.
The AI system life cycle starts with the presentation of a proposal for an AI project
to the relevant governance body (council, committee or unit), along with a completed
AI risk assessment questionnaire (Q1), which is used to gather information about the
project’s purpose, stakeholders, compliance, data, agreements, potential biases and
other factors.
Governance staff use the responses in the questionnaire to estimate the project’s
risks and benefits, and to determine, on that basis, whether authorization should be
given to add the AI project to parliament’s portfolio.
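As a purely illustrative sketch of how questionnaire responses might be converted into a risk score, the following code applies hypothetical weights to a handful of invented Q1 questions. Real questions, weights and thresholds would be defined by parliament's governance body.

# Converting risk questionnaire answers (0 = no concern, 3 = high concern)
# into a 0-100 score. Questions and weights are hypothetical examples.
WEIGHTS = {
    "processes_personal_data": 3.0,
    "affects_legislative_decisions": 3.0,
    "uses_external_provider": 2.0,
    "training_data_quality_unknown": 2.0,
    "no_human_review_of_outputs": 3.0,
}

def risk_score(answers: dict) -> float:
    """Weighted average of answers, normalized to a 0-100 scale."""
    total_weight = sum(WEIGHTS.values())
    weighted = sum(WEIGHTS[q] * answers.get(q, 0) for q in WEIGHTS)
    return round(100 * weighted / (3 * total_weight), 1)

answers = {
    "processes_personal_data": 3,
    "affects_legislative_decisions": 1,
    "uses_external_provider": 2,
    "training_data_quality_unknown": 0,
    "no_human_review_of_outputs": 1,
}
print(risk_score(answers))  # 48.7 - compared against parliament's risk appetite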
After the AI system has been deployed, it is necessary to keep monitoring the
system’s behaviour as well as changes in the variables considered in the AI system
life cycle, such as data characteristics, business rules and social considerations.
The third risk assessment questionnaire (Q3) can be used during this phase.
Similarly to the Q2 questionnaire, the risk score resulting from this third
questionnaire will inform the risk management process, which at this point aims to
reduce and mitigate AI risks in order to ensure that the system remains trustworthy.
AI risk analysis
Once the risks have been identified and assessed, the next step is to analyse these
risks in light of parliament’s AI policy and its risk appetite – often based on its
regulatory requirements and strategic objectives – in order to determine which risk(s)
require(s) treatment. All identified risks should then be ranked in order to identify
which require immediate attention and which should be monitored over time. All such
decisions should be made with the close involvement of relevant stakeholders.
Trade-offs between different risk treatment options also need to be evaluated at this
stage. For instance, eliminating one identified risk might also eliminate the option of
using the AI system in another, important way, putting other strategic goals at risk.
AI risk treatment
Once the identified risks have been analysed and prioritized, parliament should
develop and implement plans to manage them, possibly using one or more of the
following strategies:
● Avoid: Eliminate the activity that gives rise to the risk. For example,
parliament may decide not to implement an AI system, or even to abandon an
AI project, if the associated risks are deemed too high.
● Reduce: Take steps to reduce the likelihood of the risk occurring, or to
mitigate its impact if it does occur.
● Transfer: Transfer the risk to a third party, such as through insurance or by
outsourcing certain services to a company better equipped to manage the
risk.
● Accept: Accept the risk without taking any action to alter its likelihood or
impact. This is typically done when the cost of mitigating the risk exceeds the
potential damage, and when the risk is considered low enough to be
acceptable.
AI risk communication
Identified AI risks and associated management measures should be communicated
to relevant stakeholders throughout the AI system’s life cycle. During the
development phase, project managers and project office staff will provide regular
updates on risk status and treatment effectiveness as part of their usual remit.
Communication is equally important in the operational phase.
AI risk monitoring and review
During the operational phase, it is essential to continuously monitor and review AI
risks.
Periodic audits and reviews should also be conducted to ensure compliance with AI
policy and regulations. All identified incidents and near-misses should be analysed in
order to identify root causes and improve risk management practices, with lessons
learned documented and policies updated accordingly.
Transparency
• How are the planning, modelling, evaluation, testing and deployment phases
scrutinized?
• Does the documentation use appropriate language for the target audience(s)?
• How do the documents demonstrate that the model addresses business
requirements?
• How do the documents demonstrate that the AI system is sufficiently
accurate?
• Is there any direct interaction between human end users and the AI system?
Are users explicitly informed that they are interacting with an AI system?
Safety and robustness
• Are there any weaknesses in the defined model?
• Are there any weaknesses in the testing phase?
• Is the AI system robust to potential failures and security attacks?
• Is there a deployment plan?
• Is there a rollback or disaster recovery strategy in place?
Systems development
Audience
This guideline is intended for IT managers and staff, software engineers, data
scientists and technical project managers involved in designing, deploying and
maintaining AI systems in parliaments.
Ensuring transparency
Development processes and documentation should make it possible to explain the
AI system's outcomes and ensure that the development phases respect
parliament's rules and are compliant with regulations.
Reducing biases and discrimination
Techniques to identify groups that are to be protected from biases should be applied
throughout the process, from planning to deployment. While AI systems are in
operation, continuous monitoring helps to minimize new biases not seen in the
development phase.
Creating accountability
Through systematic steps for planning, implementing, testing and improving,
practices are delegated and approved by key stakeholders, with individual roles and
responsibilities clearly defined. The AI system’s functionality should be documented
in such a way that it can be audited.
Improving robustness and safety
The systems development process should focus on improving robustness and
safety, through a system architecture that prioritizes cybersecurity and through
extensive testing.
Maintaining human autonomy
Humans should play a continuous verification role in order to ensure that the AI
system’s outputs are reliable, both during development and following deployment in
a live environment. This human oversight will ensure that the system continues to
adhere to the ethical principles considered in the project phase, and allows for new
ethical risks to be identified.
Guaranteeing regulatory compliance
AI systems are required to comply with various legal and regulatory requirements,
established both internally and within parliament’s country or region.
For further discussion of systems life cycle and development frameworks, refer to the
sub-guideline Systems development: Systems life cycle and development
frameworks.
Systems development:
Systems life cycle and
development frameworks
About this sub-guideline
This sub-guideline is part of the guideline Systems development. Refer to the main
guideline for context and an overview.
This sub-guideline focuses on the systems life cycle and development frameworks
for AI-based systems in parliamentary contexts. It provides an overview of the AI
systems life cycle, highlighting its importance in ensuring structured and responsible
AI development.
This sub-guideline outlines the benefits of adopting a systematic life cycle approach.
It also offers guidance on evaluating external AI development frameworks, covering
aspects such as ease of use, community support, performance, model support and
deployment readiness (a simple scoring sketch follows the criteria below).
Specifically, adopting an AI systems life cycle approach offers a range of benefits.
When evaluating external AI development frameworks, parliaments can consider
the following criteria:
Ease of use
• Documentation: Quality, clarity and comprehensiveness of the documentation
• Learning curve: How easy it is to start using the framework, including the
availability of tutorials and community support
• API design: Simplicity and intuitiveness of the API
Community and support
• Community size: The number of users and developers contributing to the
framework
• Support: The availability of forums, user groups and other support channels
• Updates: The frequency of updates, how many known issues and
vulnerabilities exist, and how actively the framework is maintained and
improved
Performance
• Speed: How quickly models can be trained and how quickly they respond at
inference time
• Scalability: The ability to manage large data sets and complex models, and
support for distributed training
• Optimization: Built-in features for optimizing and tuning model performance
and resource usage
Model support
• Model variety: The range of supported model types (neural networks, decision
trees, etc.)
• Pre-trained models: The availability and variety of pre-trained models that can
be fine-tuned or used out of the box
• Customization: Flexibility in defining and experimenting with custom models
and architectures
Tooling and integration
• Ecosystem: The availability of complementary tools for data preprocessing,
visualization and deployment
• Compatibility: Integration with data-handling libraries, visualization tools,
deployment platforms, etc.
• Interoperability: Support for importing/exporting models between different
frameworks
Deployment and production readiness
• Deployment options: Ease of deploying models to different environments
(cloud, edge, mobile)
• Serving: Support for model serving and inference in production settings
• Monitoring: Tools for monitoring model performance and detecting issues in
production
• Data protection regulations: Assurance that data classification, retention and
residency rules are followed
Licensing and cost
• Open-source versus proprietary: Whether the framework is open-source or
commercial
• Licensing terms: Any restrictions or requirements imposed by the licence
• Cost: The potential costs associated with using the framework, especially for
proprietary options
Hardware support
• GPU/TPU support: Compatibility with various hardware accelerators
• Distributed computing: Support for running on multiple GPUs or across a
cluster of machines
Extensibility
• Plugins: The availability of plugins or extensions for added functionality
• APIs for custom extensions: The ability to write custom extensions or
integrate third-party tools
Scalability
• Scaling up and out: Support for horizontal and vertical scaling (manual or
automatic)
• Performance: Load testing and simulations to measure the performance of the
framework
• Cost: The ability to set costing limits in the event that the system needs to
scale
Reproducibility
• Versioning: Tools for model and data versioning to ensure the reproducibility
of outcomes
• Experiment management: Support for tracking experiments and managing
their results
Security
• Security features: Built-in security features for safe deployment and model
usage
• Compliance: Compliance with industry standards and regulations
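One simple way to bring these criteria together is a weighted scoring matrix. The following sketch is purely illustrative: the criteria grouping, weights and scores are hypothetical, and each parliament would substitute its own values from its evaluation exercise.

# Weighted scoring of candidate frameworks. Weights (importance) and scores
# (1 = poor, 5 = excellent) are hypothetical examples.
WEIGHTS = {
    "ease_of_use": 2, "community_support": 2, "performance": 3,
    "model_support": 2, "deployment_readiness": 3, "security": 3,
}

CANDIDATES = {
    "framework_a": {"ease_of_use": 4, "community_support": 5, "performance": 4,
                    "model_support": 5, "deployment_readiness": 3, "security": 4},
    "framework_b": {"ease_of_use": 3, "community_support": 3, "performance": 5,
                    "model_support": 4, "deployment_readiness": 5, "security": 5},
}

def weighted_total(scores: dict) -> int:
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

# Prints framework_b (65) ahead of framework_a (61) under these weights.
for name, scores in sorted(CANDIDATES.items(),
                           key=lambda item: weighted_total(item[1]),
                           reverse=True):
    print(name, weighted_total(scores))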
Systems development:
Deployment and
implementation
About this sub-guideline
This sub-guideline is part of the guideline Systems development. It can be read in
conjunction with the sub-guideline Systems development: Deployment patterns.
Refer to the main guideline for context and an overview.
Deployment strategy
A structured and coordinated approach to AI systems deployment is essential to the
effective integration of these systems into various parliamentary applications and
workflows. These patterns of deployment reinforce good practices, helping to ensure
that AI deployments are scalable, robust and maintainable. For further discussion of
software deployment patterns, refer to the sub-guideline Systems development:
Deployment patterns.
Parliament’s deployment strategy will depend on the degree of task automation. The
various options are discussed below:
In the above deployment cases, parliament should consider the following two basic
aspects:
Systems development:
Deployment patterns
About this sub-guideline
This sub-guideline is part of the guideline Systems development. It can be read in
conjunction with the sub-guideline Systems development: Deployment and
implementation. Refer to the main guideline for context and an overview.
Background
The choice of technology and infrastructure significantly influences the robustness,
security, scalability, performance and ease of supervision of AI systems. Together,
these factors contribute to the overall reliability of such systems.
Different deployment patterns come with their own set of trade-offs. Some may incur
higher costs, while others might require more extensive management resources. As
a result, there is no one-size-fits-all “best approach” to AI system deployment.
Instead, the key to successful deployment lies in carefully assessing parliament’s
needs, resources and goals, and then selecting an approach that offers the best
balance of features and practicality for parliament’s particular situation.
Request-handling patterns:
Hosting patterns:
• On-premises: Models are deployed on local servers, often for the purpose
of enhanced security or to meet specific compliance requirements.
• Cloud: Models are hosted on cloud platforms, offering benefits such as
scalability, flexibility and reduced infrastructure management.
• Edge: Models are deployed on edge devices, providing low-latency
predictions and offline capabilities, making this approach suitable for
Internet of Things (IoT) and mobile applications.
• Hybrid: This approach combines on-premises, cloud and edge
deployments to optimize performance and resource usage based on
specific needs.
Scalability
It is important to understand the average number of requests the AI system will
receive, along with its life cycle. These factors will determine the deployment's
scalability characteristics (a worked sizing example follows these definitions):
• Latency refers to the time it takes for the AI model to respond to a request,
which is particularly crucial for real-time applications.
• Throughput measures the number of requests the AI model can process
per unit of time, which is essential for high-volume applications.
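A back-of-the-envelope calculation can link these two measures when sizing a deployment, as in the following sketch. The latency and traffic figures are hypothetical.

# Rough capacity sizing from assumed latency and peak traffic figures.
import math

latency_s = 0.2        # assumed: each request occupies a worker for 200 ms
peak_rps = 50          # assumed: 50 requests per second at peak
per_worker_rps = 1 / latency_s            # one worker handles 5 requests/s
workers = math.ceil(peak_rps / per_worker_rps)
print(workers)  # 10 workers needed, before adding headroom for spikes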
Good practices for deploying and operating AI models include the following:
• Ensure best fit: Select a deployment pattern according to the specific use
case, performance requirements and domain constraints.
• Monitor and iterate: Continuously monitor deployed models and iterate
based on user feedback and performance metrics.
• Maintain security: Implement robust security practices to protect models
and data in production environments.
• Optimize resources: Efficiently manage resources to balance
performance and cost, leveraging approaches such as containerization
and serverless architectures where appropriate.
Appendices
Glossary of terms
Accountability – The principle that ensures clear responsibility can be assigned for all decisions and actions throughout an AI system's lifecycle, from planning to decommissioning
Affinity bias – When someone prefers individuals who are similar to them in terms of ideology, attitudes, appearance, or religion
Agile – A project management and development approach that emphasizes flexibility, iterative progress, and collaboration
AI governance – The framework of policies, structures, and processes created to maximize the benefits of AI while minimizing its risks
AI literacy – The ability to understand, critically evaluate, and effectively interact with AI technologies, including knowledge of AI's capabilities, limitations, and potential impacts
AI PPM (AI Project Portfolio Management) – The centralized management of an organization's AI initiatives to meet strategic objectives by optimizing resource allocation, balancing risks, and maximizing value
Algorithm – A set of rules or instructions given to an AI system to help it learn, make decisions, and solve problems
Amplification bias – Occurs when several AI systems, each with separate biases, interact and mutually reinforce each other's biases
API (Application Programming Interface) – A set of rules and protocols that allows different software applications to communicate with each other
Automation bias – When conclusions drawn from algorithms are valued more highly than human analyses
Cloud storage – The practice of storing data and applications on remote servers accessed via the internet, rather than on local computers
Coverage bias – A form of sampling bias that occurs when a selected population does not match the intended population
SVG (Scalable Vector Graphics) – A web-friendly vector image format that can scale without losing quality
Temporal bias – When training data becomes outdated and no longer represents current realities
Traceability – The ability to follow and monitor the entire lifecycle of an AI system
Training data – The initial data set used to teach an AI system to perform its intended function
Transparency – The communication of appropriate information about AI systems in an understandable and accessible format
Use case – A specific situation or scenario where an AI system or application could be used
XAI (eXplainable AI) – AI systems designed to be interpretable and understandable by humans
This publication has been produced with the financial support of the European Union (EU), in
partnership with the International Institute for Democracy and Electoral Assistance (International
IDEA), as part of INTER PARES–Parliaments in Partnership, the EU’s Global Project to
Strengthen the Capacity of Parliaments.
The designations employed and the presentation of material in this information product do not
imply the expression of any opinion whatsoever on the part of the Inter-Parliamentary Union
(IPU) or the EU concerning the legal or development status of any country, territory, city or area,
or of its authorities, or concerning the delimitation of its frontiers or boundaries.
The mention of specific companies or products of manufacturers, whether or not these have
been patented, does not imply that these have been endorsed or recommended by IPU or the
EU in preference to others of a similar nature that are not mentioned.
All reasonable precautions have been taken by the IPU to verify the information contained in this
publication. However, the published material is distributed without warranty of any kind, either
expressed or implied. Responsibility for the interpretation and use of the material lies with the
reader. In no event shall the IPU or the EU be liable for damages arising from its use.