Othp 90
Management
Governance of AI adoption in
central banks
January 2025
© Bank for International Settlements 2025. All rights reserved. Brief excerpts may be reproduced or
translated provided the source is stated.
Foreword
1 Use of AI
1.1 AI benefits for central banks and use cases
4 Governance
4.1 Current industry frameworks
4.2 Proposed actions for AI governance at central banks
5 Conclusions
References
Artificial intelligence (AI) presents huge opportunities for central banks. At the same time, its adoption
entails complex risk management challenges. The use cases for AI span a broad range of critical functions
of a central bank including data analysis, research, economic forecasting, payments, supervision and
banknote production. The adoption of AI presents new risks and can amplify existing ones. The potential
risks are wide-ranging and include those around data security and confidentiality, risks inherent to AI
models (eg “hallucinations”) and, importantly, reputational risks. The potential risk exposure for central
banks can be significant, owing to the criticality and sensitivity of the data they handle as well as their
central role in financial markets.
This report on the governance of AI adoption in central banks provides guidance on the
implementation of AI at central banks and proposes a governance and risk management framework. A
comprehensive risk management strategy can leverage existing risk management models and processes,
in particular the well established three lines of defence model. In incorporating the specific issues around
AI and its use cases, risk managers at central banks can make use of the frameworks proposed by a number
of international bodies. A good governance framework is key for adopting AI. The report proposes an
adaptive governance framework and recommends ten practical actions that central banks may want to
undertake as part of their journey in adopting AI.
The report is the outcome of work conducted by Bank for International Settlements (BIS) member
central banks in the Americas within the Consultative Group on Risk Management (CGRM), which brings
together representatives of the central banks of Brazil, Canada, Chile, Colombia, Mexico, Peru and the
United States. The Artificial Intelligence Task Force that prepared this report was co-led by Alejandro de
los Santos from the Bank of Mexico and Angela O'Connor from the BIS. The BIS Americas Office acted
as the secretariat.
The adoption of artificial intelligence (AI) technologies in the financial sector may usher in a transformative
era for financial services, offering unprecedented opportunities for innovation, efficiency and customer
focus. As with any groundbreaking new technology, AI also introduces complex risk management
challenges for banks and central banks. Risks from the use of AI include but are not limited to operational,
information security, privacy and cyber security risks; information and communication technology (ICT)
risks; third-party risks such as external dependency; and risks inherent in AI models (eg “hallucinations”).
The materialisation of these risks can have significant reputational and financial impacts.
The identification and management of such risks can be complex owing to their variety and
interactions between different types. The analysis and management of these risks should therefore be
approached with a holistic view that incorporates a wide array of expertise to consider the full range of
risks and complex interactions.
The Bank for International Settlements Consultative Group on Risk Management (CGRM) set up
a task force to provide guidance to central banks on how AI is being used in different functions, and how
to organise and govern risk management related to AI adoption. This report provides specific suggestions
on how central banks can identify, analyse, report and manage risk associated with the adoption of AI
models and tools in their organisations. The suggestions are based on risk scenarios and risk management
practices developed in the central banking community and the financial sector at large, such as the three
lines of defence model. The report does not focus on the impact of AI on financial stability or the broader
economy, which are covered by other forums.
The report argues that central banks need to find a balance between fostering innovation using
AI and mitigating the different risks that this technology may generate. Good governance schemes for the
adoption of AI in the organisation, with a holistic view beyond technology, might help to achieve such a
balance.
The report is organised as follows. Section 1 begins with a brief overview of AI models, highlights
use cases from central bank publications and complements the discussion with answers provided by
members of the CGRM task force to a questionnaire on AI usage. Some of the benefits and uses identified
by the group include the automation of processes, analysis of large data sets and solving complex
problems. Central banks adopt AI to enhance efficiency, improve operational robustness and inform
decision-making in different areas of the organisation. This includes core functions such as economic
forecasting, payments, supervision and banknote production. Central banks are also exploring the use of
AI to provide customer and corporate services, for instance by using chatbots to answer enquiries from
regulated entities or assist their own researchers. These applications demonstrate the potential of AI to
address complex challenges and support central banking operations.
Section 2 describes, from a holistic perspective, risks related to the adoption of AI for central
banks, covering both new risks raised by AI and existing risks that are amplified by its use. The section
introduces a taxonomy covering strategic; operational (legal, compliance, process, people and capacity);
information security and cyber security; ICT; third party; AI model; environmental, ethical and social; and
reputational risks. It also highlights some considerations about generative artificial intelligence (GAI) and
the unintended consequences of its adoption, which can result in significant risk exposure for central
banks, if the associated risks are not well managed.
Section 3 provides some guidance for the implementation of AI models in central banks. The
report suggests a comprehensive risk management strategy that leverages existing models and processes.
It suggests updating the three lines of defence model with some considerations specific to the use of AI.
The section also suggests a process that central banks can employ to identify and analyse use cases, initiatives and projects and to classify them by risk level.
1 Use of AI
The concept of artificial intelligence (AI) can be broad and mean different things in different contexts. A
number of different sources were considered as part of the process of arriving at a common definition of
AI for this report (Appendix 1). The definition adopted here is based on the widely used definition from
the Organisation for Economic Co-operation and Development (OECD), according to which:
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the
input it receives, how to generate outputs such as predictions, content, recommendations, or
decisions that can influence physical or virtual environments. Different AI systems vary in their
levels of autonomy and adaptiveness after deployment.”1
AI applications rely on different models, which perform operations based on rules, code,
knowledge and data (inputs) to learn and generate the required outputs. According to the definition of AI
above, the outputs can be predictions, content, recommendations or decisions. Some of these models are
1 OECD (2024a).
Box A
AI models
Machine learning (ML) is a subfield of artificial intelligence (AI). This technique gives systems the ability to “learn”
from the input data without being explicitly programmed to do so (ISO/IEC (2022)). ML models are widely used to
identify patterns that are not otherwise readily evident, mainly through statistical learning algorithms. ML models are
capable of improving their performance as they process more data (Sharma et al (2021)).
Deep learning is a subfield of ML. Deep learning models learn from large and complex inputs, using neural networks
with multiple neurons and hidden layers (Amazon AWS (2024)). This architecture allows deep learning models to be
more autonomous, by automatically identifying the most relevant features from the data. This allows for deep learning
models to be generalised, meaning that they can be used with different types of inputs or in different applications
(Sharma et al (2021)).
Generative AI (GAI) models are capable of generating original content in response to user-supplied inputs called
prompts. The outputs of these models can be text, images, audio, video or code. General awareness of AI increased
with the rise of GAI technologies in 2022, beginning with image generation and later with chat-based AI interfaces.
Natural language processing (NLP) models enable machines to understand natural language (linguistics) and
identify meaning and context. At the same time, these models can also generate natural text, such as phrases,
sentences or paragraphs with meaning (Khurana et al (2023)).
Large language models (LLMs) are a type of NLP model trained on a very large number of parameters or quantity of data
using deep learning. Generative LLMs understand how languages work based on very large volumes of text input and
can themselves generate text based on this training (Chiarello et al (2024)).
Generative pre-trained transformers (GPTs) are a type of LLM that are able to consider the context and relationships
between words in entire sentences. They are capable of executing a wide range of language-related tasks (Yenduri et
al (2024)).
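To make the model categories in Box A concrete, the following minimal sketch (illustrative only; it uses the open-source scikit-learn library and synthetic data, not an actual central bank application) shows the basic ML workflow: the model is never given an explicit rule, but infers a pattern from labelled examples.

```python
# Minimal sketch of the ML workflow described in Box A: the model is not
# given explicit rules; it infers a decision boundary from labelled examples.
# Illustrative only -- synthetic data, not a central bank use case.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=42)

# Synthetic inputs: two numeric features per observation.
X = rng.normal(size=(1000, 2))
# Synthetic labels follow a pattern the model must discover on its own.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # "learning" from the input data

print(f"Accuracy on unseen data: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```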
Central banks have been exploring AI applications, developing proof of concept solutions and, in some
cases, deploying AI applications.
AI can create opportunities by supporting human activities with the following benefits:
• Automation of business processes, optimising the use of resources and time, improving the
efficiency of repetitive or highly manual tasks and thereby increasing productivity.
• Swift analysis of large volumes of data enabling improvements in decision-making.
• Execution of processes that require the involvement of many people, allowing employees to
undertake other productive tasks.
2 OECD (2024b).
While there are significant benefits to the use of AI, its use also entails risks that must be identified,
analysed and mitigated. The following subsections outline the main risks associated with the adoption of
AI, provide examples of how to classify AI risks and propose methods for mitigating these risks.
The use of AI can introduce new risks and vulnerabilities and amplify existing risks. Like other emerging
technologies that access and process large volumes of data, AI carries a risk for users. The level of risk will
be determined by the access granted to information and data used by AI models and tools.
Below is a proposed classification of AI risks for central banks. Appendix 3 includes a summary
table of risks associated with the adoption of AI.
3 The General Data Protection Regulation (GDPR) is applicable to any company that processes the data of European Union (EU) citizens or the personal data of subjects who are in the EU or the European Economic Area.
4 Poor data quality includes missing historical, unlabelled, biased, incomplete, noisy, untimely, inaccurate or inadaptable data.
5 Overfitting occurs when too many variables are included in the algorithm and it fits its training data too closely or even exactly, resulting in a model that cannot make accurate predictions from any other data.
6 For instance, according to the literature on the ethical impact of data science, privacy is a concept that evolves alongside technological and social changes. In this regard, people currently perceive privacy differently in public spaces, such as parks and streets, compared with socially and legally designated private spaces, such as homes. Training technologies like ML, AI and GAI with data and information from multiple perspectives or “dimensions” can reduce this risk. See Mulligan et al (2016).
Differentiating between risks and other impacts is important when adopting AI models and tools. Risks
are identifiable challenges that can be anticipated and managed. In contrast, other considerations refer to
unexpected outcomes that may emerge due to the complex and dynamic nature of AI technologies.
Therefore, a comprehensive approach to AI adoption in central banking must encompass a thorough assessment of risks as well as the detection and incorporation of other considerations that are a natural part of using AI.
Examples of such other considerations include:
• Unplanned uses – GAI can help identify new issues in central banks’ processes that need to be
explored.
• Job impact – not all employees benefit from AI in their roles, nor is AI necessarily a solution to
addressing all business challenges.
• New activities – staff responsible for information, processes and technological tools may need
to interact given the new activities introduced by AI.
• Complexity challenges – the complexity of training or deploying AI models can be
underestimated, as can the effort needed to integrate new tools with legacy systems, especially
when the information categories of the systems differ. This can lead to delays rather than
potential productivity increases. Additionally, some processes will require specially trained or
personalised models, which may require significant resources for maintenance. Acquiring and
retaining more specialised skills can often be more difficult than continuing existing processes
that rely on analysts.
• Project management – AI tools are currently in constant evolution, which has made it difficult to precisely plan their implementation. The pressure to rapidly adopt AI tools may lead to rushed implementations, increasing the probability of errors or vulnerabilities.
• Impact on technological infrastructure – the widespread use of AI tools within central banks
could affect their technological infrastructure performance. AI tools, particularly GAI, require a
vast amount of energy and a lot of computing and storage resources for training and responding
to user queries.
• Dependency on AI – if AI models produce accurate and useful outcomes, they could quickly
become key elements of central banks’ processes, generating a new dependency on this type of
tool and introducing new costs.
GPT – Derner and Batistič (2023) identified six main risks when using this technology: (i) generation of fraudulent
services; (ii) collection of harmful information; (iii) disclosure of private data; (iv) generation of malicious text; (v)
generation of malicious code; and (vi) production of offensive content. Computer systems are often designed without
explicit ethical standards and lack parameters to make value judgments.
Machine learning (ML) – as ML systems base decisions on algorithms, probabilities and data, the future actions
suggested by ML tools may lead to errors with impacts that are challenging to quantify (Babic et al (2021)). Due to
their evolutionary nature, ML systems can create discrepancies between the data they are initially trained on and the
data they subsequently process. The complexity of ML systems makes it challenging to identify the error or the logical
reason behind an unforeseen result when it occurs.
Generative artificial intelligence (GAI) – according to interviews with cyber security experts, GAI may enhance
organisations’ cyber defence capabilities worldwide but can also be used by attackers to develop more potent cyber
weapons that will be harder to defend against (Moody’s (2023)). Cyber security departments may lack sufficient data
to feed their GAI-based cyber defence systems as most organisations are reluctant to share technical details of cyber
attacks with other companies.
According to OECD (2023), risks or incidents resulting from the use of GAI grew exponentially between December
2022 and September 2024, reaching an unprecedented peak of 730 incidents in June 2024, as illustrated in Graph B.1.
[Graph B.1: Increase in the number of registered risks or incidents resulting from the use of GAI, March 2019–September 2024; vertical axis: number of incidents (0–800).]
Considering the above, as well as the increasingly widespread use of AI worldwide, some countries are evaluating the
need to regulate its usage. For example, in the European Union, the Artificial Intelligence Act came into force on 1
August 2024 and will be fully applicable from 1 August 2026. Appendix 4 of this report provides an overview of current
AI frameworks (European Parliament (2024a)).
When defining an AI risk management strategy, the most important consideration is that the
implementation of AI models and tools is supported by a comprehensive risk management model. This
strategy should: define the AI risk profile; identify, evaluate and select AI projects; leverage and
adapt governance management models that are already in place; and protect information through
the full life cycle.
1. Define the AI risk profile. Determine the central bank’s appetite for AI-related risks, taking into
account the intended strategic goals and outcomes for AI use. This can then guide risk
management decisions on AI development, management and use. This also helps to comply with
regulations and align usage with the central bank’s resilience objectives (CGRM (2023)). By its nature, the output from GAI is not precise, so it may be necessary to accept certain risks in favour of innovation with AI. Central banks should carefully consider whether the output of AI, and the means to manage the risks around it, are aligned with their risk appetite for innovation.
At the initial stages of AI adoption, it is often safer to use AI only in internal processes that are
not critical for central banking functions until the risks associated with such technology are
properly assessed and controlled.
2. Establish a process to identify, evaluate and select feasible AI projects. Establish a
multidisciplinary team responsible for selecting feasible AI projects that align with the strategic
and innovation objectives of the central bank. The team’s main objective is to evaluate the
suitability of adopting certain AI models and tools, and verify whether the proposed models and
tools align with the central bank’s risk appetite. This process should take place before other risk
management activities.
Such a process starts with the identification and analysis of AI use cases, initiatives and projects,
their classification by risk level depending on the sensitivity of the information used to train them
and the information accessed and generated by the model (Graph 1). The second step is to
determine the appropriate controls, striking a balance between risks and controls that is
determined by the risk profile of the institution. One factor to consider here is risk introduced by
the use of third-party information or solutions. Once it has been decided that a use case, initiative
or project is feasible to implement, the new controls should be designed, implemented and
tested, and the technological infrastructure adapted if needed. Finally, the solution should be
integrated into the risk mitigation mechanisms that are already in place and must be monitored
during its operation.
These approaches allow for more efficient allocation of resources and greater precision in terms of managing vulnerabilities. Box C explains an approach that central banks could follow to start addressing AI risks according to AI models’ intended use; a schematic sketch of such a triage record follows the box.
Box C
Classifying artificial intelligence (AI) risks based on use cases, initiatives and projects allows more flexibility to
focus on the level of risk of each use case (Amazon AWS (2023)). Central banks may employ third-party AI services or
applications for non-sensitive daily use. Such arrangements warrant specific considerations for each use case, initiative
or project in particular.
• Consumer application. The organisation consumes a public third-party AI service, either at no cost or paid.
The organisation does not own or know the training data or the model. Usage is through application
programming interfaces (APIs) or direct interaction with the application, according to the terms of service
of the provider.
• Enterprise application. The organisation uses a third-party enterprise application that has AI features
embedded within, and a business relationship is established between the organisation and the vendor.
The risks associated with the daily use of AI tools, such as consumer or enterprise applications, are generally
lower compared with the risks tied to newly developed AI projects. Daily use scenarios involve established third-party
services in which the organisation interacts with public or enterprise applications under defined terms of service. These
scenarios, while not free from risk, benefit from the inherent stability and predictability of using pre-existing AI models
and services. Organisations must remain vigilant about compliance and data privacy, but the risks are typically well
managed through robust vendor relationships and standardised security protocols.
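As a schematic illustration of the classification step described in step 2 and Box C, the sketch below encodes a use case triage record. The field names, sensitivity categories and scoring rule are hypothetical simplifications for illustration, not a prescribed methodology.

```python
# Illustrative sketch of a use-case triage record for the process in step 2.
# Field names, categories and the scoring rule are hypothetical examples.
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIUseCase:
    name: str
    training_data_sensitivity: Sensitivity
    accessed_data_sensitivity: Sensitivity
    uses_third_party_service: bool

    def risk_level(self) -> RiskLevel:
        # Simple illustrative rule: risk follows the most sensitive data
        # touched, raised one notch when a third-party service is involved.
        score = max(self.training_data_sensitivity.value,
                    self.accessed_data_sensitivity.value)
        if self.uses_third_party_service:
            score += 1
        if score <= 1:
            return RiskLevel.LOW
        return RiskLevel.MEDIUM if score == 2 else RiskLevel.HIGH

# Example: a chatbot answering public enquiries via an external provider.
chatbot = AIUseCase("enquiry chatbot", Sensitivity.PUBLIC, Sensitivity.PUBLIC, True)
print(chatbot.name, "->", chatbot.risk_level().value)  # -> medium
```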
3. Leverage and adapt governance management models already in place. Using and adapting
existing governance management models is important for both completeness and consistency.
For example, the three lines of defence model, which separates management and control
responsibilities into three distinct layers, should be adapted and used to clarify roles and
responsibilities in managing AI risks.
▪ First line roles: these are directly aligned with delivering products and/or services to internal
and external stakeholders, including support functions. They own and manage the risks related to AI outputs.
Functions and considerations for each line of defence:

First line
– Identify AI use cases, initiatives and projects that are aligned with the central bank’s strategy or add innovation value. Identify use cases that are valid and reliable with clear benefits.
– Identify and assess risks. Conduct risk assessments (identification, analysis and evaluation) that consider AI-specific threats and vulnerabilities, and determine whether use cases match the risk appetite.
– AI-specific technical controls. Implement new controls, based on strengths, weaknesses, opportunities and threats (SWOT) analysis, such as AI model robustness testing, training data and outcome validation, and bias detection techniques.
– Monitoring mechanisms. Set up continuous monitoring mechanisms to detect performance and security issues. Monitor data drift in AI models, where changes to the statistical distribution of the underlying data can potentially cause a decline in model performance (see the sketch after this table).
– Deliver specialised training. Ensure technical and risk management skills are regularly refreshed.

Second line
– Alignment of AI use with risk appetite and risk profile. Support the first line to determine if the use case, initiative or project is aligned with the central bank’s risk appetite and profile.
– Risk methodology and prioritisation. Develop the institutional risk management methodology, coordinate the execution of risk assessments made by the first line and prioritise risks based on the AI risk profile.
– AI-specific policies and guidelines. Update existing policies or develop new policies and guidelines to address AI-specific risks and promote the adoption of AI while ensuring ethical and secure use. Update data governance policies if necessary.
– Compliance and ethics. Supervise compliance with regulations and adherence to ethical principles. Analyse the legal framework according to the financial sector and central bank contexts.
– Specialised training and awareness. Design and provide AI-specific risk training materials and instil awareness.

Third line
– Technical and ethical audits. Perform audits that evaluate the security controls as well as the fairness, transparency and ethics of AI models.
– Continuous review. Provide ongoing reviews and recommendations for improving AI controls and policies, adapting to fast-evolving technology and associated risks.

Source: authors’ elaboration.
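As an illustration of the data drift monitoring mentioned in the first line row above, the following sketch (assuming the open-source SciPy library; the alert threshold is an arbitrary example, not a recommended value) compares the distribution of a feature in recent production data against the training data using a two-sample Kolmogorov–Smirnov test.

```python
# Illustrative data drift check for one numeric feature, as flagged in the
# first-line monitoring row above. Threshold and data are examples only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # distribution at training time
incoming_feature = rng.normal(loc=0.3, scale=1.0, size=1000)  # recent production data (shifted)

statistic, p_value = ks_2samp(training_feature, incoming_feature)

# A small p-value suggests the two samples come from different distributions,
# ie the statistical distribution of the underlying data may have drifted.
if p_value < 0.01:  # alert threshold: an assumption for this sketch
    print(f"Possible data drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```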
Information security and cyber security programmes that have been developed based on standards such as ISO/IEC 27001:2022 – Information security management9 or the National Institute of Standards and Technology (NIST) Cybersecurity framework10 will continue to be adequate for managing the risks posed by AI models. These frameworks are flexible and scalable, so they can be adapted to different industries and technologies. Nevertheless, they need to be modified to capture the specific AI-related risks described in Section 2. A key aspect of AI models is that the systems or tools that support them should be valid, safe and reliable. Cyber security is therefore crucial for constructing reliable AI systems: it directly contributes to the confidentiality of sensitive information, the mitigation of integrity and availability risks and adequate access management.
Beyond preventing attacks and mitigating vulnerabilities, several standards focus on data
governance and ensure that the results from AI are explainable and interpretable:
• NIST Artificial intelligence risk management framework (NIST AI RMF);11
7 See principle 10 of BCBS (2021a).
8 ISO (2018).
9 ISO and IEC (2022a).
10 NIST (2024).
11 NIST (2023).
The complexity of GAI models requires special attention to understanding the outputs of those models, including their potential biases, limitations and robustness, as well as the greater human supervision they will require. Some examples of specific controls and practices to address GAI risks are listed below.
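One example of such a control, sketched below as a minimal illustration (the regular expressions are assumed placeholder patterns, not an exhaustive or validated set), is screening prompts for confidential identifiers before they are sent to an external GAI service.

```python
# Illustrative sketch of one such control: redacting sensitive identifiers
# from a prompt before it is sent to an external GAI service. The patterns
# below are examples only, not an exhaustive or recommended set.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely confidential identifiers with placeholder tags."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Please summarise the complaint from jane.doe@example.org "
             "regarding account DE89370400440532013000."))
# -> Please summarise the complaint from [EMAIL REDACTED]
#    regarding account [IBAN REDACTED].
```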
12 ISO and IEC (2023a).
13 ISO and IEC (2022b).
14 NIST (2018).
15 NIST (2023) defines the following key characteristics of trustworthy AI systems: valid and reliable, safe, secure and resilient, explainable and interpretable, privacy enhanced, fair (with harmful bias managed), and accountable and transparent.
4 Governance
The inherent complexity and uncertainty associated with AI systems make their adoption challenging.
Effective governance16 mechanisms can help balance the risks and rewards of AI adoption in a consistent
risk-based manner. This is important because AI may affect business processes and decision-making across
the institution. AI governance is important not only for complying with national and international
strategies, laws or regulations (including multilateral agreements between countries), but also for ensuring
the alignment of any AI initiative with the organisation’s strategy. Effective governance mechanisms should
support efficiency and innovation in the organisation while effectively identifying and balancing associated
risks. That said, AI governance does not have to start from scratch but can be built on policies, procedures
and management tools that are already in place.
16 AI governance refers to the set of policies, rules, frameworks, principles and/or standards that guide organisations in their adoption or development of secure, responsible and ethical use of AI.
17 Eg the International Organization for Standardization (ISO), the Organisation for Economic Co-operation and Development (OECD) and the National Institute of Standards and Technology (NIST) of the US Department of Commerce.
The CGRM has identified the following actions as useful for governance of the adoption of AI in central
banks:
1. Establish an interdisciplinary AI committee
Prior to establishing specific AI governance, central banks should establish a dedicated AI
committee, which will serve as an oversight body to help guide the implementation of
governance requirements. This oversight body should be sufficiently interdisciplinary, given the wide array of expertise needed to consider the full range of AI-related risks and their interactions.

[Graph 4: General proposal for governance and risk management associated with the use of AI models by central banks.]
Central banks are increasingly adopting AI to improve data quality, enhance operations and support
decision making. AI provides powerful tools for addressing complex challenges in areas such as data
analysis, risk assessment, forecasting, operations, and customer and corporate services.
But the use of AI tools can also pose new risks and have the potential to exacerbate existing risks
in central banks. Such risks are best managed holistically at the design, implementation and operations
phases. Information security risks deserve special mention, given the criticality and sensitivity of the
information handled by central banks. In general, central banks could start adopting AI in non-critical
processes where risks can be better controlled.
A safe AI adoption could cover the following domains: (i) governance; (ii) legal and compliance;
(iii) information security and privacy; (iv) cyber security; (v) third-party risk management; (vi) business
continuity; and (vii) other operational risks associated with the level of digitalisation and exposure of the
organisation.
Instead of reinventing the wheel, such a framework can build on existing risk management
frameworks and governance schemes. The task force did not identify any entirely new approaches to
managing AI risks, but recommends adapting existing mechanisms to the specific risks posed by AI tools.
Central banks can use existing risk management mechanisms such as the three lines of defence model
with specific adjustments.
We recommend that the AI risk profile is discussed and defined by upper management before
adopting AI models as this would help to allocate resources and set priorities. Benefits need to be weighed
against risks posed to information and core functions and be aligned with the risk appetite and security
capabilities of the organisation.
Given the transformative potential of AI technology, both in terms of its business impact and
possible externalities to society at large, developing a governance framework for AI adoption is of high
importance for central banks interested in the use of AI. This involves revising policies that cover various
procedures for the organisation’s governance and operations, such as systems and risk management,
compliance and data maintenance, and transparency and communication with internal and external
stakeholders.
Good practices laid out in international standards can serve as a starting point. These practices
include:
• systems and risk management – updating systems to integrate AI while ensuring robust risk
management practices;
• compliance and data maintenance – ensuring AI systems comply with existing regulations and
maintain high standards of data integrity and privacy; and
• transparency and communication – enhancing transparency in AI decision-making processes
and effectively communicating these processes to internal and external audiences.
By leveraging these international standards and best practices, central banks can effectively
incorporate AI into their governance frameworks – ensuring the secure, responsible and ethical use of AI
technology.
Accornero, M and G Boscariol (2022): “Machine learning for anomaly detection in datasets with categorical
variables and skewed distributions” in “Machine learning in central banking”, IFC Bulletin, no 57,
November, presentation given at an Irving Fisher Committee on Central Bank Statistics and Bank of Italy
workshop, 19–22 October 2021, www.bis.org/ifc/publ/ifcb57_05.pdf.
Amadxarif, Z, J Brookes, N Garbarino, R Patel and E Walczak (2021): “The language of rules: textual
complexity in banking reforms”, Bank of England Staff Working Paper, no 834, November,
www.bankofengland.co.uk/-/media/boe/files/working-paper/2019/the-language-of-rules-textual-
complexity-in-banking-reforms.pdf.
Amazon AWS (2023): “Securing generative AI: an introduction to the generative AI security scoping matrix”,
AWS Security Blog, 19 October, aws.amazon.com/es/blogs/security/securing-generative-ai-an-
introduction-to-the-generative-ai-security-scoping-matrix/.
——— (2024): “What’s the difference between deep learning and neural networks?”,
aws.amazon.com/compare/the-difference-between-deep-learning-and-neural-networks/?nc2=h_mo-
lang.
Araujo, D, G Bruno, J Marcucci, R Schmidt and B Tissot (2022): “Machine learning applications in central
banking: an overview” in “Machine learning in central banking”, IFC Bulletin, no 57, November,
www.bis.org/ifc/publ/ifcb57_01_rh.pdf.
Babic, B, I Cohen, T Evgeniou and S Gerke (2021): “When machine learning goes off the rails”, Harvard
Business Review, January–February 2021, hbr.org/2021/01/when-machine-learning-goes-off-the-rails.
Baker-Brunnbauer, J (2023): Trustworthy artificial intelligence implementation: introduction to the TAII
framework, Springer, link.springer.com/book/10.1007/978-3-031-18275-4.
Bank for International Settlements (BIS) (2024): “Artificial intelligence and the economy: implications for
central banks”, Annual Economic Report 2024, June, Chapter III, www.bis.org/publ/arpdf/ar2024e3.pdf.
Bank for International Settlements Innovation Hub (BISIH) (2023): Project Aurora: the power of data,
technology and collaboration to combat money laundering across institutions and borders, May,
www.bis.org/publ/othp66.htm.
——— (2024): Project Raven: using AI to assess financial system’s cyber security and resilience, April,
www.bis.org/about/bisih/topics/cyber_security/raven.htm.
Basel Committee on Banking Supervision (BCBS) (2021a): Revisions to the principles for the sound
management of operational risk, March, www.bis.org/bcbs/publ/d515.htm.
——— (2021b): Principles for operational resilience, March, www.bis.org/bcbs/publ/d516.htm.
Benford, J (2024): “Trusted AI: ethical, safe and effective application of artificial intelligence at the Bank of
England”, speech given at the Central Bank AI Conference, Bank of England, September,
www.bankofengland.co.uk/speech/2024/september/james-benford-speech-at-the-central-bank-ai-
inaugural-conference.
Bluwstein, K, M Buckmann, A Joseph, M Kang, S Kapadia and Ö Simsek (2020): “Credit growth, the yield
curve and financial crisis prediction: evidence from a machine learning approach”, Bank of England Staff
Working Paper, no 848, January, www.bankofengland.co.uk/-/media/boe/files/working-
paper/2020/credit-growth-the-yield-curve-and-financial-crisis-prediction-evidence-from-a-machine-
learning.pdf.
Artificial intelligence risk management framework (AI RMF 1.0) (NIST (2023))
“An AI system is an engineered or machine-based system that can, for a given set of objectives, generate
outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI
systems are designed to operate with varying levels of autonomy.”
Recommendation of the Council on Artificial Intelligence (OECD (2024a))
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it
receives, how to generate outputs such as predictions, content, recommendations, or decisions that can
influence physical or virtual environments. Different AI systems vary in their levels of autonomy and
adaptiveness after deployment.”
Trustworthy artificial intelligence implementation: introduction to the TAII framework (Baker-
Brunnbauer (2023))
“AI systems are software (and possibly also hardware) systems designed by humans that, given a complex
goal, act in the physical or digital dimension by perceiving their environment through data acquisition,
interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the
information, derived from this data and deciding the best actions to take to achieve the given goal.”
Model artificial intelligence governance framework (IMDA and PDPC (2020))
“AI refers to a set of technologies that seek to simulate human traits such as knowledge, reasoning,
problem solving, perception, learning and planning, and, depending on the AI model, produce an output
or decision (such as a prediction, recommendation, and/or classification).”
Role of artificial intelligence (AI) in central banking: implications for COMESA member central
banks (Njoroge (2024))
“Artificial intelligence (AI) uses computing to create intelligence artificially and is described as the ability
of machines to imitate human intelligence. It entails a collection of tools that learn with given data and
understand patterns and interactions between series and values.”
The use cases described in Section 1.1 were identified both from public sources and from responses by members of the Consultative Group on Risk Management (CGRM) task force to a questionnaire. Graph 2.1 below displays the distribution of the number of use cases.
[Graph 2.1: AI use cases among CGRM task force members.1]
1 This graph includes more use case categories than those described in Section 1.1.
[Table: Type of risk | Description | Applies to new technologies | Applies to AI]
The AI Act classifies AI systems according to their risks, categorising them into four different levels (Graph 4.1):20
The unacceptable risk category establishes that systems considered a clear threat to the safety,
livelihoods and rights of people will be banned. This includes social scoring by governments, devices using
voice assistance that encourage dangerous behaviour and manipulative AI.
The second category, high risk, refers to any AI system which: (i) constitutes a certain type of product (eg medical devices, industrial machinery, toys, aircraft or cars); (ii) is a safety component of a certain type of product (eg components for rail infrastructure, lifts or appliances burning gaseous fuels); or (iii) is high risk by its nature, eg biometrics, critical infrastructure, exploiting vulnerabilities, education, employment, access to essential services both public and private, law enforcement, immigration, the administration of justice and democratic processes.
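As a schematic illustration of this four-tier, risk-based logic, the sketch below encodes the Act’s risk levels; the example systems and their mapping are simplified illustrations for exposition, not legal determinations.

```python
# Illustrative sketch of the AI Act's four-tier, risk-based classification.
# The tier names follow the Act; the example systems and mapping are
# assumptions for illustration, not a legal assessment.
from enum import Enum

class AIActRiskTier(Enum):
    UNACCEPTABLE = "banned (eg social scoring by governments, manipulative AI)"
    HIGH = "strict obligations (eg biometrics, critical infrastructure, law enforcement)"
    LIMITED = "transparency obligations (eg chatbots must disclose they are AI)"
    MINIMAL = "largely unregulated (eg spam filters)"

# Hypothetical triage table: example systems mapped to tiers for illustration.
EXAMPLES = {
    "government social scoring system": AIActRiskTier.UNACCEPTABLE,
    "biometric identification at borders": AIActRiskTier.HIGH,
    "customer-facing chatbot": AIActRiskTier.LIMITED,
    "email spam filter": AIActRiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```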
18 Future of Life Institute (2024) and European Parliament (2024a).
19 European Parliament (2024b).
20 Pinsent Masons (2024) and Future of Life Institute (2024).
The Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems21
This framework consists of eight general principles applicable to the creation and operation of all types of
autonomous and intelligent systems (AIS), regardless of whether they are physical robots or software
systems in real, virtual or mixed-reality environments.
21 IEEE (2023).
7. Awareness of misuse – Systems should guard against all potential misuses and risks during operation.
These principles have resulted in eight recommendations that provide guidelines for achieving
digital transition within the ethical framework of the declaration. These include implementing audits and
certifications, independent controlling organisations, ethics education for developing stakeholders,
empowerment of the user and ecological sustainability.
1. Safe – The AI subject to evaluation must not pose risks to human life, health, property or the environment.22

2. Secure and resilient – The AI under evaluation and the ecosystems in which it operates must be resilient and able to tolerate adverse and unexpected events in its environment, maintain its functions and structure despite internal and external challenges and, if necessary, degrade safely.

5. Fair – with harmful bias management – AI should minimise bias, and thus the damage caused to individuals, groups, communities, organisations and society as a whole. It is therefore important to consider multiple human and social values, such as transparency and fairness, in this process. Methodologies have been developed to examine the role of human values in technological contexts, in line with the theoretical currents known as “values in design” and “value-sensitive design”.
22 ISO and IEC (2022c).
23 Hirsch et al (2017).
The people and planet dimension considers different criteria such as users of the system, impacted stakeholders, human rights and democratic values, well-being, society and the environment, including sustainability and stewardship.
Economic context refers to industrial sectors. It describes the business function and business model and the level of criticality, ie the extent to which a disruption impacts system functions or essential services; it is also related to scalability and maturity (breadth of deployment).
The third criterion, data and input, refers to the detection and collection of data either by
humans, automated sensors or both, the provenance of the data and input, as well as their nature
(dynamic, static, updated or real time). Further, it covers proprietary, public or personal rights and the
identifiability of personal data. ISO/IEC 19441 (2017)25 distinguishes various categories – or “states” – of
data identifiability.
The AI model dimension refers to model characteristics such as the AI model type, model building from machine or human knowledge, model evolution, machine learning (central or federated) and model inference, including its transparency and explainability.
Finally, the task and outputs criterion describes the tasks that the system performs, the level of
autonomy and the role that humans play. It also considers core application areas such as human language,
computer vision, automation/optimisation or robotics.
24 OECD (2022).
25 ISO and IEC (2017).
26 OECD (2022).
2. Leadership and commitment – The board must establish a privacy policy, assign roles and responsibilities, and ensure the integration of AI systems requirements into the organisation’s current processes.

5. Operation – The standard outlines the processes and controls necessary to meet the requirements of the AI systems. This includes data protection by design and by default, conducting data protection impact assessments, and managing data breaches and incidents.
27 ISO and IEC (2023b).
28 ISO and IEC (2023a).
2. Risk and AI systems management framework – The standard provides guidance on adapting the current risk management structure to cover AI systems, like ensuring risk integration into the business processes, maintaining the commitment of leadership towards the responsible adoption of AI, properly mapping external and internal contexts, and defining roles, accountability and resource allocation.

5. Data governance and privacy risk assessment – Given the nature and amount of data required by AI systems, organisations should assess and mitigate risks related to data governance and privacy, particularly with the use of large data sets for training AI systems.
29 ISO and IEC (2022b).
1. Keeping governance – Given the possibility of a great impact on the organisation’s activities, this standard provides guidance on assessing the adequacy of the current governance structure in the light of AI adoption.

2. Responsibilities of the governing body – Roles and responsibilities of the governing body (eg board of directors) in overseeing AI initiatives are highlighted in this standard. It emphasises the need to develop/review governance structures, decision-making authority and accountability frameworks in the face of greater technological dependency and the need for greater transparency and explainability brought by AI systems.

3. AI strategy and investment – The standard provides guidelines on how the governing body should define the organisation’s AI strategy, ensuring that AI investments align with long-term goals, provide value and are in line with its risk appetite. It also suggests attention to new needs related to the technology, like improved compliance supervision and control, assessment of the impact of usage of AI across the organisation, and special care with legal requirements and the consequences of deploying AI.

4. Policies review and decision-making governance – The governing body shall ensure that proper policies, responsibility chain and human supervision are in place for the controlled use of AI, since automated decision-making delivered by AI systems does not change the responsibility of the governing body.

5. Data governance – The standard recommends robust data governance for responsible and effective AI usage. Data collection, treatment and storage processes shall be enhanced, in order to ensure quality in data processing and output. Bias analysis shall also be performed.

6. Culture and values – As an AI system does not understand context (moral values, common sense etc) like humans, it is important that the governing body is explicit about the organisation’s culture and values and is able to monitor, and when necessary correct, AI’s behaviour.

7. Compliance – The governing body should seek assurances that management configures and maintains any AI system used by the organisation to meet compliance obligations and avoid compliance violations, such as pricing mechanisms that violate antitrust legal requirements or the use of data for training that violates civil rights or is discriminatory. Compliance management shall also be enhanced to cover new needs, like the sophistication of AI systems (new controls may be implemented) and human reassessment of decisions made by AI systems.

8. Risk management – As AI adoption brings both risks and opportunities, it is paramount that the current risk management process review examines whether the risks involved are fully understood and managed, especially in decision-making processes, data usage, culture and values development, and compliance. If so, the governing body must be aware of the acceptability of those risks with regard to its stated risk appetite.

9. Objectives – The standard reminds the reader that the organisation’s objectives and assets shall be carefully considered in an AI adoption scenario, like accountability, duty of care, physical safety, security and privacy, transparency and data itself (whose protection and integrity may be considered an organisational objective).
To sum up, these three ISO/IEC standards can be seen as complementary, each addressing a
different layer of responsibility and scope within the life cycle of AI adoption, from operational
management and risk assessment to high-level governance and strategic oversight.
Co-chairs
Bank of Mexico Alejandro De Los Santos
Maryam Haghighi
Aldo Hernández
Julieta Carmona
Gloria Guadarrama
Alfonso Murillo
Secretariat