No 1194
Intelligent financial
system: how AI is
transforming finance
by Iñaki Aldasoro, Leonardo Gambacorta, Anton Korinek,
Vatsala Shreeti and Merlin Stein
June 2024
© Bank for International Settlements 2024. All rights reserved. Brief excerpts may be
reproduced or translated provided the source is stated.
We thank seminar participants at ECB and GovAI seminars and Douglas Araujo, Fernando Perez-Cruz, Fabiana Sabatini, David Stankiewicz and Andrew Sutton for helpful comments and suggestions, and Ellen Yang for research assistance. Contact: Aldasoro ([email protected]), Gambacorta ([email protected]), Korinek ([email protected]), Shreeti ([email protected]) and Stein ([email protected]). The views expressed here are those of the authors only and not necessarily those of the Bank for International Settlements.
1 Introduction
Like the brain of a living organism, the financial system processes vast amounts
of dispersed information and aggregates it into price signals that facilitate the co-
ordination of all the players of the economy and guide the allocation of scarce
resources. It not only enables the efficient flow of capital but also contributes to the
economic system’s overall health by managing risk, maintaining liquidity and sup-
porting stability. Financial markets and intermediaries, when they function well,
are a fundamental source of progress and welfare. Conversely, the role of financial
policy and regulation is to correct instances of “brain malfunction” and to instead
harness the intelligence of the financial system to enhance social welfare.
Processing all the necessary information and coordinating the actions of numer-
ous participants in the economy is a notably complex problem. As the brain of the
economy, financial markets and intermediaries have played this role for a long time.
At any given point in time, their capacity to do so was shaped in large part by the
information processing technology available. For example, over the years, techno-
logical advancements like telecommunications and the internet have continuously
enhanced the capacity of financial markets to solve economic problems: a brain that
can process more information more efficiently is better suited to solving increasingly
complex tasks. It is then no surprise that financial markets have been a magnet for
both cutting-edge information processing technology and for sophisticated human
talent. Most recently, the information processing capabilities of the financial system
have been enhanced by fast-paced advancements in artificial intelligence (AI).
In this paper, we describe the evolution of the financial sector through the prism
of advancements in information processing, with a special focus on AI. We evaluate
the opportunities and challenges created for the financial sector from different gen-
erations of AI, including machine learning (ML), generative AI (GenAI), and the
emergence of AI agents. We also provide a discussion of the effects of AI on finan-
cial stability and the risk of real sector disruptions caused by AI. In light of these
insights and increasing AI adoption we discuss the implications for the regulation
of the financial sector.
Over the course of human history, the trajectory of methods for information
processing – of which AI is part – has been closely linked with developments in
commerce, trade and finance. In section 2 we describe this trajectory in detail.
History is replete with examples of the financial system either sparking a change
in the arc of technological development, or itself being an early adopter of technol-
ogy. From the abacus of the ancient Sumerians to double-entry book-keeping, the
evolution of information processing technology and finance has often gone hand in
hand. Over the last century, the most significant advance in the realm of infor-
mation processing was the invention of computers in the 1950s. This allowed for
the automation of many analytic and accounting functions that were very useful
for the functioning of the financial system. As computational power increased over
time, more sophisticated technologies emerged that allowed for the processing of
non-traditional data, like machine learning models and, most recently, GenAI.
At the same time, as technology has grown more complex, so have the risks
and challenges for the financial system. Challenges include lack of transparency of
complex machine learning models, dependence on large volumes of data, threats
to consumer privacy, cybersecurity and algorithmic bias. GenAI has exacerbated
some of these challenges and increased the dependence on data and computing
power. There are additional concerns about market concentration and competition
as GenAI models are produced by a few dominant companies.
There are other, potentially more serious risks to financial stability associated
with the use of AI in the financial system. Even early rule-based computer trading
systems were associated with cascade effects and herding, for example, in the 1987
US stock market crash. With machine learning models, the risks of uniformity,
model herding and network connectedness have only compounded. Additionally,
from the point of view of regulators, the use of advanced AI techniques poses a
further challenge: the proliferation of complex interactions and the inherent lack of
explainability makes it difficult to spot market manipulation or financial stability
risks in time. With GenAI, co-pilots and robo-advising can mean that decisions
become more homogeneous, potentially adding to systemic risk.
Finally, in section 6 we conclude and discuss some avenues for further research.
The evolution of the financial system has gone hand in hand with the evolution
of information processing technology. To understand the implications of AI for fi-
nance it is therefore helpful to examine the historical development of computational
methods in tandem with concurrent developments in money and finance. Advances
in computational hardware and software have enabled the evolution of advanced
analytics, machine learning, and generative AI. At each technological turn in the
past, the financial system has either been a catalyser of change or an early adopter
of technology.
The origins of computation can be traced back to ancient Sumerians and the
abacus, the first known computing device. This was one of the earliest instances
of numerical systems being crafted to address financial needs. Laws have also been
driven by the changing needs of commerce and finance: the Code of Hammurabi,
one of the earliest legal edicts, laid out laws to govern financial transactions as
early as the 18th century BCE. Similarly, medieval Italian city-states pioneered
double-entry book-keeping, a seminal development in accounting that opened the
door to an unprecedented expansion of commerce and finance. In fact, double-
entry book-keeping underpins regulation, taxation, governance, contract law, and
financial regulation to this day.
1 The term artificial intelligence was first coined by the mathematician John McCarthy in a
now mythical workshop at Dartmouth College in 1956.
2 GOFAI stands for “Good Old Fashioned AI”, a term coined by philosopher John Haugeland
to refer to classic symbolic AI, based on the idea of encoding human knowledge and reasoning
processes into a set of rules and symbols (Haugeland, 1985).
Machine Learning The next wave of progress came with machine learning (ML),
a sub-field of AI (Figure 1). ML algorithms can autonomously learn and perform
tasks, for example classification and prediction, without explicitly spelling out the
underlying rules. Like earlier advances in information processing, ML was quick to
be adopted in finance, even though in the early days, its usefulness was limited by
computing power. Early examples of ML relied on large quantities of structured
and labelled data.3
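The contrast with rule-based systems can be sketched in a few lines of Python: instead of a hand-coded approval rule, a simple perceptron recovers a decision rule from labelled examples. This is only an illustrative sketch; the features, data and the `train_perceptron` helper are invented for this example and do not come from the paper.

```python
# Minimal sketch: a classifier "learns" a credit-approval rule from
# labelled examples instead of having the rule spelled out by hand.
# Features and data are purely illustrative.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Learn weights from (feature_vector, label) pairs; labels are 0/1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # nonzero only when the current rule misclassifies
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy training data: (income, existing debt), label 1 = loan repaid.
X = [(5.0, 1.0), (4.0, 0.5), (1.0, 3.0), (0.5, 2.0), (3.0, 0.2), (0.8, 4.0)]
y = [1, 1, 0, 0, 1, 0]

w, b = train_perceptron(X, y)
print(predict(w, b, (4.5, 0.3)))  # -> 1: high income, low debt
print(predict(w, b, (0.6, 3.5)))  # -> 0: low income, high debt
```

Nothing in the code states the rule "approve when income is high relative to debt"; the weights that encode it emerge from the labelled data alone.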
Figure 1: Decoding AI
Source: Authors’ illustration.
The most advanced ML systems are based on deep neural networks, which are
algorithms that operate in a manner inspired by the human brain.4 Deep neural
networks are universal function approximators that can learn systematic relationships
in any set of training data, increasingly including complex, unstructured
3 Structured data refers to organised, quantitative information that is stored in relational
databases and is easily searchable. It typically includes well-organised text and numeric infor-
mation. Unstructured data is information that is not organised based on pre-defined models. It
can include information in text and numeric formats but also audio and video. Some examples
of unstructured data include text files like emails and presentations, social media content, sensor
data, satellite images, digital audio, video, etc.
4 In such systems, the input layer of artificial neurons receives information from the environ-
ment, and the output layer communicates the response; between these layers are “hidden” layers
(hence the “deep” in deep learning) where most of the information processing takes place through
weighted connections to previous layers.
datasets (Hornik et al. (1989), Goodfellow et al. (2016), Broby (2022), Huang et al.
(2020)). These developments enabled financial institutions to analyse terabytes of
signals including news streams and social media sentiment. At an aggregate level,
this led to increasingly fast-paced and dynamic markets, with optimised pricing and
valuation. However, as these models dynamically adapt to new data, often without
human intervention, they are somewhat opaque in their decision-making processes
(Gensler and Bailey (2020), Cao (2020)).
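The universal-approximation point can be illustrated with a classic toy case: XOR, a relationship that no single linear layer can represent, becomes representable once a hidden layer is added. In the sketch below the weights are hand-picked for transparency rather than learned from data, so this is only an illustration of the underlying idea, not of how such networks are trained.

```python
# Sketch: even XOR, which no single linear threshold on the inputs can
# reproduce, is computable with one hidden layer. In practice the
# weights would be learned by gradient descent; here they are hand-picked
# so the mechanics are visible.

def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: h1 fires if at least one input is on,
    # h2 fires only if both inputs are on.
    h1 = step(x1 + x2 - 0.5)
    h2 = step(x1 + x2 - 1.5)
    # Output: "at least one, but not both".
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The hidden units re-encode the inputs into features ("any on", "both on") that make the target linearly separable; stacking many such layers is what lets deep networks capture far richer non-linear structure in financial data.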
Generative AI For the past 15 years, i.e., since the beginning of the deep-learning
era, the computing power used for training the most cutting-edge AI models has
doubled every six months – much faster than Moore’s law would suggest (Figure
2). These advances have given rise to rapid progress in artificial intelligence and are
behind the advent of the recent generation of GenAI systems, which are capable of
generating data. The most important type of GenAI are Large Language Models
(LLMs), best exemplified by systems like ChatGPT, that specialise in processing
and generating human language.
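A quick back-of-envelope calculation, using the doubling times cited above, shows how far this outpaces Moore's law over a 15-year window:

```python
# Back-of-envelope: training compute doubling every 6 months versus a
# Moore's-law-style doubling every 24 months, over the ~15-year
# deep-learning era described in the text.

years = 15
ai_doublings = years * 12 / 6        # one doubling every 6 months -> 30
moore_doublings = years * 12 / 24    # one doubling every 24 months -> 7.5

ai_growth = 2 ** ai_doublings        # 2^30, about a billion-fold
moore_growth = 2 ** moore_doublings  # 2^7.5, roughly 180-fold

print(f"AI training compute growth: {ai_growth:,.0f}x")
print(f"Moore's-law growth:         {moore_growth:,.0f}x")
print(f"Ratio:                      {ai_growth / moore_growth:,.0f}x")
```

Over 15 years the cited trend implies roughly a billion-fold increase in training compute, against a few-hundred-fold increase under Moore's law.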
LLMs are trained to predict the next word in a sequence – a simple objective that
many interpret as a rudimentary form of understanding. Drawing from this simple
but powerful principle, LLM-based chatbots can generate text based on a starting
point or a “prompt”. A leading explanation for the capacity of modern LLMs to
produce reasonable content across a wide range of domains is that the training
process leads such models to generate an internal representation of the world (or
“world model”) based on which they can respond to a wide variety of prompts (Li
et al., 2023).
The use cases of LLMs have blossomed across many sectors. LLMs can generate,
analyse and categorise text, edit and summarise, code, translate, provide customer
service, and generate synthetic data. In the financial sector, they can be used for
robo-advising, fraud detection, back-end processing, enhancing end-customer expe-
rience, and internal software and code development and harmonisation. Regulators
around the world are also exploring applications of GenAI and LLMs in the areas
of regulatory and supervisory technologies (Cao, 2022).5
AI Agents The next frontier on which leading AI labs are currently working are
AI Agents, i.e., AI systems that build on advanced LLMs such as GPT-4 or Claude 3
and are endowed with planning capabilities, long-term memory and, typically, access
to external tools such as the ability to execute computer code, use the internet, or
perform market trades.6 Autonomous trading agents have been deployed in specific
5 To be sure, current generative AI systems also have clear limitations. For example, they have
been shown to fail at elementary reasoning tasks (Berglund et al., 2023; Perez-Cruz and Shin,
2024).
6 The term “AI Agents” reflects AI systems that increasingly take on agency of their own. Chan et al. (2024) define agency as “the degree to which an AI system acts directly in the world to achieve long-horizon goals, with little human intervention or specification of how to do so. An (AI) agent is a system with a relatively high degree of agency; we consider systems that mainly predict without acting in the world, such as image classifiers, to have relatively low degrees of agency”.
parts of financial markets for a long time, for example, in high-frequency trading.
What distinguishes the emerging generation of AI agents is that they have the
intelligence and planning capabilities of cutting-edge LLMs. They can, for example,
autonomously analyse data, write code to create other agents, trial-run it, update it
as they see fit, and so on.7 AI agents thus have the potential to revolutionise many
different functions of financial institutions – just like autonomous trading agents have
already transformed trading in financial markets.
7 Prototypes of such AI agents currently exist mainly in the realm of coding, where Devin, an autonomous coding agent developed by startup Cognition Labs, or SWE-agent, developed by researchers at Princeton (Yang et al., 2024), can autonomously take on entire software projects.
Artificial General Intelligence For several of the leading AI labs, the ultimate
goal is the development of Artificial General Intelligence (AGI), which is defined as
AI systems that can essentially perform all cognitive tasks that humans can perform
(Morris et al., 2024). Unlike current narrow AI systems, which are designed to
perform specific tasks with a pre-defined range of abilities, AGI would be capable of
reasoning, problem-solving, abstract thinking across a wide variety of domains, and
transferring knowledge and skills across different fields, just like humans. Relatedly,
some AI researchers and AI lab leaders speak of Transformative AI (TAI), which is
defined as AI that is sufficiently capable so as to radically transform the way our
economy and our society operate, for example, because it can autonomously push
forward scientific progress, including AI progress, at a pace that is much faster than
what humans are used to, or because it significantly speeds up economic growth
(Suleyman and Bhaskar, 2023). There is an active debate on whether and how fast
concepts such as AGI or TAI may be reached, with strong views on both sides of
the debate. As economists, we view it as prudent to give some credence to a wide
range of potential scenarios for the future (Korinek, 2023b).
3 AI transforming finance
The financial system relies on information processing technology to perform its
core economic functions. Table 1 summarises the impact of the technologies
we described earlier, from traditional analytics to AI Agents, on four key financial
functions: financial intermediation, insurance, asset management and payments.
Other challenges relate to cybersecurity and the risk of adversarial attacks. Shar-
ing data with third-party vendors (e.g., AI service providers) can expose sensitive
information. Moreover, IT systems can be targets of attacks. This requires the im-
plementation of robust encryption and authentication protocols whenever data and
algorithms are shared. All of these challenges carry over to the context of modern
AI.
Machine learning methods have also found wide application in economics (Athey (2018)).
Machine learning has a range of use cases across the four economic functions we
consider. In financial intermediation, the use of ML models can reduce credit un-
derwriting costs and expand access to credit for those previously excluded, although
few financial institutions have taken advantage of the full range of these opportu-
nities. ML models can also streamline client onboarding and claims processing in
several industries, particularly in insurance. Across industries, but especially in
insurance and payments, ML models are used to detect fraud and identify security
vulnerabilities.
The opportunities created by ML models also come with risks and challenges.
The flip side of flexible, highly non-linear machine learning models is that they
often function like black boxes. The decision process of these models – for example
whether or not to grant credit – can be opaque and hard to decipher.
Generative AI, mostly in the form of LLMs, is part of the new frontier and
comes with its own set of opportunities. Two key aspects of GenAI are particularly
useful for the financial sector. First, whereas earlier computational advances have
made the processing of traditional financial data more efficient, GenAI allows for
increased legibility of new types of (often unstructured) data, which can enhance
risk analysis, credit scoring, prediction and asset management. Second, GenAI
provides machines the ability to converse like humans, which can improve back-end
processing, customer support, robo-advising and regulatory compliance. Moreover,
it also allows for the automation of tasks that were until recently considered
uniquely human, for example, advising customers and persuading them to buy
financial products and services (Matz et al., 2024).
Table 1: Opportunities, challenges and financial stability implications of computational advances

Traditional analytics
- Challenges: rigid, requires human supervision, small number of parameters; zero-sum arms races; technical vulnerabilities; flash crashes; threats to consumer privacy; emergence of data silos
- Financial stability: herding, cascade effects and flash crashes, such as the US stock market crash of 1987

Machine learning
- Opportunities: credit risk analysis, lower underwriting costs, financial inclusion (intermediation); insurance risk analysis, lower processing costs, fraud detection (insurance); analysis of new data sources, high-frequency trading (asset management); new liquidity management tools, fraud detection and AML (payments)
- Financial stability: herding, network interconnectedness, lack of explainability, single point of failure, concentrated dependence on third-party providers

Generative AI
- Challenges: hallucinations, increased market concentration, consumer privacy concerns, algorithmic collusion
- Financial stability: herding, uniformity, incorrect decisions based on alternative data, macroeconomic effects of potential labour displacement

AI agents
- Opportunities: automated design, marketing and sale of new financial products without human intervention; increase in speed of information processing; faster payment flows, fraud prevention
- Challenges: new risks to consumer protection, cybersecurity, potential overreliance, fraud and unforeseen risks; unforeseen risk concentration with AI agent interactions; sudden liquidity crises, fraud with deception
- Financial stability: misalignment risks, inherent unsuitability of AI agents for aspects of macroprudential policies
The financial industry has already started adopting GenAI. OECD (2023) pro-
vides several recent examples: Bloomberg recently launched a financial assistant
based on a finance specific LLM, and the investment banking division of Goldman
Sachs uses LLMs to provide coding support for in-house software development.
Several other companies use GenAI to provide financial advice to customers, to help
with expense management, and in co-pilot applications.
Despite these potential benefits and growing adoption, LLMs also create new
risks for the financial sector. They are prone to “hallucinations”, i.e., to generating
false information as if it were true. This can be especially problematic for customer-
facing applications.8 Moreover, as algorithms become more standardised and are
uniformly used, the risk of herding behaviour and procyclicality grows.
There are also concerns about market concentration and competition. GenAI
requires vast amounts of data and computing power, and this creates the risk
that it will be provided by only a few dominant companies (Korinek
and Vipra, 2024). Notably, big tech companies with deep pockets and unparalleled
access to compute and data are well positioned to reinforce their competitive ad-
vantage in new markets. Regulators, especially competition authorities, have also
started highlighting intentional and unintentional algorithmic collusion, especially
with algorithms based on reinforcement learning (Assad et al. (2024), Calvano et
al. (2020), OECD (2021)), with potential implications for algorithmic trading in
financial markets.
The data intensive nature of GenAI, combined with the reliance on a few (big
tech) providers also exacerbates consumer privacy and cybersecurity concerns (Al-
dasoro et al., 2024a).
AI agents are AI systems that act directly in the world to achieve mid- and long-
term goals, with little human intervention or specification of how to do so. While
current AI agents (like those supporting software engineering (Scott, 2024)) might
be limited in their planning ability, the pace of advancements might lead to more
capable agents in the near future. Such AI agents come with opportunities to pro-
cess novel types of information more quickly than humans and to act autonomously,
e.g., for designing software or performing data analysis. AI agents could expand
high-frequency information processing and autonomous action from trading to other
8 A recent example of this risk, outside of finance, is Air Canada being held liable for the false information that its LLM-powered chatbot provided to a customer.
parts of finance. For example, they could soon autonomously design, market, and
sell financial products and services.
The case of algorithmic trading illustrates the challenges that AI agents with
mid-term planning abilities might pose in other environments. Correlated failures,
in the form of flash crashes due to correlated autonomous actions, might recur in
a different form in financial intermediation, asset management, insurance and
payments. While algorithmic trading takes place in a clearly digitised environment
with precise short-term rewards, AI agents in other environments require more
sophisticated reinforcement loops (Cohen et al. (2024)). These action-reward loops
might be created over time, as AI agents become increasingly capable of acting in
unstructured, open-ended environments. Depending on how action-reward loops
are configured, novel risks might emerge, including the challenge of aligning agents
with human goals over longer time horizons (Christian (2021)).
AI agents could also pose significant systemic risks if their behaviour is highly
correlated, their actions are difficult to explain, oversight is lacking, or their
behaviour is opaque or misaligned. Appendix A discusses the hypothetical influence
of AI agents on a financial crisis. For example, an AI designed for efficient asset
allocation might start exploiting market inefficiencies in ways that lead to increased
volatility or systemic imbalances.
Even with limited capabilities, computational advances have already had important
implications for financial stability. The US stock market crash of 1987 is an illustrative
example. In October 1987, stock prices in the United States declined significantly
– the biggest ever one-day price drop in percentage terms. This was attributed
in large part to the dynamics created by so-called portfolio insurance strategies,
which relied on rule-based computer models that placed automatic sell orders when
security prices fell below a pre-determined level. Initial rule-based selling by many
institutions using this strategy led to cascade effects and further selling, and even-
tually the crash of October 1987 (Shiller (1988), United States presidential task
force on market mechanisms (1988)).
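The cascade dynamic described above can be sketched as a toy feedback loop in Python: many institutions follow the same rule – "sell if the price falls below my trigger" – and each wave of rule-based selling pushes the price through the next set of triggers. The `cascade` helper and all numbers below are invented for illustration; this is not a model of the 1987 episode itself.

```python
# Toy sketch of the portfolio-insurance feedback loop: identical sell
# rules turn a small initial shock into successive waves of selling.
# All parameters are illustrative.

def cascade(price, triggers, impact_per_seller):
    """Return the price path as rule-based sell orders fire."""
    path = [price]
    active = sorted(triggers, reverse=True)  # highest triggers fire first
    while True:
        fired = [t for t in active if path[-1] < t]  # rules now triggered
        if not fired:
            break
        active = [t for t in active if t not in fired]
        # Each seller's automatic order pushes the price down further,
        # potentially breaching the next set of triggers.
        path.append(path[-1] - impact_per_seller * len(fired))
    return path

# 30 institutions with sell triggers spread between 90 and 98.7;
# an initial shock takes the price from 100 to 97.
triggers = [90 + 0.3 * i for i in range(30)]
path = cascade(97.0, triggers, impact_per_seller=0.4)
print(path)  # the price ratchets down in successive waves
```

The initial 3% shock would be harmless on its own; it is the uniformity of the rules that converts it into a much larger decline, mirroring the cascade described for October 1987.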
There are also other characteristics inherent to ML models that have implica-
tions for financial stability. In particular, the black-box nature of ML models that
arises due to their complexity and non-linearity makes it often hard to understand
how and why such models reach a particular prediction. This lack of explainability
might make it difficult for regulators to spot market manipulation or systemic risks
in time.
As with other ML models, the pervasive use of GenAI will present new challenges
(Anwar et al., 2024) and will likely also have consequences for financial stability.
As noted earlier, one of the most powerful tools made possible by language models
is the increased legibility of alternative forms of data. Compared to traditional
data sources, alternative data can have shorter time series or sample sizes. Rec-
ommendations or decisions based on alternative data may therefore be biased and
not generalisable (the so-called fat tail problem, Gensler and Bailey (2020)). Fi-
nancial or regulatory decision making based on alternative data would need to be
very mindful of this limitation.
9 Khandani and Lo (2008) argue that model herding was one of the main reasons behind the
2007 hedge fund crisis.
The risks arising from the use of homogeneous models highlighted above also
apply to GenAI. A key application of GenAI in the financial sector is the use of
LLMs for customer interactions and robo-advising. Since many of these applications
are likely to rely on the same foundational models, there is a risk that the advice
provided by them becomes more homogenised (Bommasani et al., 2022). This
may by extension exacerbate herding and systemic risk.
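A deterministic toy example illustrates the mechanism: when all advisors score assets with one shared model, their recommendations coincide exactly; when each uses its own (here, artificially shifted) scoring, advice disperses. The assets, scores and the `herding` helper are invented purely for illustration.

```python
# Toy illustration: robo-advisors built on one shared foundation model
# give identical advice; heterogeneous models disperse it. Scores are
# invented for illustration.

ASSETS = ["equities", "bonds", "gold", "cash", "real_estate"]

def recommend(scores):
    """Advisor recommends the top-scoring asset."""
    return max(scores, key=scores.get)

# Case 1: 100 advisors all relying on the same base model's scores.
shared_scores = {"equities": 0.9, "bonds": 0.4, "gold": 0.5,
                 "cash": 0.1, "real_estate": 0.6}
shared_picks = [recommend(shared_scores) for _ in range(100)]

# Case 2: 100 advisors, each with its own (cyclically shifted) scoring.
def own_scores(k):
    return {a: shared_scores[ASSETS[(i + k) % len(ASSETS)]]
            for i, a in enumerate(ASSETS)}
own_picks = [recommend(own_scores(k)) for k in range(100)]

def herding(picks):
    """Share of advisors behind the single most popular recommendation."""
    return max(picks.count(p) for p in set(picks)) / len(picks)

print(herding(shared_picks))  # -> 1.0: everyone gives identical advice
print(herding(own_picks))     # -> 0.2: advice is evenly dispersed
```

When every advisor inherits the same scores, any flaw or bias in the base model is transmitted to all clients at once – the channel through which homogenised advice can amplify herding.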
Financial stability concerns that derive from the uniformity of datasets, model
herding, and network interconnectedness are further exacerbated by specific charac-
teristics of GenAI: increased automaticity, speed and ubiquity. Automaticity refers
to GenAI’s ability to operate and make decisions independently, increasingly with-
out human intervention. Speed pertains to AI’s capability to process and analyse
vast amounts of data at rates far beyond human capacity, enabling decisions to be
made in fractions of a second. Ubiquity highlights GenAI’s potentially widespread
application across various sectors of the economy, and its integration into everyday
technologies.
A number of systemic risks could arise from the use of AI agents. These agents
are characterised by direct actions with no human intervention and a potential for
misalignment with regards to long-term goals (Chan et al., 2024). The fundamental
nature of the resulting risks is well-known from both the literature on financial
regulation and the literature on AI alignment and control (Korinek and Balwit,
2024): if highly capable agents are given a single narrow goal – such as profit
maximisation – they blindly pursue the specified goal without paying attention
to side goals that have not been explicitly spelled out but that an ethical human
actor would naturally consider, such as avoiding risk shifting or preserving financial
stability. Moreover, even when constraints such as satisfying the requisite financial
regulations are specified, AI agents may develop a super-human ability to pursue
the letter rather than the spirit of the regulations and engage in circumvention. As
an early example, an LLM that was asked to maximise profits as a stock trader in
a simulation engaged in insider trading even when knowing it is illegal. Moreover,
when caught, the LLM lied about it (Scheurer et al., 2023). We discuss some of the
broader risks in Appendix A in a thought experiment looking at how such agents
could have interacted with known causes of the great financial crisis of 2008/09. As
AI Agents advance towards AGI, the resulting risks would be greatly amplified.
3.3 AI use for prudential policy
As the private sector increasingly embraces AI, policymakers may find it increas-
ingly useful to employ AI for both micro- and macroprudential regulation. In fact,
they may have no choice but to resort to AI to process the resulting large quantities
of data produced by regulated financial institutions.10 Microprudential policy
concentrates on the supervision of individual financial institutions, whereas macro-
prudential policy concerns itself with the supervision of the financial system as a
whole. AI can be leveraged for both types of prudential policies but comes with a
different set of risks in each domain.
A related challenge is the uniqueness of each financial crisis. Just as this makes
it difficult for humans to predict financial crises, it also makes it difficult for AI:
although crises have some commonalities, each has its own specific risk factors which
10 See for example Araujo et al. (2024) for an overview of applications in central banking.
– while rationalisable ex post – are nearly impossible to understand ex ante. The
reason is their “unknown-unknowns” characteristic (Knight (1921), Daníelsson et
al. (2022)). Accordingly, even if AI were able to learn from past crises, the lessons
might have limited applicability for predicting the next one. Moreover, even in the
cases where AI is able to generate insights from a specific crisis episode, the policy
insights themselves will change the environment of decision making – the so-called
Lucas critique (Lucas, 1976).
However, just like humans have found ways of dealing with these challenges,
future advances in AI may open up new possibilities for macroprudential regulation
that go beyond the limitations of traditional ML. As AI systems become more
advanced, they may be able to better deal with the limited data on financial crises
by learning from a much broader set of data sources, including granular data on
financial transactions, news and sentiment analysis, and simulations of hypothetical
crisis scenarios. They may also be able to identify more generalisable patterns of
systemic risk that are robust across different types of crises. Moreover, future AI
systems that can engage in counterfactual reasoning and causal inference could
help regulators better understand the potential consequences of different policy
interventions, even in a world where the Lucas critique applies. And as AI alignment
techniques improve, it may become possible to specify clear objectives for AI systems
to optimise, while ensuring they do so in a way that is consistent with human values
and regulatory intent. By leveraging these advances, future AI could become a
powerful tool for enhancing the speed, scope, and precision of macroprudential
regulation, helping to build a more resilient and stable financial system.
4 Risk of AI disruption
While the financial sector is good at smoothing small shocks in the real economy
and at helping the economy adjust, large shocks to the economy run the risk of
disrupting the financial sector and thus being amplified. Advances in AI pose a
risk of disrupting many sectors of the economy and their workforce. Depending on
the extent of the disruption, this may lead to financial stability risks. This is not
just a theoretical possibility, as there are precedents of significant disruptions in the
real economy spilling over to the financial sector. For example, in the 1920s the
mechanisation of agriculture displaced more than 10% of the US workforce from
the agricultural sector and led to widespread mortgage defaults, which played an
important role in the financial crisis of 1929 and the ensuing Great Depression. A
growing view among technology experts and business leaders is that advances in AI
may be even more transformative in coming years (see, e.g., Korinek, 2023b). To be
sure, recent data do not show signs of any such large-scale disruption yet. However,
policymakers are well-advised to have contingency plans in case transformative AI
scenarios materialise.
To span the range of possible outcomes, we lay out two scenarios. The first is
an optimistic scenario in which advances in AI are more likely to benefit financial
stability. The second is a downside scenario in which the real effects of AI disrupt
financial stability. Of course, there are many realistic scenarios in between these
two extremes that are worth preparing for.
Under the optimistic scenario, one can think of AI as a positive (and moderate)
productivity shock with a differential effect across sectors. Calibrating a macroeconomic
11
See Autor (2022) for a broader overview of the labour market implications of technological
change, with a focus on artificial intelligence.
18
multi-sector model using an index of exposure to AI across sectors based on Felten
et al. (2021), Aldasoro et al. (2024b) find that AI can significantly raise output,
consumption and investment in the short and long run. The supply shock may be
disinflationary in the short run if households and firms do not fully anticipate the
effects of AI in the economy. But irrespective of how agents form expectations, the
long run effect is inflationary.
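The mechanics of such an exposure-weighted calibration can be conveyed with a stylised sketch. The sector names, output shares, exposure scores and shock size below are hypothetical illustrations, not the values used by Felten et al. (2021) or Aldasoro et al. (2024b):

```python
# Stylised illustration: an economy-wide productivity effect from an AI
# shock that hits sectors in proportion to their AI exposure.
# All numbers are hypothetical and chosen only for illustration.

# Hypothetical sectoral output shares and AI-exposure indices (0 to 1),
# loosely in the spirit of Felten et al. (2021)-style exposure scores.
sectors = {
    # name: (share of output, AI exposure index)
    "finance":       (0.10, 0.80),
    "manufacturing": (0.25, 0.40),
    "services":      (0.45, 0.60),
    "agriculture":   (0.20, 0.15),
}

AI_PRODUCTIVITY_SHOCK = 0.05  # 5% productivity gain in a fully exposed sector


def aggregate_productivity_gain(sectors, shock):
    """Output-share-weighted productivity gain across sectors."""
    return sum(share * exposure * shock
               for share, exposure in sectors.values())


gain = aggregate_productivity_gain(sectors, AI_PRODUCTIVITY_SHOCK)
print(f"Aggregate productivity gain: {gain:.2%}")  # Aggregate productivity gain: 2.40%
```

The differential effect across sectors is what drives both the reallocation pressures and the uneven exposure of lenders discussed below.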
This scenario could lead to a Goldilocks situation for monetary policy. Greater use of AI could ease inflationary pressures in the near term, thereby supporting central banks in their task of bringing inflation back to target. In the medium to longer term, inflation could rise because of greater AI-induced demand, but central banks could dampen demand by tightening policy. AI’s positive contribution to growth could offset some of the detrimental secular developments that threaten to depress growth going forward, including population ageing, re-shoring and changes in global supply chains, as well as geopolitical tensions and political fragmentation.
The positive effects on output could enhance the capacity of economies to service
debts, with positive effects on debt sustainability. The revaluation of financial assets
that would come from higher productivity could also support this process, provided
rising borrowing costs do not overshadow growth effects.
For the financial sector more broadly this scenario would come with challenges,
although not insurmountable ones. The likely job turnover that may come from
AI automating some tasks could affect spending patterns by consumers as well as
the ability to repay loans by both consumers and corporations. Knock-on effects through defaults could in turn affect the financial sector, even as it is called upon to finance the resource reallocation that such displacements require. This challenge will of course be tougher the higher the exposure of financial institutions to the most affected sectors.
considers two such scenarios where AGI is reached within five or alternatively 20
years. For simplicity and in order to highlight the key implications of such a shift,
here we consider the move to AI agents without pinning down a specific timeline.
Korinek and Suh (2024) consider these scenarios within a macroeconomic model of
automation, highlighting the wide dispersion of outcomes as a function of the speed
of automation.
Fourth, governments may experience a significant reduction in tax revenues if labour markets, their primary revenue source, are undermined, thereby calling their debt sustainability into question.
Fifth, while rapid advances in AI may boost growth in countries at the forefront
of technological development, they might also lead to a new form of “intelligence
divide”. This divide could leave other countries behind, resulting in severe terms-
of-trade losses (Korinek and Stiglitz, 2021).
Finally, all these disruptions to the real economy may also give rise to political
discontent and instability, which could further undermine financial stability (Bell
and Korinek, 2023). Table 2 provides a summary.
5 Upgrading financial regulation for AI
The risks posed by AI expand the focus of financial regulation beyond traditional
policy objectives. In addition to policy concerns such as financial stability, mar-
ket integrity, efficiency, and competition, questions of consumer rights like privacy
and algorithmic discrimination also take centre stage. Moreover, AI introduces new
geopolitical risks, most notably those from the geographical concentration of the
production of microchips and advanced semiconductors. Striking the right balance
between harnessing the benefits of AI and managing its risks is thus crucial for eco-
nomic policy-making. This in turn requires a careful yet comprehensive regulatory
response that incorporates technological, societal and ethical considerations.
Yet not all risks associated with AI necessitate regulatory intervention. Regula-
tory measures should target risks that manifest as externalities that impact specific
policy objectives (broader financial stability, market integrity, competition, data
privacy, and consumer protection). At the same time, risks that do not generate externalities or do not directly influence these intermediate objectives (for example, customer experience and service risks or technology adoption risks) can be managed effectively through market mechanisms. Striking the right balance is key
to avoid stifling innovation while minimising adverse externalities on the financial
system and its participants.
Several jurisdictions and standard-setting bodies have defined principles for trustworthy AI across the value chain from development to deployment of AI. For example, the EU defined an Assessment List for Trustworthy Artificial Intelligence (EU ALTAI; see Ala-Pietilä et
al., 2020; EU, 2024); in the US, NIST defined characteristics for trustworthiness
as part of its AI Risk Management Framework (NIST, 2023), and China defined
responsible AI principles (China Technology Ministry, 2019). The ISO standard
ISO/IEC 23894:2023 provides guidance on risk management for AI systems. These
frameworks form the cornerstone of many AI regulatory initiatives. Although some
of the details vary, there are commonalities among them. The following is a sum-
mary list of principles:
• Safety and security. AI systems should operate reliably and safely under
all conditions, implementing safeguards against failures, misuse, or malicious
attacks.
5.2 Regulatory models
Bradford (2023) identifies three primary regulatory models, adopted in the US,
China and the EU. The “market-driven” regulatory model in the US is characterised
by a market-based approach that emphasises innovation, self-regulation and scep-
ticism of government intervention. The “state-driven” regulatory model in China
utilises technology for political objectives, and aims to grow the industry while
exporting technology infrastructure. The “rights-driven” regulatory model of the
EU is focused on protecting individual and societal rights and the equitable dis-
tribution of digital transformation gains. These regulatory models, while distinct,
are not mutually exclusive and show a tendency to converge towards the principles
highlighted above, as well as towards rather similar operationalisations.
In the United States, the regulation of AI has evolved from voluntary guidance
to executive actions. Initially, the Blueprint for an AI Bill of Rights in October 2022
laid foundational ethical considerations. This was followed by voluntary commit-
ments from leading AI firms in July 2023, signalling industry readiness to address
AI’s societal impacts. The shift towards regulatory oversight was marked by the
Executive Order on Safe, Secure, and Trustworthy AI in November 2023, which
mandated over 25 agencies to address AI-related harms, including security, privacy,
and discrimination. These agencies are now tasked with establishing rules, funding
research, assessing risks, and enforcing transparency through safety tests and re-
porting by AI developers. However, there has not been significant legislative action
on AI regulation.
The European Union’s AI Act, approved in February 2024, aims to ensure that
AI technologies are safe and respect fundamental rights while fostering innovation
and economic growth. This regulatory framework introduces a risk-based approach
that categorises AI systems according to the risk they pose to users. For example,
the act identifies specific applications of AI that pose unacceptable risks and are
therefore prohibited. These include social scoring, manipulation or exploitation of
vulnerabilities and certain uses of biometric identification. The EU AI Act also
introduces governance rules for AI applications that might pose risks to health,
safety, fundamental rights, the environment, democracy and the rule of law. For
these high-risk categories, stringent regulatory requirements are set.
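The act's risk-based logic can be thought of as a classification step that precedes any compliance requirement. The following sketch is a simplified illustration of that logic; the application list, tier names and attached obligations are paraphrased for exposition and are not the act's legal definitions:

```python
from enum import Enum


class RiskTier(Enum):
    """Simplified risk tiers in the spirit of the EU AI Act's approach."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent requirements (risk management, documentation, oversight)"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"


# Illustrative mapping of application types to tiers, loosely following
# the categories discussed in the text (social scoring and exploitation
# of vulnerabilities are among the prohibited uses).
ILLUSTRATIVE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "exploitation of vulnerabilities": RiskTier.UNACCEPTABLE,
    "credit scoring of natural persons": RiskTier.HIGH,
    "ai chatbot for customer service": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def classify(application: str) -> RiskTier:
    """Return the illustrative risk tier for an application type."""
    return ILLUSTRATIVE_TIERS.get(application.lower(), RiskTier.MINIMAL)


print(classify("Social scoring"))  # RiskTier.UNACCEPTABLE
```

The design choice worth noting is that obligations attach to the tier, not to the individual application: once an AI system is classified, its compliance burden follows mechanically from the category.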
To operationalise the principles above for GenAI, policymakers and stakeholder pro-
cesses have brought forward specific considerations across the value chain. These
are embodied in the EU AI Act (EU, 2024), in NIST’s AI Risk Management Frame-
work for GenAI (Barrett et al., 2023) and China’s Generative AI provisions, as well as the UN’s March 2024 resolution on AI safety. These regulatory
initiatives apply to GenAI and partly also to AI agents, especially when based on
foundation models or used in high-risk systems. Chan et al. (2024) and Janjeva et
al. (2023) propose dedicated measures to apply these principles to AI agents and
to ensure greater systemic resilience. Building upon these operationalisations could
be a useful step for regulating emerging AI agents in finance.
The columns correspond to the three main stages of the AI value chain: (i) design and training, (ii) deployment and usage, and (iii) longer-term diffusion. Most
considerations are specifically mentioned in the regulations and guidelines above –
or put forward in proposals – across regions. As shown in the table, the key aspects
span the entire life cycle of GenAI systems and AI agents, from the initial design,
training, and evaluation stages, through their deployment and ongoing usage, and
ultimately to the longer-term diffusion and impact assessment. Appendix B illustrates how these considerations could be specified in the case of an advanced AI chatbot for loan applications.
In the design, training, and evaluation phase, the main considerations cover governance and developer guidelines, the need to create visibility through technical
documentation and information access, as well as evaluating the inherent risks and
capabilities of the AI systems.
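To make the visibility requirement concrete, technical documentation could take the form of a structured, machine-checkable record along the following lines. The field names here are a hypothetical sketch, not a field list mandated by the EU AI Act, NIST or ISO texts:

```python
# Illustrative sketch of a machine-readable technical-documentation
# record for an AI system at the design/training stage. Field names are
# hypothetical, not mandated by any of the frameworks discussed above.
from dataclasses import dataclass


@dataclass
class ModelDocumentation:
    system_name: str
    intended_purpose: str
    training_data_sources: list[str]
    evaluated_risks: dict[str, str]  # risk -> mitigation
    human_oversight: str

    def missing_fields(self) -> list[str]:
        """Flag empty entries that a reviewer would need filled in."""
        return [name for name, value in vars(self).items() if not value]


doc = ModelDocumentation(
    system_name="loan-screening assistant",
    intended_purpose="pre-screen retail loan applications",
    training_data_sources=["historical loan book (anonymised)"],
    evaluated_risks={"bias": "fairness audit on protected attributes"},
    human_oversight="",  # not yet specified
)
print(doc.missing_fields())  # ['human_oversight']
```

A record of this kind would let a supervisor check completeness automatically before an AI system moves from training to deployment.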
The preceding discussion underscores the need for global cooperation on AI regulation. Indeed, authorities are increasingly collaborating to harmonise regulatory standards and enhance cooperation, recognising that AI transcends national borders. The Global Partnership on Artificial Intelligence (GPAI) and the UN AI Advisory Body emphasise aligning AI development with global goals such as sustainability and equity, while China’s Global AI Governance Initiative represents a significant move towards creating a cooperative framework for AI governance, focusing on people-centred development and sustainable growth.
6 Conclusion
This study underscores the crucial role of artificial intelligence in shaping the dynamics of the financial system, conceived of as the “brain” of the economy. By
studying the evolutionary path from rule-based systems to GenAI, we highlight how
AI technologies have progressively augmented information processing, risk man-
agement, and customer service within the financial sector, enhancing its cognitive
capacity. But while AI presents significant opportunities for efficiency gains and
innovation, it also introduces complex challenges, including model opacity, data
dependency, and systemic stability concerns. Thus, effective regulation and gov-
ernance frameworks are important to harness the benefits of AI while mitigating
associated risks, emphasising transparency, fairness and global collaboration. At
the same time, authorities should be mindful that not all risks need regulation –
regulation should target risks that manifest as externalities, leaving market mech-
anisms to address those that do not.
Looking ahead, continued vigilance and adaptive regulatory approaches are war-
ranted. By fostering dialogue among stakeholders and promoting interdisciplinary
collaboration, policymakers can develop robust frameworks that harness innovation
to promote societal welfare. Ongoing research and empirical analyses are essential
to deepen our understanding of AI’s impact on the financial system and to guide
informed policy decisions in a rapidly evolving technological landscape. Ultimately,
by leveraging the transformative potential of AI while safeguarding against its risks,
we can foster a more resilient and equitable financial ecosystem for the benefit of
society as a whole.
References
Acharya, V and M Richardson, “Causes of the financial crisis,” Critical Review, 2009, 21 (2-3), 195–210.
Bell, S A and A Korinek, “AI’s Economic Peril,” Journal of Democracy, 2023,
34 (4), 151–161.
Chia, H, “In machines we trust: Are robo-advisers more trustworthy than human
financial advisers?,” Law, Technology and Humans, 2019, 1, 129–141.
Christian, B, “The Alignment Problem,” https://brianchristian.org/the-alignment-problem/, 2021. Accessed 5 April 2024.
Epoch, “Parameter, Compute and Data Trends in Machine Learning,” 2022. Accessed 17 January 2024.
EU, “Regulation of the European Parliament and of the Council, laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts,” 2024.
Gensler, G and L Bailey, “Deep learning and financial stability,” 2020. Available at SSRN 3723132.
Goodfellow, I, Y Bengio, and A Courville, Deep Learning, MIT Press, 2016.
Knight, F H, Risk, uncertainty and profit, Vol. 31, Houghton Mifflin, 1921.
Korinek, A and A Balwit, “Aligned with Whom? Direct and Social Goals for AI Systems,” in Justin Bullock et al., eds., Oxford Handbook of AI Governance, Oxford University Press, 2024, pp. 65–85.
Korinek, A and D Suh, “Scenarios for the Transition to AGI,” NBER Working Paper, 2024, w32255.
NIST, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Russell, S J and P Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 2010.
Shiller, R J, “Portfolio insurance and other investor fashions as factors in the 1987
stock market crash,” NBER Macroeconomics Annual, 1988, 3, 287–297.
Suleyman, M and M Bhaskar, The Coming Wave: Technology, Power, and the
Twenty-first Century’s Greatest Dilemma, Crown, 2023.
Svetlova, E, “AI ethics and systemic risks in finance,” AI and Ethics, 2022, 2 (4),
713–725.
A Systemic risk from AI agents
Systemic risks could also increase due to the widespread use of AI agents. These agents are characterised by direct actions with no human intervention and a potential for misalignment with regard to long-term goals (Chan et al., 2024). As a simple exercise, Table 4 describes the hypothetical influence of AI agents in a specific scenario: would the 2008 financial crisis have been more severe if AI agents had been integrated in the economy of 2008?
The first column reports some of the core reasons for the 2008 financial crisis according to the literature (Helleiner, 2011; Acharya and Richardson, 2009): (1) shortcomings in financial practices (inadequate risk assessment and securitisation); (2) regulation (belief-based oversight, limited oversight of rating agencies); and (3) global industry structure (interconnectedness of financial institutions, incentive misalignment). Economic policies such as housing ownership programmes are very context-specific and thus excluded here.
The table suggests that the extensive deployment of AI agents in finance could
exacerbate the risk of a financial crisis. This risk stems from automated risk assess-
ments, complex automated oversight, increased interconnectedness, and incentive
misalignment. These risks depend on the extent to which different AI agents are correlated and interact, the visibility into AI agents’ operations, the effectiveness of oversight mechanisms and deployment restrictions, and the alignment of AI agents. As the use of
AI agents with open goals and limited human intervention is still minimal, there
is an opportunity to deploy them responsibly. This responsible deployment would
involve implementing specific oversight and visibility mechanisms, with initial ex-
amples outlined by Chan et al. (2024).
Table 4: Hypothetical influence of AI agents on a financial crisis

Financial practices
• Inadequate risk assessment. Hypothetical AI-agent influence: automated risk assessments might parse more information but be correlated, biased, or manipulated. Key dependency: the extent to which different AI agents are correlated and interact in undesirable ways. Current state: not much mitigation; a highly concentrated AI ecosystem built on similar data, with similar training and biases.
• Inadequate risk-sharing. Hypothetical AI-agent influence: limited; potentially more complex AI-driven securitisation.

Regulation
• Complexity complicates oversight. Hypothetical AI-agent influence: increasing complexity, but potential AI use by regulators. Key dependency: visibility into AI agents’ operations. Current state: mostly “black-box” AI, with explainability lagging (Chan et al. (2024), Hassija et al. (2024)).
• Limited oversight of rating agencies. Hypothetical AI-agent influence: limited; potentially easier to scale oversight with AI agents.

Global industry structure
• Interconnectedness of financial institutions. Hypothetical AI-agent influence: interdependent agents with opaque, global interactions, or non-correlated agents identifying interconnections beforehand. Key dependency: effective oversight or implementation of “circuit breakers”. Current state: regulations since 2008 address interconnectedness (e.g., Basel accords), but remain limited on AI in finance (Consulich et al. (2023), Chia (2019)).
• Incentive misalignment. Hypothetical AI-agent influence: AI agents’ alignment could be better or worse with public vs financial professionals’ interests. Key dependency: alignment of AI agents. Current state: AI alignment remains a mostly unsolved problem (Christian (2021)).
B Operationalising oversight principles: Considerations for an AI chatbot for loan applications
The following figure outlines a comprehensive framework for the design, train-
ing, testing, deployment, and long-term management of chatbots in the financial
sector. The figure shows what measures need to be adopted so that the chatbot
satisfies the discussed principles. As shown in the figure, the framework highlights
key considerations across the chatbot lifecycle, emphasising the importance of coor-
dinated governance, technical documentation, and post-deployment monitoring to
ensure compliance, mitigate risks, and track the evolving adoption and implications
of GenAI and AI agents within the financial services industry.
Figure 3: Example – Oversight considerations for Advanced AI chatbot for loan applications