The Illusion of Control: Why Financial Crises Happen, and What We Can (And Can’t) Do About It
Jón Daníelsson
Acknowledgments
2 Systemic Risk
3 Groundhog Day
Notes
Bibliography
Index
The man who didn’t trust the models saved the world.
risk, volatility. The accompanying drawing depicts the annual stock mar-
ket risk in the United States, and it confirms the usual suspects (Figure 1).
But these are not the riskiest years by a long shot. In 1962 and 1983 we
were almost hit by the ultimate tail event, even though the financial mar-
kets remained calm. The Cuban missile crisis nearly brought the United
States and the Soviet Union to blows in 1962, only for the Soviets to back
down at the last minute. Even more interesting is 1983 because that is
when we almost got into a nuclear war, even if we didn’t know that until
much later.
What happened was that the Soviet leader, Yuri Andropov, got it into his head that the United States was planning to launch
a preemptive nuclear strike. He instructed his spies to find evidence sup-
porting his suspicion, and KGB agents everywhere went into overdrive,
looking for that evidence. Careers depended on it. If you have a choice between a juicy posting in Washington and literally being sent to Siberia (as the KGB rep in Novosibirsk), of course you find proof. It is a prime example of confirmation bias: we believe something terrible will happen and find grounds to support it, even if there is no truth to it. In 1983 the early
warning models of the Soviet Union detected a nuclear attack. The man
on watch that night, Stanislav Petrov, didn’t trust the signal and decided
unilaterally not to launch a counterattack. The man who didn’t trust the
models saved the world. The Soviet investigators subsequently confirmed
he was right. The false alarm came about because of a rare alignment of
sunlight on high-altitude clouds above North Dakota and the Molniya
orbits of the detection satellites. Colonel Petrov died in 2017, by that time
globally recognized for having saved humanity.
The way we measure financial risk today, with what I call the risk-
ometer, has much in common with Andropov’s early warning systems.
Both rely on imperfect models and inaccurate measurements to make crucial decisions. While high-altitude clouds bedeviled the Soviet models,
the problem for today’s riskometer arises from its emphasis on the recent
past and short-term risk. The reason is simple. That is the easiest risk to
measure, as the modelers have plenty of data.
The problem is that short-term risk isn’t all that important, not for
investors and especially not for the financial authorities. For them, what
matters is systemic risk, the chance of spectacular financial crises, like the
one we suffered in 2008. Long-term threats, like systemic crises or our
financial authorities aim to get the best out of the financial system by
making all the financial institutions prudent, like Volvos, the world’s safest cars. Nobody makes crazy investments, and everybody follows the
rules. Surely, then, investors will enjoy stable returns, banks won’t fail,
and financial crises will not happen. Will turning the banks into Volvos
make us safe? No. It perversely makes crises even more likely, since it re-
duces the shock-absorbing capacity of the system.
Is artificial intelligence, AI, the proper response to the challenges of
endogenous risk? It depends. AI will increase efficiency, allowing us to
eliminate a lot of tedious risk management and compliance jobs. Finan-
cial services will become cheaper and more reliable, and regulations will
be better enforced. So what can go wrong? Imagine the Bank of En-
gland bot—BoB—is put in charge of financial stability. He talks to his
counterparts in regulated banks, passing on information and enforcing
compliance. It sounds like the perfect way to control the financial system.
Except we have to trust BoB to make the right decisions. And, unlike with humans, we don’t know how he reasons and decides. So what will happen
when BoB runs into some problem he has never seen before? A human
being can draw on their accumulated experience and the canon of human
knowledge. AI will not be able to do that. Meanwhile, it is pretty easy for
hostile agents to take advantage of BoB. He has to look everywhere. A
hostile agent—some trader or terrorist or nation state or criminal—only
has to find one weakness to exploit. And they can do that in complete
secrecy until it is too late for BoB to do anything about it. BoB cannot
win regardless of the state of technology.
These observations all sound a bit pessimistic, but that is not the impression I want to leave. The financial system is highly resilient and by and large does a good job, most public commentary to the contrary notwithstanding. However, the
way we deal with the system today takes us in the wrong direction. So, what
to do? Embrace diversity, the most potent force of financial stability and
good investment performance. The more different the financial institutions
that make up the system are and the more the authorities embrace that very
diversity, the more stable the system becomes and the better it performs.
To the benefit of us all. What gets in the way is self-interest and politics.
The decision makers are antidiversity, preferring uniform ways of doing
business to protect their profits and jobs. All that is needed is political will.
The financial crisis was a pest epidemic, spreading with raging speed from
house to house.
—Stephan Skalweit
facilitated trade throughout Europe and beyond. Leendert took full ad-
vantage, building up one of the world’s wealthiest and most prestigious
banks by financing the Prussian side of the war. He lived well, furnishing
his house only with the finest-quality objects and owning an excellent
collection of paintings—but not a single book.
The way he made his money was thoroughly modern: rapid, irrespon-
sible financial innovation in the form of acceptance loans, not all that
different from the financial instruments so damaging in the 2008 crisis.
Cheap short-term borrowing was used to make long-term loans at high
interest rates, involving long chains of obligations, spanning multiple
banks and countries.
The key to all the profit was borrowed money. For every twenty-three
guilders he lent to Prussia, De Neufville supplied one and borrowed
twenty-two, secured by commodities. Highly profitable in good times,
but it didn’t take much for things to go wrong, as they did, spectacularly,
when the war ended in 1763, culminating in the first modern financial cri-
sis.1 Commodity prices crashed when the war ended because the farmers
could now finally start producing again, making all the commodity-based
collateral behind De Neufville’s acceptance loans worth little. Investors
got spooked and decided not to roll over these short-term loans—they
all went on strike, just like their successors in 2007. As De Neufville didn’t
have enough ready cash to repay his creditors and keep his bank alive, he
had to sell his vast holdings of commodities. However, that only caused
prices to fall further, a process known as a fire sale. Falling prices induce
speculators to sell, and when they sell, prices fall further, in a vicious loop.
It did not take long for De Neufville to default.
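The leverage arithmetic in the paragraph above can be sketched in a few lines of Python. The guilder figures are those from the text; the breakeven calculation is my own illustration of why so little had to go wrong:

```python
# De Neufville's leverage, using the text's figures: for every 23
# guilders lent to Prussia, 1 was his own money and 22 were borrowed.
own_capital = 1
borrowed = 22
lent = own_capital + borrowed   # 23 guilders lent

leverage = lent / own_capital   # 23x leverage

# If the commodity collateral loses just 1/23 (about 4.3 percent) of
# its value, the loss equals all of his own capital.
breakeven_loss = own_capital / lent

print(leverage)                  # 23.0
print(round(breakeven_loss, 3))  # 0.043
```

At 23x leverage, a commodity-price crash of the kind that followed the 1763 peace wipes out the equity many times over, which is why default came so quickly.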
Thus was born the first global systemic crisis, described at the time as
a pest epidemic. It spread with raging speed from house to house. The
city of Hamburg was hit hard, and on 4 August its mayor wrote to the
mayor of Amsterdam asking him to bail out De Neufville. Amsterdam
refused. Fortunately, the crisis turned out to be relatively short-lived, and
Amsterdam, Hamburg, and the other major centers recovered quickly.
Berlin suffered badly because of how its king, Friedrich II, reacted.
He imposed a payment standstill and bailouts, violating the contracts that
allowed funds to flow from Amsterdam to Berlin and causing the bankers
be either the root cause or the amplifier for a crisis to be called systemic.
Since the financial system is so fundamental to the economy, most eco-
nomic crises are also systemic crises. What about Covid-19? It certainly hit
the economy, shaving 10 percent to 20 percent off most countries’ GDP
in the second quarter of 2020. But the financial system was a bystander,
not the cause, and it made things neither better nor worse. The Covid-19
crisis was not systemic.
The focus of systemic risk is not on any individual financial institution;
instead, it is on the financial system in its entirety and how it affects the
real economy. The failure of a bank, or even a banking crisis, is not neces-
sarily systemic. We need a connection between the financial system and
the real economy.
Systemic crises are costly, easily 10 percent of GDP or more, so for the
United States in the trillions of dollars. Fortunately, they are not frequent,
and I think most people can expect to suffer such a crisis at most once
in their lifetime. If one takes the relatively loose definition used in the IMF crisis database, maintained at the time by Luc Laeven and Fabián Valencia, we find that the typical OECD country suffers a systemic crisis once in forty-three years on average. While once in forty-three years is the historical average, it is hotly debated whether the future will be as tranquil.
A lot of commentators maintain that increased complexity and intercon-
nectedness make financial crises more frequent. The United States, and
especially the United Kingdom, are more crisis prone, the UK enduring
a systemic crisis once every seventeen years. The last one was in 2008, so
2025 is the due date for the next one if it arrives on schedule. If anything,
the once-in-forty-three-years figure is an overestimate, as the database
includes relatively nonextreme events like Black Monday in October 1987
and the Long Term Capital Management crisis in 1998. If we exclude all
the mildest crises, we get about one systemic crisis per typical lifetime.
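The lifetime-frequency claim can be checked with a quick back-of-the-envelope calculation. The sketch below assumes, purely for illustration, that crises arrive independently with a constant annual probability; the 1-in-43 and 1-in-17 rates are the ones quoted above:

```python
# Probability of living through at least one systemic crisis,
# assuming independent arrivals at a constant annual rate
# (my simplifying assumption, not the text's).
years = 80  # a typical lifetime

p_oecd = 1 - (1 - 1 / 43) ** years  # typical OECD country
p_uk = 1 - (1 - 1 / 17) ** years    # United Kingdom

print(round(p_oecd, 2))  # 0.85
print(round(p_uk, 2))    # 0.99
```

Even with these crude assumptions, most people can expect to see roughly one systemic crisis, consistent with the once-in-a-lifetime figure.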
The reason for the once-in-a-lifetime frequency is that the most severe
crises happen only after we forget the last one. Crises change behavior,
and those who come of age during one will be affected by it for the rest
of their lives. We had to wait for the twenty-year-olds of 1929 to retire
in the 1970s for the seeds of the next crisis to be sown. It then took a
quarter-century for the seeds to bear fruit, culminating in the events of
autumn 2008. Politics and lobbying push toward a high-risk, high-return
financial system, and when memories of the previous crisis fade there is
little pushback.
Considering the once-in-a-lifetime frequency of systemic crises, the
term “systemic crisis” is as overused today as it was underused before
2008. Most of this usage is imprecise and contradictory, and one often
gets the impression that commentators are talking only about the last cri-
sis or financial scandal when they use the phrase “systemic crisis.”
The financial system is vulnerable to many types of shocks, some com-
ing from outside the financial system, like Covid-19, and others generated
by the system itself. Some shocks are idiosyncratic, affecting only a single
institution or asset, while others impact the financial system as a whole
plus the real economy. The small shocks often arrive from outside the sys-
tem; all the large ones are created by the interaction of the human beings
who make up the financial system. There may be an outside trigger, but
the real damage is caused by the system turning on itself.
If I had to describe a textbook financial crisis, it would go something
like this: Financial institutions have too much money to lend. When they
run out of high-quality borrowers, they start making increasingly low-
quality loans, often in real estate. In the beginning it all looks ingenious.
Developers borrow money to build new houses, which stimulates pros-
perity and demand for homes. Property prices go up. Everybody feels
wealthier and more optimistic, thereby encouraging more lending and
more building in a happy, virtuous cycle. This caused many a crisis, like
the savings-and-loan debacle in the United States in the 1980s and the
Spanish crisis in 2010.
Eventually, when the little boy yells, “The Emperor has no clothes!”
people realize that all the prosperity was built on sand (Figure 2).3 There
is no strong underlying economy, and it all comes crashing to the ground;
the virtuous feedback loop becomes vicious. Prices fall, developers fail,
the banks lose money, the economy contracts, prices fall more—the same
fire sale process De Neufville endured. The precrisis rise in prices is much
slower than the fall: prices go up the escalator and down the lift (or eleva-
tor if in America).
Most crises follow the textbook. The one in 1914 was atypical, which
is why it is my favorite. It was triggered by the assassination of Archduke
Franz Ferdinand of Austria on 28 June 1914, leading to posturing among the
financial system meant that everybody was vulnerable. Even banks that
thought they were sensible, like conservative rural banks trading only
with major London banks and judiciously avoiding taking on too much
risk, were nevertheless immediately affected.
First, those dealing with the continent got into difficulty, as they had
to deliver to the continental trading partners but did not receive cash
from their European counterparts. Then their business partners got into
trouble, and in short order those dealing with them faced difficulty. In
the financial system everybody is exposed to everybody, whether they
like it or not. We might think we are prudent by not doing business with
those who like a lot of risk. However, if the people we are dealing with
are exposed to risk, so are we.
A good example of the network effect, and of why financial crises are often said to be like a contagious disease, is the following: Patient Zero infects the
people around her, who in turn infect the people around them, and in
short order everybody is sick. Similarly, the vulnerability quickly spreads
from Bank Zero to all the other banks. The financial system is vulnerable
not only because of risk but also—and even more so—because of how
everybody is connected to everybody else.
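The Patient Zero analogy maps directly onto a toy network model. In the sketch below the bank names and links are invented for illustration; the point is only that trouble reaches every connected bank, including those that never dealt with Bank Zero directly:

```python
from collections import deque

# Invented interbank network: each bank lists its trading partners.
links = {
    "Bank Zero": ["A", "B"],
    "A": ["Bank Zero", "C"],
    "B": ["Bank Zero", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}

def affected(start, links):
    """Breadth-first spread of trouble along trading relationships."""
    seen = {start}
    queue = deque([start])
    while queue:
        bank = queue.popleft()
        for partner in links[bank]:
            if partner not in seen:
                seen.add(partner)
                queue.append(partner)
    return seen

print(sorted(affected("Bank Zero", links)))
# ['A', 'B', 'Bank Zero', 'C', 'D'] -- even D, which never traded
# with Bank Zero, ends up exposed through C.
```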
London was at the center of the network in 1914, a place it had by
then already occupied for a century and a half, after eclipsing Amsterdam.
A drawing made in 1774 shows the network structure of the European financial system, with London at the center (Figure 3). We have been making network diagrams ever since, and after the crisis in 2008 the creation
of network diagrams became a veritable industry. Such diagrams never
made much sense to me, as all they show is that everybody is connected to
everybody else, which we know anyway. The players change, but the net-
work is always there, the source of simultaneous danger and prosperity.
The network makes the economy efficient, creating wealth, good jobs,
and ample cheap goods, but it also transmits crises, both financial and
medical. The plague in the 1300s was transmitted by trading networks,
just as Covid-19 was in 2020.
If the 1914 crisis had been purely financial, nobody would really have
cared besides a handful of people involved with the financial system. But
it never is. The financial system directs resources for the real economy,
and if it is not doing its job companies can’t borrow and invest. They can’t
pay suppliers and salaries, the very essence of a systemic crisis. Economic
activity grinds to a halt, and everybody suffers. That is the reason finance
is so heavily scrutinized and regulated, and why the banks always get
these obnoxious bailouts when they misbehave.
Once under way, the 1914 crisis followed predictable patterns. Market
participants became very cautious, wanting the safest asset: gold. Banks
deposited their excess funds with the Bank of England for safety instead
of lending them out, so that the crisis quickly spread from the City of
enough to trigger the crisis. People did not need the actual event of a war
to make this happen; the mere expectation of one was sufficient.
The financial authorities in 1914 did the right thing, effecting a massive
intervention, saving the City of London from the worst and, along with
it, the real economy. By comparison, events in the 2008 and Covid-19
crises were quite mild. However, the ultimate consequence for the City
of London was severe. It was the most important financial center in the
world in 1914, but by 1918 was eclipsed by New York, which still retains
the crown.
There are two main ingredients in the typical crisis story: excessive le-
verage and an interconnected financial system; in other words, too much
risk and a dangerous network structure. But we don’t need extreme risk-
taking for a crisis to happen. The 1914 crisis did not happen because of
too much risk. No, instead, the forces of instability preyed on the very
sophisticated financial system we depend on to give us efficient financial
intermediation and leverage, helping the economy grow—but only in
good times. That fantastic financial system also acts as the catalyst for
financial crises. And that means it is all about politics.
eliminated, all the standard things like excessive risk-taking, opaque finan-
cial instruments, and perverse incentives. In the Q&A session I asked the
panel whether it was possible that the governments also caused systemic
risk. The chairman of the panel shouted at me, “No, the governments are
the solution, not the problem.”
Systemic crises are serious. Hundreds of millions of people suffered
in the Great Depression, driving the politics that caused World War II.
People lost their jobs, their savings, and even their lives. Populism thrived.
No wonder we would like to prevent them. The first lecture of my LSE
course Global Financial Systems is about systemic risk. About twenty
minutes into it I ask the students the following question: “Those of you
who would want to live in a country without any systemic crises, please
raise your hands.” In a typical year about eighty out of one hundred stu-
dents will do so. If I reverse the question and ask them if they want to live
in a country with systemic crises, nobody raises their hands. If we were to
ask most politicians, journalists, regulators, and pundits the same ques-
tion, I suspect most would agree with the students.
It is straightforward to prevent systemic crises: both Cuba and North
Korea manage it quite well. Get rid of the financial system, and voilà, no
more systemic crises. Easy, but the costs are unacceptable. Who wants
to live in Cuba or North Korea? We need the financial system to take
risk. With risk comes failure, an essential part of a healthy economy; an
economy without failures does not take enough risk and does not grow
fast enough. While it is easy to regulate risk out of the financial system,
it is not easy to do so without killing economic growth at the same time.
On the other hand, too many bankruptcies and crises are also a sign
that something is wrong. The financial system could be too unstable,
with excessive risk, perhaps too much corruption, or money channeled
to unproductive uses for political reasons. There is a balance between
having too many and too few crises. We need to encourage the financial
industry to take enough risk so the economy grows while also preventing
too much risk from causing systemic crises. A classic risk–return trade-
off, and one that is not trivial.
Economic crises afflict poor, less developed countries more frequently
and severely than their rich and more developed counterparts, not least
because developing countries are more likely to suffer from bad economic
into big risks—voilà, crises are eliminated. That logic is fatally flawed. It is
true that the whole of the financial system is made up of everything that’s
going on within it, all the way down to individual transactions. We can
know all about the atoms that make up the human body and be experts
in biology, but all that knowledge tells us nothing about what makes a
person tick. It’s the same in finance. We simply don’t know how to add
up all the individual micro risks to get the risk in a portfolio, bank, or the
system. If we think we do, all we end up with is a false sense of safety.
We thought risk had magically disappeared by 2007 because all the
micro risks were under control. What we missed was that all the sophis-
ticated risk management techniques accomplished was to increase sys-
temic risk. The reason is simple. The financial system is, for all practical
purposes, infinitely complex, so no matter how intensively we study the
system and how hard we try to control it, we still can focus only on a
tiny part of it—systemic risk emerges precisely where we are not looking.
Paradoxically, the sophisticated risk management techniques increased
the complexity of the financial system, thereby creating new avenues for
crises to emerge.
Take US subprime mortgages, an important contributor to the 2008
crisis. The value of American subprime mortgages was about $1.3 trillion
in March 2007. In normal conditions only a fraction of those borrowers
were likely to default. In a disaster scenario, if half the borrowers default
and only half of the creditors’ money were recovered, total losses would
be $325 billion. While that figure sounds like a lot, it is quite small com-
pared to the overall size of the American financial market. Right before
the crisis in 2007 the outstanding volume of bonds in the United States
was $32 trillion, or twice the GDP, while the total value of the stock mar-
ket was $20 trillion, so a $325 billion loss is about 1.6 percent of the stock
market. A loss of that magnitude on the US Standard and Poor’s 500
index has happened on 1,274 days since 1929, or 5.5 percent of the time.
The actual subprime losses turned out to be much smaller.
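The back-of-the-envelope arithmetic in this paragraph checks out, as a short calculation shows (all figures are the ones quoted in the text):

```python
# Worst-case subprime loss, using the text's figures.
outstanding = 1.3e12   # $1.3 trillion of subprime mortgages, March 2007
default_rate = 0.5     # disaster scenario: half of borrowers default
recovery_rate = 0.5    # half of the creditors' money is recovered

loss = outstanding * default_rate * (1 - recovery_rate)
print(loss / 1e9)      # 325.0 -- i.e., $325 billion

# Against a $20 trillion stock market, that is a small fraction.
stock_market = 20e12
loss_share = loss / stock_market
print(loss_share)      # 0.01625, about 1.6 percent
```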
How can a worst-case potential subprime loss of $325 billion cause so
much damage when the stock market can suffer losses of $325 billion
multiple times without batting an eye? The reason is that stock market
losses are visible and expected. The mortgage losses were opaque; the
risk was hidden, and it caught almost everybody by surprise. I say almost
because some did anticipate it, like the heroes of Michael Lewis’s book
The Big Short.
Two thousand eight was not the first time a financial crisis played out
this way. Financial crises tend to be quite similar to each other, and the
crises of 1763 and 1914 have plenty in common with 2008. Why then did
2008 happen? Hubris and forgotten history. While the policy makers of a
hundred years ago were alive to the danger of systemic risk, their twenty-
first-century counterparts assumed the problem away. I recall being in
central bank conferences in the early 2000s and hearing most speakers
argue that the central banks should focus only on inflation. Financial sta-
bility was not something the central banks should be concerned with
because it is impure, sullying the pristine reputation of monetary policy,
so crucial for price stability. That is precisely why the financial authorities were caught by complete surprise in 2008. The Bank of England, deciding in
the early 2000s that all that mattered to it was monetary policy, proceeded
to close down divisions focusing on financial stability and regulations, so
when the crisis in 2008 started it was desperately short of expertise. The
then governor of the Bank of England, Mervyn King, said in August 2007
after the crisis was already under way and Northern Rock was failing,
“Our banking system is much more resilient than in the past. . . . The
growth of securitization has reduced that fragility significantly.” It wasn’t
only the central banks. Just about the only academic institution con-
cerned with systemic risk was the London School of Economics. Credit
goes to Charles Goodhart, who constantly reminded us that systemic risk
was important and worth studying. Still, it was not good from a career
perspective. I recall getting a referee report from a top journal on one of
my papers on crises in 2003, rejected as “irrelevant because the problem
of crises has been solved.”
The reaction to the Covid virus has its origins in the fumbling response
to the crisis in 2008. Having been caught short in 2008, the policy au-
thorities in 2020 responded with vigor, aided by strong political sup-
port and determined to prevent a repeat of 2008. Policy making is often
like that. Underreaction followed by a rearview-mirror-guided over-
reaction. Unfortunately, though crises are all fundamentally the same,
the details differ, and regulations target the details much more than the
fundamentals.
Covid-19 offers a fantastic example of the trade-off between safety and
growth: Do we lock the economy down to prevent the virus from spread-
ing but at the expense of very significant economic damage, or keep
everything open and hope for herd immunity? The answer to that ques-
tion shows why we need political leadership; the authorities concerned
with health tend to prefer lockdown, and the economic authorities to
keep things open. It is the job of the prime minister or president to arbi-
trate and decide on the best course of action. Directing the fight against
Covid-19 is not a job that can be delegated to officials.
We have the same debate in the financial system: Do we seek risk and so
deregulate the system, hoping for more growth, or do we nail the system
down to prevent crises? Both camps have their adherents, and it is the job
of the political leadership to decide.
The Covid-19 and 2008 crises have a lot in common. Like most, they
comprise a four-step process. Willful dismissal of the threats before the
crisis event, followed by a weak initial reaction. When things get so bad
they can’t be ignored, we get overreaction and eventually, as we gain
knowledge and experience, an uneasy balance between safety and growth.
But the differences between the two crises are more important. When the Covid-19 crisis was under way I wrote a piece with three of my LSE colleagues, Robert
Macrae, Dimitri Vayanos, and Jean-Pierre Zigrand, titled “The Corona-
virus Crisis Is No 2008,” in which we argued that these two crises were
quite different. The 2008 crisis originated within the financial system,
caused by its willingness to take risk, aided by the willful ignorance of the
dangers and excessive complexity. Covid-19 came from nowhere, certainly
not the financial system. While the policy authorities reacted strongly to
Covid-19, finance was a small part of that, and we haven’t seen any financial crisis come out of Covid-19.
So Covid-19 impacted the financial markets, but it cannot be called
systemic in a financial sense. What Covid-19 does do for us is to provide
a handy framework for thinking about financial regulations, what works
and what doesn’t work. I’ll return to that theme once I have erected the
necessary scaffolding to make the argument. But as a brief preview, recent
very strong political backing, all helped by questionable actions like long
solitary confinement for those accused but not yet convicted. In the end,
the bankers were convicted not for causing the crisis but for minor mis-
conduct relating to individual transactions: just like the US authorities
got Al Capone for tax evasion, not for being a mafia boss. The regulators
and most of the political leadership got off scot-free, even if they are as
much at fault as the bankers. The Central Bank of Iceland, which bore
so much blame for allowing the crisis to happen, fired the governors but
promoted everybody else. So, ironically, the career bureaucrats who were
responsible benefited.
The Icelandic case is an extreme form of what we have seen in other
countries. The prosecutors can go after bankers for specific misconduct,
some abuse of office, but they cannot convict anyone for causing a crisis.
Spain sent its former finance minister and IMF chief Rodrigo Rato to jail
for four years and six months for embezzlement. He was not convicted
for the failure of his bank (Bankia), rescued at huge expense by the Span-
ish taxpayer. He was not punished even for the losses suffered by two
hundred thousand savers whom he had persuaded to buy subordinated
Bankia bonds right before it failed. No, he misused his corporate credit
cards. One might wonder what the Spanish regulator was doing all along,
not only condoning but also actively encouraging such abuse of unso-
phisticated investors. Why aren’t they prosecuted?
The former CEO of Barclays, John Varley, is the only CEO of a global
bank to face charges because of his conduct in the financial crisis. How-
ever, that was only because of how he tried to save his bank, not for
getting it in trouble in the first place. The United States has sent at least
thirty-five bankers to jail for crimes related to the financial crisis, most
relating to small amounts of money at small banks for personal gain. One
person, known as Fabulous Fab—real name Fabrice Tourre—was con-
victed of crimes relating to structured credit products while working as a
junior employee at Goldman Sachs. He and everybody else convicted in
the United States were small-fry.
Even when there is clear abuse we cannot find anyone to blame. The
best case is LIBOR, the standard benchmark interest rate used to decide
on interest rates charged on mortgages and loans worldwide. It was sur-
prisingly easy to manipulate LIBOR, as it was based on an average of
banks’ estimates of market interest rates, and the employee making the
submission in each bank had some leeway in the number she came up
with. Is it 5.25 percent or 5.26 percent? A derivative trader who knows
which of these two numbers is to be submitted has more than an even
chance of profiting, and it is alleged that employees across banks col-
luded in their submissions, guaranteeing profits. The banks maintain it
was all the fault of rogue junior employees, and nobody higher up had an
inkling of the abuse. Still, they didn’t question the profits from manipu-
lating LIBOR. Did the regulators know? While strenuously denied, many
observers allege that banks deliberately posted excessively low numbers
during the 2008 crisis with the acquiescence of the regulators in order to
lower funding costs for banks in difficulty.
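The mechanics described above, a benchmark computed as an average of submissions with each submitter enjoying a little leeway, can be illustrated with a toy calculation. The panel size and the numbers are invented; actual LIBOR used a larger panel and a trimmed mean, but the leeway problem is the same:

```python
# Toy version of a submission-based benchmark: a simple average of
# eight invented submissions.
submissions = [5.25, 5.26, 5.24, 5.25, 5.27, 5.25, 5.26, 5.25]

def benchmark(subs):
    return sum(subs) / len(subs)

honest = benchmark(submissions)

# One submitter uses their leeway: 5.26 instead of 5.25.
nudged = submissions.copy()
nudged[0] = 5.26
shifted = benchmark(nudged)

# A one-basis-point nudge by one bank moves the benchmark by 1/8 of
# a basis point -- tiny per loan, enormous across the trillions of
# dollars of contracts tied to the rate.
print(round(shifted - honest, 5))  # 0.00125
```

A trader who knows in advance which way a submission will go, as alleged in the collusion cases, needs only this sliver of predictability to profit.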
The manipulation of LIBOR is very costly for lenders and borrowers,
and one might think the abuse would be severely punished. Not so. Em-
ployees in several banks were found to have been manipulating LIBOR,
and some banks have admitted to doing exactly that. Exactly one per-
son has, at the time of writing, been punished: a junior UBS and then
Citigroup employee who was sent to jail by the British authorities for
eleven years. No senior manager, financial institution, or regulator has
been prosecuted or convicted for the LIBOR abuse.
The reason is that it is really difficult to find anyone guilty in a court of
law. The prosecutors in the United Kingdom and the United States have
tried but mostly failed. The bankers might be responsible for causing a
crisis in the court of public opinion, but the Icelandic bankers, like their
counterparts in other countries, were careful not to break the law. Stupid-
ity is not a crime. Greed is not a crime. Legally manipulating the rules
is not a crime. Recklessness was not a crime in 2008, even though it has
become a criminal offense in the United Kingdom since.
The bankers, regulators, and politicians cannot be convicted of a crime
that does not exist, and we cannot (or should not) change the law retro-
actively. When we do a postmortem on financial crises, we see that the
bankers excessively expanded their banks, taking on too much risk, not
properly scrutinizing lending decisions, and ignoring liquidity risk. The
regulators looked the other way, and the politicians cheered the excesses.
It may be greedy, incompetent, arrogant, and immoral, but not, so long
as the letter of the law is complied with, illegal.
Systemic risk is the chance that the financial system does not do its job,
perhaps that a major financial crisis causes an economic recession. It happens because of a dilemma ever-present in the financial system.
We want economic growth, and that necessitates risk. With risk comes
the chance of failure and crisis. It is hard to prevent systemic risk because
it emerges in the most obscure parts of the financial system, exploiting
unknown vulnerabilities, making the policy makers’ job difficult—but
certainly not impossible. We know quite a lot about why crises occur and
how to prevent them. Yet they are all too familiar.
Do We Need Banks?
Banks are essential for the economy. They provide financial in-
termediation—the channeling of funds from one person or entity to an-
other across time and space. Banks reallocate resources, diversify risk,
allow us to build up pensions for old age, and enable companies to make
multidecade investments. At one end, a large number of savers put small
amounts of money into the banks, expecting their money to be safe and
available on demand. At the other end, the banks make a small number
of large thirty-year loans to companies building factories. Banks are also
dangerous and exploitative. They take advantage of their clients, fail, and
cause financial crises. The response of society is to enjoy the benefits of
the banks while also regulating them heavily.
Easy in theory, hard in practice. If we don’t want banks to fail, they can
make only safe loans, and that means lending only to risk-free govern-
ments—if we can find any. If the banks are too safe, the cost of thirty-
year loans would be too high while interest on bank deposits would be
too low, so people would not save and companies would not borrow.
The factories would not get built and the economy would not grow. We
could not borrow money to buy a car or a house, nor save for old age.
The argument is reminiscent of that made by the Peruvian economist
Hernando de Soto in his book The Mystery of Capital, that a system of
property rights and enforceable legal contracts is a necessary condition
for economic prosperity.
The result is one of the dilemmas we find so often in the financial sys-
tem. If we make the banks too safe, we throttle economic growth. If we
are too eager and unshackle them, the banks fail. Thus we are continu-
ally debating the trade-off between safety and growth. After the Great
Depression we heavily regulated the banks, then deregulated them after
the Bretton Woods system collapsed in 1973, and turned back to regula-
tions after the 2008 crisis. There are early indications that the regulatory
pendulum has now started to swing back, with banking regulations to be
relaxed yet again. That process was already under way when Covid-19 hit in 2020, and the pandemic accelerated it.
There is nothing unique about the safety or growth debate in the bank-
ing system. It applies to most areas of the public domain, like speed lim-
its, and today it is most strongly manifested in the debate over how to
respond to Covid-19. Do we lock everything down to halt the virus, even at the cost of killing the economy?
Banks are inherently fragile, but it is hard to address that fragility. Sup-
pose a chocolate maker is in difficulty. If the business’s debts exceed its
assets, it is insolvent and will be shut down. Most likely, someone else will
buy the factories and continue production. And if not, competitors will
happily step in. The disruption to society is minimal; shareholders and
perhaps employees lose out but not many others.
Banks are different. They can fail even when prudently run and, con-
versely, remain solvent so long as they are trusted. If I believe my bank is
well managed and properly regulated, I will keep my money in a bank so
it can continue operating. If, however, I lose faith in the bank and take
my money out, that, by itself, can cause the bank to fail, even if it is pru-
dently run and solvent.
A bank run can happen even if there is nothing wrong with the bank. All it takes is for depositors to get worried; then the run becomes
a self-fulfilling prophecy. I remember seeing a news item on CNN con-
cerning a bank about to be closed down. The journalist made a mistake,
using an image of a different bank. That is all it took for the other bank
to be hit by a run.
Why are banks fragile? Two main reasons. The first is bank runs. If a
sufficient number of depositors want their money back, the bank cannot
fulfill all those requests because most of its assets are tied up in long-
term loans. I can take my money out of the bank anytime I want, but my
bank cannot and should not be able to call in its thirty-year loans when
it pleases.
My favorite description of a bank run comes from the 1946 movie
It’s a Wonderful Life, featuring James Stewart, who plays a banker faced
with a bank run during the Great Depression. In the following scene, he
addresses the angry crowd demanding their money back. Though the
transcript doesn’t do it justice, you can (at the time of writing) find it on
YouTube: “No, but you, you . . . you’re thinking of this place all wrong.
As if I had the money back in a safe. The money’s not here. Your money’s
in Joe’s house . . . right next to yours. And in the Kennedy house, and
Mrs. Macklin’s house, and a hundred others. Why, you’re lending them
the money to build, and then, they’re going to pay it back to you as best
they can. Now what are you going to do? Foreclose on them?”1
When a bank is hit with a run it may fail. It is not insolvent as it has
more assets than liabilities, but it is illiquid—it cannot convert its assets
into cash on demand. It is a little like a situation in which I suddenly
need a large amount of money, far more than what I have in my bank
account. While I own a house in London, it will take time to sell it,
and I cannot satisfy an immediate demand for a large amount of cash. If
depositors become worried about their bank’s solvency—perhaps it has
made too many bad loans—depositors will want their money back. If
enough depositors agree, they will queue up in front of the bank to get
the money—the classical definition of a bank run.
The second fragility arises from the way banks create money.
Every country in the world uses fiat money: money that is the creation
of the central bank as an agent of the government and underpinned by
its stability and reputation. Money takes many forms. The central banks
create the monetary base, M0, consisting of money held on account
with the central bank plus the total amount of physical money: notes and
coins. That is, however, only a fraction of the overall amount of money
in the system, the reason being we have a fractional reserve system where
the banking system creates money. Suppose the reserve requirement is
1 percent. If I deposit €100 into a bank account, the bank has to hold on
to €1 (reserve requirement) and can lend out €99. I still own my €100 and
can spend it whenever I like, but now the borrower has €99, and she can
spend it also whenever she likes: together we have €199 ready cash, known
as M1. If the borrower then leaves the money in her bank account, her
bank can lend out 99 percent of that, €98, and that can, in turn, be further
lent out, etc., etc. This is the most basic way the banking system creates
money. So what are the amounts?
In the eurozone in August 2018 base money was €3.2 trillion. M1, the physical money in circulation plus demand deposits, was €8.1 trillion; M2 (€11.1 trillion) further adds savings accounts; and M3 (€12.0 trillion) adds money locked up for a period: time deposits, institutional money market funds, short-term repurchase agreements, and other, similar assets. Every euro created by the European Central Bank becomes €3.4 when the banks are done with it. Even that captures
but a fraction of the money in the system because as we go about our
daily economic life we constantly create new money simply by borrow-
ing and lending. Nobody knows how much money is in the system, and
certainly nobody controls it. That is why it is so hard to control inflation.
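The relending process described above compounds geometrically, and a few lines of code (an illustration, not from the book) make the arithmetic concrete. The 1 percent reserve requirement and the €100 deposit are the text's own example; note that the theoretical limit, the deposit divided by the reserve ratio, far exceeds observed multipliers such as the €3.4 cited above, since banks hold excess reserves and some lending leaks into cash.

```python
# Fractional-reserve money creation: each round, the bank keeps the
# reserve requirement and lends out the rest, which is redeposited.

def broad_money(deposit: float, reserve_ratio: float, rounds: int) -> float:
    """Total deposits outstanding after `rounds` deposit-and-relend cycles."""
    total = 0.0
    amount = deposit
    for _ in range(rounds):
        total += amount                # money now sitting in deposit accounts
        amount *= 1 - reserve_ratio   # the portion the bank can lend on
    return total

# Two rounds reproduce the EUR 199 from the text.
print(round(broad_money(100, 0.01, 2)))       # 199
# In the limit the geometric series sums to deposit / reserve_ratio.
print(round(broad_money(100, 0.01, 10_000)))  # 10000
```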
The amount of money has a direct impact on the fortunes of the econ-
omy. If the economy is growing rapidly, the supply of money needs to
grow with it to prevent deflation. If, however, the supply of money col-
lapses, there is not enough money to maintain economic activity, and
the economy goes into recession. Now you see why it is dangerous when
banks are clamoring for liquidity in a crisis. They take all the higher forms
of money, M1, M2, M3, and beyond and convert them into M0. The sup-
ply of money is collapsing, taking the economy down with it.
These two vulnerabilities, bank runs and the nature of money, come
together in banking crises. A bank run can lead to cascading failures be-
cause depositors might view a single bank’s bankruptcy as a symptom of
system-wide difficulties. Depositors have limited information about the
quality of banks’ assets and may feel that if hidden problems have been
allowed to fester in one bank, they might also be present in other banks.
They see bankers as incompetent and greedy and the regulators as bun-
gling and even corrupt.
The best-known example of bank runs causing a systemic crisis hap-
pened during the Great Depression of 1929 to 1933. The United States
lost over a third of its banks to bankruptcies, and people preferred to store
their money under their mattresses rather than keeping it in banks—the
backdrop to It’s a Wonderful Life. The panic originated in 1931 with the
Bank of United States (which specialized in immigrant clients) mislead-
ingly and deliberately implying by its name that it was government-
owned. In early 1931 rumors spread that it might be in trouble, and depositors ran.
Some banking crises are caused not by runs but by outright fraud. The best examples I know of (please get in touch if you know of better ones) come from Latin America: Venezuela in 1994 and the Dominican Republic in 2003, in both cases involving banks that were so large as to
be systemically important. The banks did not record deposits as liabilities;
instead, the insiders (senior management and owners) looted the banks
from within by stealing their assets. Because of the banks' systemic importance, the central bank felt it necessary to compensate depositors, at the high cost of destabilizing the macroeconomy. In Venezuela the
rogue bank paid high interest rates, forcing other banks to do the same,
thereby weakening the entire banking system. More recently about a bil-
lion dollars disappeared over two days from three Moldovan banks that
had been taken over by unknown owners in 2012. The money was sent to
British- and Hong Kong–based companies with equally opaque owner-
ship. The Moldovan government was forced to bail out the banks at the
hefty cost of 15 percent of GDP.
Other crises have been caused by external events like wars, natural di-
sasters, or a change in the political system. Even viruses. Some have their
roots in disastrous economic policies, as in Venezuela today. However,
these origins of banking crises are uncommon and, since they are caused
by outside forces, hard to prevent. Even if the banks are safe, the crisis
storm can be so strong that it destroys everything in its wake, leaving little
for financial policy to do except to mitigate the worst.
The most common type of banking crisis is caused simply when banks
make too many bad loans. There is no reason such a crisis couldn’t be pre-
vented—in theory. There are plenty of signs, including excessive buildup
of credit, a boom in real estate, relaxed lending standards, and economic
well-being that is at odds with the economy’s fundamentals. But it is very
hard to do something about it.
Banking crises remind me of the old Road Runner cartoons in which
Wile E. Coyote is chasing the Road Runner and, on occasion, Coyote
runs off a cliff. He keeps running for a few meters until he looks down
and sees there is nothing but air beneath him. Bankers do that too (Fig-
ure 6). Like Wile E. we believe everything is okay until it is too late,
and there’s nothing we can do except manage the crisis. The reason was
voiced best by the former CEO of Citigroup, Chuck Prince, who, when
asked before the 2008 crisis why nobody stopped all the excess, said, “As
long as the music is playing, you’ve got to get up and dance.”3
Booms precede most banking crises. Everybody enjoys the benefits.
The economy is growing. Everybody feels richer. The politicians, policy
makers, and bankers must be geniuses. The financial system tells us what
we want to hear: we are doing the right thing, and we are really, really
smart. All agree until it all goes horribly wrong. Some know better. Ex-
perts in the central banks, inquisitive journalists, and the bankers who
write the crazy loans. But it is not in their interest to issue a warning. If
they do, they risk being denounced, losing their income, and even being
subjected to prosecution. So nobody speaks out, and the party continues.
The head of the Federal Reserve wrote in the 1950s that the Fed's
most important job is “to take away the punch bowl just as the party gets
going.”4 It is really hard to do so.
One of the main causes of banking crises is financial liberalization. It
can be enormously valuable for a country to become a global financial
center. The City of London and Amsterdam were the key to the success
of the British and Dutch empires and their prosperity ever since. On the
face of it, liberalization can seem like a sensible idea. Enjoy the fruits of a
financial system that is open for business, welcoming companies from all
over the world. In some countries the attraction is to unshackle a lethargic domestic banking system. In the United Kingdom the financial sector contributed £71.4 billion to the tax take, 11.5 percent of government revenue.
No wonder many countries aspire to do the same. That said, it is not as
easy as it sounds, and the execution has to be just right. A common mis-
take is to reduce oversight and activity restrictions but maintain implicit
or explicit government guarantees such as deposit insurance. This creates
a nasty moral hazard problem because it can enable financial institutions
to borrow cheaply and use the money for high-risk activities, all implic-
itly or explicitly underwritten by the taxpayer. This is what happened in
many Asian countries, such as Thailand and South Korea, that got into a crisis in 1997. Their banks borrowed abroad, and because of government
guarantees and lax oversight the banks did not care whom they lent the
money to. Not surprisingly, it did not end well. It is much better to learn from the one country that has recently succeeded in becoming a financial center, Luxembourg. I was once in a panel discussion there, along with
the governor of the central bank and other experts. When the governor
learned I was from Iceland, he snickered and said that Icelanders forgot
the first lesson of creating an offshore financial center: protect the coun-
try from financial institutions’ failures.
The typical outcome is that following liberalization, banks overexpand,
artificially inflating asset prices, creating positive feedback loops among
bank lending, market prices, and profits. Meanwhile, the banks lack ex-
perience in managing risk and are disdainful of risk management, seeing
it as a loss-maker that gets in the way of making money. The regulators
are similarly ill-prepared, focusing on the positive outcomes while missing
the signs of excessive risk because they have never been there before. And
the political leadership always keeps the regulators on a short leash, not
allowing anything to spoil the party. Meanwhile, government policies are
accommodating, interest rates are low, and governments cut taxes and
increase expenditure because the economy is booming.
The financial crises in Iceland and Ireland in 2008 had precisely these
roots. Even countries with a culture of a liberalized, competitive financial
system can make the same mistakes, like the United States in the Sav-
ings & Loan (S&L) crisis. Mortgages were the raison d’être of the S&L
banks—banks like that used to be common in many countries, often
under the name of savings banks or something similar. The crisis started
when the sleepy S&Ls found it difficult to cope with all the financial
turmoil in the 1970s. Inflation was high and increasing, interest rates did
not keep up, and banks that focused on collecting deposits and making
mortgages suffered increasingly large losses. The financial authorities de-
cided to deregulate the industry with the view that the S&Ls would grow
their way out of trouble; at the time, deregulation was de rigueur in the
United States. The intention was to allow the S&Ls to expand into the
parts of banking services previously closed to them.
The financial authorities allowed the S&Ls to use lenient accounting
rules, eliminated restrictions on the minimum numbers of stockholders,
and reduced the level of oversight. However, crucial to the story, the
government continued to provide deposit insurance, guaranteeing that
depositors would get their money back if the S&Ls went bust. Many
S&Ls took advantage, often falling into the hands of rogue bankers. The
best known was Charles Keating, who got into the business because “I
know the business inside out, and I always felt that an S&L, if they’d relax
the rules, was the biggest moneymaker in the world.”5 Initially, his S&L,
Lincoln, grew rapidly, but its investments, not least in real estate and
junk bonds, turned out not to be as good as he’d hoped for. Eventually,
Lincoln was closed down, with the cost to taxpayers exceeding $3 bil-
lion. Keating, for his troubles, lost all of his wealth and spent 4½ years
in prison. Eventually, the total cost of resolving all the failed S&Ls was
$160 billion, including $132 billion from taxpayers.
After the United States lost so many of its banks in the Great Depres-
sion, it took Franklin Roosevelt’s election as president to change things.
He pushed Congress to pass the 1933 Banking Act in June 1933, establish-
ing the Federal Deposit Insurance Corporation (FDIC), which provided
insurance coverage for deposits up to $2,500. This proved quite effective,
and the bank runs stopped. Europe had its share of bank runs in the Great
Depression, not as many as the United States, but, unlike those in the US,
the most serious were deliberately caused by the French government. It
all started when the German and Austrian governments announced their
intention to create a customs union in March 1931. The French did not
like that very much and forced their banks to run the Austrian banks—in
effect, run Austria. France could do that because it had been deliberately
undervaluing its currency for quite some time and, as a consequence,
had the biggest gold reserves in the world. The largest bank in Austria, Creditanstalt, collapsed in May 1931.
In the traditional model, people deposit money in banks that then make mortgages. Not Northern
Rock. It found a new model (Figure 7). Suppose Northern Rock bor-
rowed £100 million for three months from the wholesale markets, using
the money to make one thousand mortgages. It then bundled the mort-
gages into a structured credit product that it sold to investors and used
the proceeds to repay the three-month £100 million loan. Highly profit-
able. But there was hidden liquidity risk.
All was fine so long as Northern Rock was able to sell the mortgages—
if it could not, the bank would be forced to default on the initial three-
month loan, which eventually happened in the summer of 2007. It wasn’t
Northern Rock’s fault when the credit markets worldwide froze; it was
said that investors went on strike, just like their Amsterdam ancestors
back in 1763. Nobody wanted to buy structured credit products, and the
first victim was Northern Rock. It took a few months, but its impend-
ing demise was known to everybody in the credit markets except, as it
appears, the bank’s regulators: the Financial Services Authority and the
Bank of England. Eventually, the authorities tried to resolve the crisis
behind the scenes.
The Bank of England then made an unfortunate mistake, announcing in September 2007 that Northern Rock was in difficulty and receiving support from the Bank of England. The Bank's decision makers seem to have
believed that the public would find the announcement reassuring—all is
fine, no reason to panic, we know what we are doing and are here to pro-
tect you. It didn’t quite work out that way, and the following day people
around the United Kingdom queued up to get their money out of North-
ern Rock—the first bank run in Britain for a century and a half. The Bank
of England should have known better. The same mistakes happened with
the Reconstruction Finance Corporation in the Great Depression and
many times before and since. We have to trust the banks, regulators, and
the government. Unfortunately, by the time the crisis comes along, the
trust has long evaporated. Not surprisingly, Northern Rock’s retail clients
called the bluff.
Northern Rock went through two waves of bank runs, first by the sophisticated wholesale investors in the summer of 2007 and then by unsophisticated retail depositors in September. The wholesale run shows that the financial markets had a much better understanding of the bank's problems than either the supervisor or the general public. One reason for the retail run was that the British deposit insurance scheme was relatively weak, and
depositors felt they had no choice but to queue up and take their money
out. After the first £2,000 of deposits, the scheme protected only 90 per-
cent of savings of up to £33,000—guaranteeing a maximum payout of
£31,700. Even worse, it would take several months to get the money. This
so-called coinsurance was intended to incentivize depositors to monitor
the banks. Well, given that the Financial Services Authority missed the
problems at Northern Rock, it would be surprising if ordinary deposi-
tors with less information could do better. The only sensible strategy for
depositors was to run the bank. Contrast this with the American deposit
insurance scheme and my friend’s experience in Houston. The Northern
Rock run would not have happened there.
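The payout arithmetic of the old UK scheme can be checked with a short sketch (an illustration, not from the book; the £2,000 and £33,000 thresholds and the 90 percent coinsurance rate are those given above):

```python
# Pre-2007 UK deposit insurance: the first GBP 2,000 fully protected,
# the next GBP 33,000 protected at only 90 percent, nothing beyond.

def insured_payout(deposit: float) -> float:
    fully_covered = min(deposit, 2_000)
    coinsured = min(max(deposit - 2_000, 0), 33_000)
    return fully_covered + 0.9 * coinsured

# A depositor with GBP 35,000 or more recovers at most GBP 31,700.
print(insured_payout(35_000))  # 31700.0
```

For any deposit above £35,000 the formula caps out at the £31,700 maximum the text cites, which is why running the bank was the only sensible strategy for larger depositors.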
With the benefit of hindsight, it is clear that Northern Rock’s failure
was inevitable given time. It remained solvent only so long as it could borrow without limit. And because there were no limits on borrowing,
Northern Rock and most other banks in the world behaved in a way that
ensured that credit would dry up. The failure of Northern Rock was a
self-fulfilling prophecy.
In response, the British authorities overreacted, announcing unlimited
deposit insurance to reassure bank clients and prevent contagious runs
across the United Kingdom. This demonstrates the problem created when the authorities have inadequate policies in place and are then confronted with a crisis.
It was widely believed that European sovereign debt was risk free, that somehow it was all insured by the
collective. This rather unfortunate misunderstanding was reinforced by
law in a European Union directive stipulating that sovereign debt was,
and is, recorded as risk free in banks’ accounts. The European Commis-
sion told the Cypriot banks that they were to consider Greek government
bonds safe on their books, and that is where they put their money. Think
about it for a second. Sovereign debt is, in fact, risky, and governments
are free to default on it, but the banks buying the debt are obliged to
consider the debt entirely safe. Talk about mixed signals.
When the crisis started, 37 percent of the Cypriot banks’ deposits be-
longed to foreigners, 80 percent of whom were Russians. Then the Greek
sovereign debt crisis happened, and in the second bailout, in 2011, the
owners of Greek sovereign debt were made to suffer a 50 percent haircut.
At that moment it became clear that the Cypriot banks were the walking
dead, so a slow-running but accelerating bank run ensued. As so often is
the case, those in the know took their money out first. It didn’t happen
quickly: Greece defaulted in early summer 2011, and it wasn’t until early
spring 2012 that the Cypriot banks finally started to run out of money.
The Cypriot government refused to recognize the problem at first, and
the Troika did not press the issue. A crisis was clearly about to happen,
and nobody did anything. Not the Cypriot government, not the IMF,
not the European authorities. Still, that is not the dumbest mistake they
made, as a dumber one was to follow.
Now it became really interesting. The typical way to resolve a failing
bank is to let junior creditors take the first loss. However, the Cypriot
banks didn’t have any bondholders—all they had was depositors. That
left the government in a quandary. Whom to hit for the money? Un-
der European regulations, depositors with less than €100,000 in a bank
are fully insured. In the crisis meeting with the Troika and Ecofin, the
Cypriot authorities maintained they did not want to excessively hurt the
foreign depositors, so key to the country’s business model, one from
which the political and regulatory leadership personally profited. Instead,
they opted to hit all depositors with a 6.75 percent tax, including insured
deposits.
Why would they do such a thing? So that offshore clients would not
suffer, as otherwise they might leave and never come back. Why would the
European authorities agree? The best explanation is that the emergency
meeting was held on 16 March 2013, and by four in the morning, when
the decision was made, the policy makers were really tired. When the
press release eventually came a few hours later, it was pointed out that
insured depositors throughout Europe would no longer think they were
fully protected, so everybody would rush to run the banks when the next
crisis came along. Not surprisingly, the authorities quickly backtracked.
It was perhaps the dumbest policy blunder in the whole crisis. Given that
everybody knew for almost a year that the banks were failing and that
the importance of confidence in preventing bank runs is well known, it is
incomprehensible that no authority, whether Cypriot, European, or the IMF, prepared for what was an entirely foreseeable and avoidable crisis.
Banks and governments have a symbiotic relationship. Banks are always
encouraged (and usually required) to buy government debt. Banks are
also the primary source of financing for small and medium-sized compa-
nies, the main driver of economic prosperity, and a source of significant
tax revenues. In good times this relationship is virtuous. Profitable bank-
ing and increasing risk go hand in hand with growth and rising govern-
ment revenues. This is when governments should pay down their sover-
eign debt, the one saving grace of the Irish and Icelandic economic policy
before their 2008 crisis. If only Britain and the United States had done
the same thing.
This virtuous cycle can quickly turn vicious, and it can be the fault of
the banks, the governments, or both. A crisis in the banking sector will
put government finances under considerable strain: directly because of
the cost of providing bailouts and indirectly since banks slow down their
financing of the economy. If things get especially bad, a banking crisis can
culminate in a sovereign debt crisis, where the government cannot meet
its obligations. That is what happened in Ireland and nearly in Spain and
Iceland too.
Similarly, a government facing financial difficulties can cause problems
for the banking system. It may need to increase taxes or spend less, which
then slows down the economy. A more direct channel is when banks
own a lot of government debt. If the government’s credit rating gets
downgraded, the banks’ holdings of sovereign debt are immediately and
adversely affected, so their riskiness goes up. Governments know this, and
most have a law saying that government debt is risk free. However, that does not make it so.
Economists estimate output losses by extrapolating from the trend of real GDP before a crisis and calculating the difference between the trend prediction
and actual outcomes. We can see the direct and indirect costs in the IMF
crisis database. The costliest crisis was Indonesia’s in 1998, with direct
costs of 57 percent of GDP, and the highest indirect costs happened in
the Finnish 1991 crisis, at 75 percent of GDP. However, while the direct
impacts are fairly accurately measured, the indirect impacts are likely to be
overstated. The reason is that economic growth tends to be abnormally
high in the years before the crisis because of the financial excesses that
culminated in the crisis. That is why many commentators on the 2008
crisis overstate their case. The real GDP in the United Kingdom is pre-
sented on the accompanying graph (Figure 8). If one directly extrapolates
from economic growth before the crisis, the indirect cost would be £212
billion. However, if a more prudent policy path would have led to lower
growth, there actually might have been a gain from the combined precri-
sis excess and postcrisis decline. There is no right way to identify which
outcome is more likely.
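The trend-extrapolation method just described can be sketched as follows; the GDP figures are made up purely for illustration, not taken from the IMF database or the UK example.

```python
# Indirect crisis cost: fit a linear trend to pre-crisis real GDP,
# extrapolate past the crisis, and sum the gap to actual outcomes.

def linear_trend(years, values):
    """Ordinary least-squares slope and intercept."""
    n = len(years)
    mx, my = sum(years) / n, sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, values))
             / sum((x - mx) ** 2 for x in years))
    return slope, my - slope * mx

def output_loss(pre_years, pre_gdp, post_years, post_gdp):
    slope, intercept = linear_trend(pre_years, pre_gdp)
    return sum(slope * x + intercept - y
               for x, y in zip(post_years, post_gdp))

# Hypothetical: GDP grows 100, 103, 106, 109, then stalls at 109, 110.
loss = output_loss([2004, 2005, 2006, 2007], [100, 103, 106, 109],
                   [2008, 2009], [109, 110])
print(loss)  # the trend predicts 112 and 115, so the shortfall is 8.0
```

As the chapter stresses, the answer hinges entirely on whether the pre-crisis trend was itself sustainable: feed in an inflated boom-era trend and the estimated loss is overstated.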
The IMF crisis database was initially developed by the World Bank, and
some of the early developers of the database, including Patrick Honohan
and Daniela Klingebiel, published an interesting paper focusing on the
fiscal costs of banking crises and how the various government responses
contribute to the cost of resolution. They find that the fiscal costs are much lower in countries that do not extend unlimited deposit guarantees, open-ended liquidity support, repeated recapitalizations, debtor bailouts, and regulatory forbearance.
The Swedish approach in its 1992 banking crisis was to split each failing bank, separating the dodgy assets into one institution (the bad
bank) while keeping most of its operations and solid assets in the good
bank. The bad bank becomes an asset management firm; the government
aims to sell the good bank but hold on to the bad assets until they expire.
If the dodgy assets are valued cheaply during the crisis when the bad bank
is established, as they tend to be, the government potentially can make
significant profits, an argument often used to justify this approach to tax-
payers. However, if the bank was insolvent, the bad bank must, by defini-
tion, have a negative value, so a profit for the government is not the ex-
pected outcome. Realistically, the taxpayers should expect to lose money,
but hopefully the benefits from having a well-functioning bank replacing a
failing bank outweigh the expected loss. The Swedes eventually kept their
crisis costs to 4.3 percent of GDP, a fraction of what was initially feared.
It is difficult to implement such ruthless efficiency because all the spe-
cial interests pull it in different directions—everybody wants a bailout.
Most governments find it hard to follow the Swedish example, and two
factors determine whether they can. The first is how much faith we have
in the government’s ability to implement sensible policies that benefit
us in the long run. Do we trust the government to do the right thing, or do we see it as incompetent or corrupt, its every initiative to be resisted no matter how sensible? The second is the degree of democracy, that is,
whether the government has to take the will of the people into account.
In democracies where the trust in government is low, like Italy, it is dif-
ficult to deal with banking crises, while the Swedish government was able
to make difficult decisions in the 1990s because the Swedes trusted their
government.
The trade-off can be easily depicted (Figure 9). The financial authori-
ties need to be competent, determined, and given sufficient political cover
to deal efficiently with a financial crisis. Muddling through and wishing
the problem would go away are more common, the end result often be-
ing zombie banks. That is when banks are insolvent, but the governments
cannot close them down, like parts of the Italian banking system today.
The term “zombie bank” was first used when discussing the S&L crisis in
the 1980s, but it became widespread when referring to the Japanese crisis
in the 1990s. Instead of shutting down or recapitalizing its failed banks,
the Japanese government kept them on life support. Banks were allowed
to—even expected to—keep nonperforming loans on their books as if
they were performing, that is, lending to failed borrowers, just so they
could repay old loans: a tactic known as evergreening. This steadily weak-
ened the banks and denied credit to the better-performing parts of the
private sector. The Japanese authorities’ inability to effectively deal with
their banking crisis is one of the main reasons the Japanese economy has
been stagnating ever since.
The contrast between the Swedish and the Japanese crises shows what
the authorities should do: prevent the emergence of zombie banks at all
costs; restructure or shut down failed banks as quickly as possible; and
let the banks’ creditors and shareholders take the losses. To put it simply,
the two main objectives of banking crisis resolution are to keep the banks
serving the real economy and to minimize the cost to the taxpayer.
That is what the Swedes did better than anybody and that explains why
their resolution of the 1992 crisis remains the gold standard.
In the 2008 crisis and the subsequent European crisis, some govern-
ments, including those of the United States, Switzerland, and the United
Kingdom, actively tried to prevent the emergence of zombie banks.
Other European countries, like France, Italy, and Germany, have found
that more difficult, lobbying in the G20 for lenient regulations. That
lobbying reflects their banks' weakness, which is why those banks are
becoming ever more uncompetitive. Continental European banks are in retreat, American banks
in ascendance. The worst problems are in Italy and other Mediterranean
countries, which have a large zombie banking sector they find difficult to
deal with.
Banking crises are not complicated. They have plagued us for centuries,
and we know why they happen and how to prevent them. Still, they are
depressingly frequent. Even when there are clear signs of excesses that in
all likelihood will lead to a crisis, there is little willingness to do something
about it. We enjoy the party too much. What we can do is monitor the
banks more closely and force them to measure and manage risk prop-
erly. A vast edifice has been created for precisely this purpose. Still, crises
happen.
A few years ago, when taking a London black cab, I saw the cab-
bie reading one of my favorite books on risk, Peter Bernstein’s Against the
Gods. We started chatting, and I mentioned the book I was then writing
(what you are now reading). He then told me a great story about why it
is difficult to control risk. He had just made a trip to Canary Wharf (the
London financial district) and got stopped in a security check. My driver
was really surprised when he saw that the only cars being searched were
London black cabs. Upon asking the security guard why, he got the
answer: "Because I have a family." The point is that the most unlikely type
of terrorist in London is a driver of a black cab, so if one wants to avoid
danger, one searches the cabs.
Okay, I once played football on the M25 motorway when stuck because
of an accident. While many things could account for the difference, I sus-
pect a major one is the intensity of supervision, the enforcement of traffic
laws. India and Britain have similar regulations (traffic laws) but different
supervision (enforcement of traffic laws). Without efficient enforcement,
the regulations become meaningless.
Financial regulations are broadly similar in most countries in the world.
They tend to take the lead from the G20, the group of the world's largest
economies that represents about 90 percent of global GDP; for example,
the Basel banking regulations, securities regulations, and insurance regu-
lations. So, regulations are broadly similar. It’s different with supervision.
Countries interpret the regulations differently, many authorities do a de-
cent job, some are incompetent, others have been captured, and they are
often too underfunded to do the job properly. What is really needed for
effective supervision is the risk Panopticon.
The intellectual origin of the risk Panopticon is the work of the
eighteenth-century English philosopher Jeremy Bentham. It all started
when he visited his brother, who was working in Russia. What interested
Jeremy was that his brother had come up with an ingenious way to man-
age his unskilled employees. He set up observation posts in the middle of
his workshops, allowing inspectors to monitor the workers without being
seen. Jeremy thought this a splendid idea and proposed using it for other
human activities such as schools, factories, and even hospitals. It is his
proposal for prisons that got the most traction, the Panopticon, from the
Greek παν (“all”) and οπτικος (“seeing”). If you find yourself in London,
you can go see Jeremy Bentham's body, still on display at University
College London.
In the Panopticon the prison cells are located in buildings that sur-
round a central monitoring tower, allowing the prison guards to observe
the prisoners without being seen themselves. Because the prisoners never
know if they are being observed, they assume they are and thus behave
properly, allowing a small number of prison guards to keep watch over
many prisoners. Not too different from all the CCTVs installed in our
cities. The only remaining job is to prevent the prison guards from abus-
ing the convicts, accomplished by allowing the public to inspect the jails
any time they want.
I once saw the power of the Panopticon in action while driving from
Spain to Portugal. The traffic laws are the same on both sides of the bor-
der, and the speed limit on the motorway was 120 km/h. On the Spanish
side everybody drove at 120 or below. As soon as they crossed the border,
their speed increased to an average of 150. The reason is simple: the Span-
ish traffic police had unmarked cars checking speed while the Portuguese
police did not. The experiment’s simplicity rules out any other factors; it
involves the same drivers and the same cars. They did not start disregard-
ing their personal safety or acquire better cars upon crossing the border.
The only thing that changed their behavior was the chance of getting a
ticket on the Spanish side.
The one area where the Panopticon view of regulation has been par-
ticularly successful is in public transport. Most bus systems are based on
users’ contactless cards, so one is left with two options: either buy a ticket
and travel legally or commit fraud and hope to get off the bus without
being caught. What determines one’s choice? Morality and individual at-
titude to risk. Getting caught can be painful. Just ask Jonathan Burrows,
a lapsed fund manager at BlackRock, caught in November 2013 going
through a London Underground ticket gate, not having paid the right
amount. He admitted to regularly traveling without the right ticket, and
the train company estimated the amount of money he had dodged at
£43,000. After he was caught he did not inform his employer as he was
supposed to. The relevant financial regulator, the Financial Conduct Au-
thority (FCA), took a dim view and declared him “not fit and proper,”
ending his twenty-year career in finance. He did not need to avoid the
£21.50-a-day fee, as, by all accounts, he was well compensated by Black-
Rock and owned two mansions worth £4 million.
Burrows was just exploiting a loophole. His local station, Stonegate,
an hour and twenty-two minutes from his final stop in the City of Lon-
don, is rural and has no ticket barriers. So he took advantage, tapping his
Oyster Card only when arriving. This meant that he paid only the price
of the Underground, not the full fare from Stonegate to the London Un-
derground stop. So why would the FCA care about a simple fare fraud?
Because Burrows was in a position of trust at BlackRock, so how could
he be trusted with other people's money if he so casually committed
petty fraud? What I find surprising is, why bother? I have my Oyster Card
checked a few times a year, and the fine if caught does not seem to be
worth the benefit of cheating, not to mention losing a glittering career as
a BlackRock fund manager.
Modern financial regulations very much fit into this Benthamite view
of the world. Financial institutions release large amounts of information
to the public authorities in the full knowledge that the authorities can
look only at a small fraction of all that information. But the banks don’t
know where the authorities are looking, and, even more important, how
information released today will be used in future investigations. The reg-
ulators can inspect the banks without being seen. The problem is that
there can be too much of a good thing, including too much data. So, how
well does the Panopticon work for the financial supervisors? It depends
on the objective. It might help the micro regulators focused on individual
banks, protecting individual bank clients. However, what it misses is the
risk that arises from market participants’ interactions in times of stress—
most important, systemic risk.
Regulating finance has proven far less successful than regulating other
human endeavors, like traffic. We have recently seen
plenty of banking crises and abuse, and it is not possible to make a cred-
ible case that the banking system, unlike traffic, is better behaved now
than it was in the past.
Why is regulating banks so much harder than regulating traffic? The
problem is that the financial system is one of the most complex of all hu-
man constructs, and the financial entities have strong incentives to mis-
behave in a way that is undetected. The private incentives of bankers are
not well aligned with the interest of society at large. There is little down-
side for bank employees working with other people's money (dismissal,
at worst), but they can enjoy a significant upside in terms of high salaries
and bonuses. Hence bankers have an incentive to take more risks than
desired by their employers, clients, shareholders, or society. Banks
have significant advantages over their clients. They sell sophisticated fi-
nancial products to people who probably have only a very rudimentary
knowledge of finance, perhaps not even understanding basic percentages
or present value calculations. An OECD study found that 40 percent of
the population does not understand diversification, and only 27 percent
are able to both calculate simple interest and recognize the added benefit
of compounding over five years.
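The compounding point is easy to miss, so here is a minimal sketch contrasting simple and compound interest over five years; the principal and rate are made-up numbers, not figures from the OECD study:

```python
# Simple vs. compound interest over five years; the principal and rate
# are illustrative only.
principal = 1000.0
rate = 0.05  # 5 percent per year
years = 5

simple = principal * (1 + rate * years)     # interest on the principal only
compound = principal * (1 + rate) ** years  # interest earned on interest too

print(f"simple:   {simple:.2f}")    # 1250.00
print(f"compound: {compound:.2f}")  # 1276.28
```

The added benefit of compounding is the difference between the two numbers, and it grows every year the money stays invested.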
Given a chance, banks certainly don’t hesitate to abuse their clients:
witness the Wells Fargo account fraud scandal. The abuse of clients and
financial crises happen all too frequently. What can we do? Punish the
bankers severely if they misbehave? Claw back their bonuses or even put
them in jail? Perhaps even reintroduce the pillory, appropriately on the
square in front of the Bank of England. What better place to punish all
the failed bankers?
We did try stricter punishments in the olden days. In the thirteenth
century one of the major financial centers in Europe was Barcelona, as
told by Meir Kohn in his 1999 article “Early Deposit Banking.” Banks
were tightly regulated. They were forced to have substantial capital and
were required to pay cash within twenty-four hours of demand. If banks
failed, the owners got into serious trouble, both with the Almighty and
the city authorities. Francesch Castello was beheaded in front of his bank
in 1360, yet even the threat of decapitation did not stop his fellow bankers
from misbehaving.
Given how the financial system is set up and what drives its cycles,
banks can and will end up taking too much risk. Individual bankers take
excessive risk again and again, and we will go through repeated boom-to-
bust cycles. It is really hard to effectively prevent the worst outcomes with
regulations, but there is a right way and a wrong way to regulate banks.
Punishing bankers or even executing them as the regulators did to poor
Francesch Castello is unlikely to work. We need something else.
Bank Capital
The primary tool for controlling how banks behave is bank capi-
tal. Bank capital has two purposes. The first is a buffer so that if a bank gets
into difficulty, it has some reserves to draw on before hitting bankruptcy.
The main reason for capital is, however, limits to leverage. The more capi-
tal a bank has to hold, for the same amount of assets, the less levered it is,
and hence safer. The concept of bank capital is quite confusing, especially
for those trained in accounting, economics, or the law. The most com-
mon usage follows Adam Smith, who defines capital in The Wealth of Na-
tions (1776) as “that part of a man’s stock which he expects to afford him
revenue.” To Karl Marx, in Das Kapital, capital is more nefarious: wealth
that is used to create more wealth, something that exists only because of
economic exchange or the circulation of money. Modern usage follows
both Smith and Marx and is often quite contradictory. We have concepts
like capitalization—the market value of a corporation—and economists
talk about capital as one of the two main inputs in production, the other
being labor. One can also find capital to be the net market value of a firm
after all its debts have been subtracted.
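Returning to the leverage point above, the arithmetic is simple; the balance-sheet figures below are invented for illustration:

```python
# Same assets, different capital: more capital means lower leverage,
# and hence a safer bank. All figures are hypothetical.
def leverage(assets: float, capital: float) -> float:
    """Assets held per unit of capital (equity)."""
    return assets / capital

assets = 100.0
print(leverage(assets, 5.0))   # 20.0: thinly capitalized, highly levered
print(leverage(assets, 10.0))  # 10.0: double the capital, half the leverage
```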
A few years ago the university lecturers’ union in the United Kingdom
was protesting in front of the main entrance of the London School of
Economics against the evils of capitalism. As I walked through the door,
crossing the picket line, the protesters asked me if I was a capitalist, as if
that were a horrible thing. I answered, “Yes” and added, “So are you.”
They disliked that and demanded an explanation. I responded by ask-
ing them if they belonged to the university pension fund. They said yes.
Well, the pension fund owns stock in companies, and the definition of
a capitalist, according to Karl Marx, is someone who owns the means of
production.
is right, this is the proper debate to have because it frames the discussion
in terms of what we want from banks, not in a more narrow sense of just
wanting banks to be safe.
Before I leave capital, I want to make two points, one technical and one
philosophical (if you want more details, you are welcome to download
my slides on regulations and capital).2 The technical point is that capital
is made up of multiple parts. At the most basic level, we have common
equity. We then have additional layers called Additional Tier 1 and Tier 2
capital. Add to that a buffer for banks in case they run into difficulty, called
the capital conservation buffer, and another for countries in trouble, the
countercyclical capital buffer. Finally, a special buffer for systemically im-
portant banks. I used all of this buffer complexity to good effect in my
exam in 2020, asking which part of capital requirements would be relaxed
to help banks so they could help with the Covid-19 recovery.
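As a rough sketch of how these layers stack up, here are indicative Basel III-style figures; the buffer sizes are illustrative, as the countercyclical and systemic buffers vary by country and by bank:

```python
# Indicative stacking of CET1 capital requirements, in percent of
# risk-weighted assets. The buffer sizes are illustrative Basel III-style
# figures, not any particular bank's actual requirement.
cet1_minimum = 4.5
capital_conservation_buffer = 2.5
countercyclical_buffer = 1.0      # set by national authorities, typically 0-2.5
systemic_importance_buffer = 1.5  # only for systemically important banks

total = (cet1_minimum + capital_conservation_buffer
         + countercyclical_buffer + systemic_importance_buffer)
print(f"Total CET1 requirement: {total}% of risk-weighted assets")
```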
The philosophical point is that even though it is often said to be a
buffer that protects when things go sour, that is only partially true. The
reason is that banks are required to hold minimum capital, which they are
unable to reduce in bad times. As noted by Charles Goodhart, a capital
buffer that can’t be used isn’t much of a buffer: “The weary traveler . . .
arrives at the railway station late at night, and, to his delight, sees a taxi
there who could take him to his distant destination. He hails the taxi, but
the taxi driver replies that he cannot take him, since local bylaws require
that there must always be one taxi standing ready at the station.”3
Schrödinger’s Bank
Financial regulations used to be something each country set for
itself. There were some agreements between countries but not many in-
ternational standards. This was fine so long as the global financial system
remained fragmented or heavily regulated, as it was under the post–World
War II Bretton Woods system, when banks were by and large confined to
their home countries and not allowed to operate across borders.
This state of affairs changed when the Bretton Woods system collapsed
in 1972, bringing a philosophical change in attitudes toward global fi-
nance: the Washington Consensus. From then on, national frontiers con-
tinued to open up to global finance. Banks could operate across borders,
embarrassed that a bank on their watch would fail. Whatever the reason,
the German financial regulatory system was reformed soon after.
When the German bank Wirecard failed in the summer of 2020, the cir-
cumstances were eerily similar to those of Herstatt half a century earlier.
The German regulators of Wirecard had been aware of some problems
but failed to act. The head German regulator said in testimony to the
German parliament, the Bundestag, that the German regulatory system func-
tioned well in normal circumstances but failed in a crisis.
The Herstatt bankruptcy demonstrated a weakness of the settlement
system, until then not recognized, and served as a wake-up call to the
global financial authorities. They might want to be purely domestic,
managing their banks with minimum international coordination, but the
banks don’t play ball. They are international—and that requires interna-
tional coordination.
It wasn’t really until the case of Banco Ambrosiano eight years later
that the case was forcefully made. Ambrosiano was the largest private
banking group in Italy, with operations in fifteen countries. At the center
of the bank’s failure was its chairman, Roberto Calvi, called God’s banker
by the Italian press due to his close association with the Holy See. Calvi
was determined to transform his bank from a relatively small regional
bank with strong religious overtones into a major international financial
institution. One of his initial steps was to form a holding company in
Luxembourg, which was not subject to Italy’s banking regulations.
Calvi’s problems began in 1978 when the Bank of Italy conducted an
extensive audit of his financial empire, noting unorthodox operations in-
volving $1.2 billion in unsecured borrowings. Ambrosiano was buying up
its stock by using dollars, an illegal operation according to Italian bank-
ing regulations. Ambrosiano collapsed for quite predictable reasons: the
Italian lira fell relative to the dollar, so the lira amount of its liabilities
sharply increased, while the value of its Italian assets stayed constant in lira
terms. Calvi was sentenced to four years in jail but released pending ap-
peal. He fled Italy but was found hanging under the Blackfriars Bridge in
London in 1982. To this day, it's not known whether he committed suicide
or was killed, a topic that continues to arouse controversy. I cross Black-
friars Bridge on my way to work every day and have frequently spared a
thought for poor Calvi. The failure of Banco Ambrosiano left more than
from the Bank of England in 1997, creating the Financial Services Au-
thority (FSA). And since Britain is generally seen as a leader in regulatory
methodology, many countries followed suit.
Was that a good idea? The global crisis in 2008 showed what can happen
if the central banks are not in charge of supervision: they lack oversight,
don’t know what is happening on the ground, don’t have the necessary
expertise, and can refuse responsibility. The British authorities eventually
split the FSA into the independent Financial Conduct Authority (FCA)
and the Prudential Regulation Authority (PRA), which was remerged
with the Bank of England. Other countries are now following suit. I am
curious to see what the British authorities will do after the next crisis.
Will the PRA be remerged with the FCA or will the FCA rejoin the Bank
of England?
The Basel committee is hosted at but is distinct from the Bank for
International Settlements, whose head office is in Basel, Switzerland. It
does not possess any formal powers, acting instead as a vehicle for seeking
agreement on common standards but leaving implementation to member
countries. The Basel committee’s limited membership used to be a source
of frustration for the rest of the world, as many countries have felt they had
no choice but to implement Basel-style regulations without the ability to
influence them. This aggravation has now been remedied, as following the
2008 crisis the committee expanded to include twenty-eight jurisdictions.
The Basel committee reports to the G20 group of countries, as do
other parts of the international financial architecture. The most impor-
tant part of the committee’s work is the Basel Capital Accords, a set of
rules for determining how much capital a bank should have. The first
Basel Accord, now referred to as Basel I, was decided in 1988 and imple-
mented in 1992. The successor, Basel II, was proposed in the late 1990s
and partially implemented from 2008.
Why did the Basel committee decide to harmonize bank capital rules?
It goes back to the early 1980s, when bank capital ratios of major banks in
Europe and the United States were perhaps 8 percent to 10 percent, while
the Japanese banks operated with much lower capital, around 4 percent,
giving them a competitive advantage. The lower the capital ratio, the
lower the cost of lending. Not surprisingly, the Japanese banks became
a significant presence in the European and American corporate lending
markets, elbowing out the local banks. Understandably upset, the Euro-
pean and American banks pushed for their capital ratios to be lowered to
match those of the Japanese. The regulators didn’t like that very much
and instead forced the Japanese to raise their capital standards to the Eu-
ropean and American levels.
When I spent a few months at the Bank of Japan in 2000, some of my
Japanese colleagues called Basel I the anti-Japanese conspiracy, blaming it
for the collapse of the Japanese banking system in the early 1990s and the
subsequent recession. There is a lot of truth in that. Having to double the
capital ratio within a few years proved quite difficult and certainly contrib-
uted to the Japanese banking crisis.
Basel I succeeded in achieving its intended purposes, raising capital
levels when they were low and trending down. Still, the accord wasn’t
perfect, focusing on credit risk with crude risk weights. If a bank lent
$1 billion to an AAA-rated firm, like Apple, it had to hold $80 million in
capital. If it lent the same amount to an OECD government, like Turkey,
Japan, Mexico, or Greece, no capital was needed. It is easy to see why
governments liked this arrangement. It made loans to companies and in-
dividuals more expensive than loans to governments, subsidizing govern-
ment borrowing at the private sector’s expense. Financial repression, the
economists call it.
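The Basel I arithmetic above can be sketched in a few lines; the 8 percent ratio and the two risk weights come from the text, while the function itself is just an illustration:

```python
# Basel I-style capital requirement: 8 percent of risk-weighted assets,
# with a 100 percent risk weight for corporate loans and a 0 percent
# weight for OECD-government debt, as described in the text.
CAPITAL_RATIO = 0.08
RISK_WEIGHTS = {"corporate": 1.0, "oecd_government": 0.0}

def required_capital(exposure: float, category: str) -> float:
    return exposure * RISK_WEIGHTS[category] * CAPITAL_RATIO

print(required_capital(1_000_000_000, "corporate"))        # 80000000.0
print(required_capital(1_000_000_000, "oecd_government"))  # 0.0
```

The zero weight is what made lending to governments free of capital charges, the subsidy the text calls financial repression.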
Long before the 2008 crisis, and anticipating some of the things that
would eventually go wrong, the Basel committee embarked on a revision
of Basel I in the 1990s, what came to be known as Basel II. It did away
with the zero-risk weighting for the OECD government debt, but the
European Union didn’t like this move very much and passed a directive
saying that all member states’ sovereign debt was risk free. While subsidiz-
ing government borrowing in Europe, the directive also helped Greece to
borrow itself into default and was the leading cause of the Cypriot crisis
in 2012 (see chapter 3). While Basel II was announced at the turn of the
century, the extensive lobbying that followed delayed its implementation
until 2008. This meant that Basel II reflected the regulatory concerns and
technological developments of the mid-1990s, so it was already out of
date by the time of implementation, a common problem in international
financial regulations.
I was always of two minds about Basel II. It was a big step toward
making risk management and especially financial regulations scientific.
But what bothers me most is the belief that risk can be measured and
controlled accurately. While that might be a reasonable (or at least accept-
able) assumption in the internal management of individual risks, it is a big
leap to apply it to risk control for an entire financial institution, not to
mention the entire financial system. The Basel II proposals inspired me to
write my favorite paper, eventually published in 2002 under the title “The
Emperor Has No Clothes: Limits to Risk Modeling,” where I argued that
risk was not all that well measured.
When Basel II was announced I got together with several LSE col-
leagues, including Charles Goodhart and Hyun Song Shin, to write an
official comment, titled An Academic Response to Basel II, in response to
a call for comments on the initial Basel II proposals. We certainly were
skeptical: “Heavy reliance on credit rating agencies for the standard ap-
proach to credit risk is misguided as they have been shown to provide
conflicting and inconsistent forecasts of individual clients’ creditworthi-
ness. They are unregulated, and the quality of their risk estimates is largely
unobservable.” One of the leading causes of the 2008 crisis was structured
credit products composed of subprime mortgages that could not have
been created without the credit-rating agencies. As it turned out, the
ratings were abysmal, demonstrating the folly of relying on the rating
agencies for financial regulations: “Statistical models used for forecasting
risk have been proven to give inconsistent and biased forecasts. The Ba-
sel committee has chosen poor quality measures of risk when better risk
measures are available.”
When the Basel II regulations were initially proposed in the mid-1990s
there was little understanding of the underlying risk-measurement tech-
niques’ reliability. By the time the proposals came out, that had changed.
The low-quality methods chosen by the Basel committee significantly
contributed to the regulators’ and the banks’ improper appreciation of
financial risk before 2007: “The proposed regulations fail to consider the
fact that risk is endogenous. Value-at-Risk can destabilize and induce
crashes when they would not otherwise occur. Financial regulation is in-
herently procyclical. Our view is that this set of proposals will, overall,
exacerbate this tendency significantly. Insofar as the purpose of financial
regulation is to reduce the likelihood of systemic crises, these proposals
will actually tend to negate, not promote this useful purpose.”
The statement describes exactly what happened in the years before
2008. As banks were implementing Basel II in the early 2000s, the risk
management methodologies they were required to implement told them
that risk was low and therefore it was perfectly acceptable, even expected,
to take more risk. By not recognizing the consequent risk, the Basel II
regulations amplified the financial cycle, helping to create the right condi-
tions for the 2008 crisis.
it is the most interesting from the perspective of risk. Every tall build-
ing has to contend with the elements—in Taipei 101 earthquakes and
typhoons—so risk management is critical to prevent a catastrophe. The
devices used to manage the risk are usually hidden, but not in Taipei 101.
It uses a 728-ton gold-painted orb that is open to the public as a counter-
weight that swings like a pendulum at the top of the building. A risk man-
agement system that has become a tourist attraction in its own right. The
engineers even hired the Sanrio Company, the creators of Hello Kitty, to
create the “Damper Babies,” cute figurines representing the risk manage-
ment system.
Taipei 101 was tested in August 2015 when it was hit by Typhoon
Soudelor, with winds of 145 mph. The golden orb swung more than a
meter from its regular position, but nothing happened to the building,
a monument to the engineers’ calculations. If structural engineers can
create risk management systems that protect tall buildings like Taipei 101,
why can’t the financial engineers working in the regulatory agencies pro-
tect us from financial crises? Because of a crucial difference between the
risks in these two disciplines. The reason the job of the civil engineers is
relatively straightforward is that they can ignore the human element. If
they calculate that a wall one meter thick is expected to collapse every five
hundred years, nature couldn’t care less. Risk is exogenous.
In finance, nature is not neutral, it is malevolent, and risk is endog-
enous. The reason is that all rules and regulations change behavior and
outcomes. Human beings, being human, don’t just naively comply—they
change their behavior in response. Immediately after a regulator comes
up with rules, perhaps for determining bank capital, the bankers look for
a way around the rules. They will try their best to make the capital appear
to be very high to the outside world while making it as low as possible in
practice. The technical name for this is capital structure arbitrage. Many
of the banks that failed in 2008 had some of the highest levels of capital
going, but that capital turned out to be illusory. There is always a
cat-and-mouse game going on between the authorities and the banks.
While the bankers may pore over stacks of regulations to comply, they are
much more enthusiastic about finding loopholes. Regulations are inher-
ently backward looking and change at a glacial pace, giving fast-moving
and forward-looking bankers ample room to look for places to take risk
where the authorities are not looking. Because the financial system is al-
most infinitely complex, it is technically impossible to regulate more than
a tiny part of it, leaving plenty of room for misbehavior.
Regulations may end up causing profitable activities to move to the
shadow, or parallel, banking system or abroad. A good example is Regulation
Q in the United States from the 1960s and 1970s, which limited the inter-
est banks could pay on deposits, the idea being that high interest rates
were inflationary (this erroneous notion has recently resurfaced in Tur-
key). Regulation Q simply caused deposits to move to the parallel bank-
ing system—the money market funds—where market interest rates could
be paid. Regulation Q was abolished a long time ago, but the money
market mutual funds continue to flourish and have become a significant
contributor to systemic risk in the United States. Financial activities can
also shift abroad, as when the Eurodollar market first came into being in
the 1950s, when the Soviet Union’s oil revenue—all in US dollars—was
deposited outside the United States for fear of being frozen by US regula-
tors. This resulted in a vast offshore pool of dollars outside the control of
US authorities, primarily in Europe, hence the term “Eurodollar,” help-
ing London become a world-leading financial center again.
When we regulate the financial system we often drive risk-taking be-
havior away from the spotlight and deep into the shadows, where it is
much harder to detect. The tequila crisis in Mexico in 1994 happened
because the Mexican banks were borrowing US dollars in New York to
lend to Mexican borrowers in pesos. Because the Mexican banks took
the currency risk, the Mexican authorities were justifiably concerned and
forbade this. The Mexican banks found it easy to get around the rules by
creating derivative transactions with the New York banks—Tesobonos.
Because the Mexican central bank, Banco de México, did not see these
transactions, it did not realize what was happening and could not step in
to prevent the crisis until it was too late. It is much better if risk-taking is
visible rather than hidden.
Finally, and even more insidiously, precisely because the financial au-
thority is trying to reduce financial risk, banks have an extra incentive
to take on more risk—an example of the Minsky effect. That perverse
outcome happens because of the way we measure risk. If all looks nice
and stable, the road is smooth—the great moderation before the 2008
started in 2007, the conduits could not roll over their short-term loans
and hence called on the sponsor, IKB, for help. Unfortunately, IKB did
not have the money needed. Facing bankruptcy, it was bailed out at the
cost of €9 billion to the German taxpayers.
IKB made a name for itself the following year by transferring money
to Lehmans a few hours after the latter defaulted. Apparently, IKB had
automatic rules in place for transferring money to Lehmans on Monday
mornings, and even if the entire world knew by Friday that Lehmans
was about to fail, and it had actually failed right after midnight Monday
morning, IKB didn’t think to cancel its transfer.
In the Middle Ages maps sometimes came with the warning hic
sunt dracones, here be dragons (Figure 11). Where the mapmakers had
little information, they warned travelers of unexpected dangers, maybe
dragons. The modern riskometer should come with the same warning.
There are many ways to measure financial risk. The most accurate
is to study the deep structure of the financial system, identify all the
interlinkages and the hidden corners where risk is taken. That is difficult
and costly, and it is much more common to use a purely statistical approach, what I called a riskometer in a blog piece I wrote for VoxEU.org titled “The Myth of the Riskometer.” It is a useful little thing, the
riskometer. I can plunge it deep into the bowels of Wall Street and it pops
out an accurate measurement of risk.
Riskometers are used all over, from someone investing their own money to risk managers controlling proprietary traders to a bank determining the amount of capital it holds. Financial regulators concerned
with the stability of the entire financial system use one too. The riskome-
ters promise to distill the risk of entire financial institutions into one
number. It is really useful to have a single, unambiguous measurement
of risk. A number with all sorts of caveats is not nearly as helpful. The
decision makers, the people who run banks, and the regulatory agencies
are just like President Harry S. Truman, who is widely credited as having
demanded, “Give me a one-handed economist. All my economists say ‘on
the one hand . . . ,’ then ‘but on the other.’”
The prominence of riskometers is increasing by the day, and for a good
reason. They are cheap, quick, and objective—scientific, really. The alter-
natives are subjective, slow, and expensive. In the scientific world of risk,
where we have almost limitless data, sophisticated statistical methods,
and all the processing power one could want, how can the riskometer
not be the best way to measure risk? The problem is that riskometers can
capture only a caricatured view of risk. Any particular implementation
will focus on and often exaggerate some aspects of risk and ignore others.
That means riskometers are not nearly as accurate as most of us think,
most notably the senior decision makers. Those who are actually on the
ground, designing the riskometers and reporting risk to their superiors,
know better.
There is nothing new about riskometers, and we have been searching
for devices for measuring risk since the beginning of time. The earliest ex-
ample I know of is the riskometer the Chinese astronomer Zhang Heng
invented in AD 132 during the Han Dynasty. It is a metal urn containing a bronze ball and eight dragon heads, one for each direction of the compass.
When an earthquake hit, the ball would roll onto the dragon head cor-
responding to the earthquake’s direction. The government was advised
The latter two make up the concept of a riskometer, the choice of which
intimately depends on what the riskometer is needed for, the first layer.
To start with, what do I want? Suppose Paul, Ann, and Mary each in-
vest in Google stock. Their reasons for investing are different. Paul trades
on his own account, his reasons are speculative, and he aims to get out
in one week with a hefty profit. Ann is a fund manager, investing on be-
half of her bank, and her primary concern is beating her benchmark and
avoiding significant losses relative to her benchmark that would get her
fired. Mary is investing for the long term and worries about getting her
pension seventy years into the future when she is ninety-five years old and
relying on the financial industry for her material well-being.
Even though each of the three made precisely the same investment and
had access to the very same risk measurement technologies, their views on
risk are very different. Paul cares about day-to-day fluctuations over the
next week, Ann is worried about a substantial one-day loss sometime in
the next six months, and Mary needs the Google stock price to continue
to grow over the next seventy years and does not care about what hap-
pens to the stock in the meantime (and certainly not over the next week).
Their investment horizons are different, their objectives are different, and
therefore what risk means to them is different. Each needs a different
riskometer. Unlike temperature, where Celsius is the appropriate unit of
measurement regardless of what it is used for, for risk we need different
concepts depending on the end use.
So the next step for Paul, Ann, and Mary is to pick a riskometer. The
riskometer is a combination of two things: some concept of what risk is
and a statistical apparatus to produce a risk measurement. Start with the
concept. With temperature, we have three units of measure: Celsius, Fahrenheit, and Kelvin. However, they all measure the same thing, temperature, and we know exactly how to go from one to the other: 100° Celsius is 212° Fahrenheit and 373.15 Kelvin. It is not the same with risk, where
we have multiple concepts.
When it comes to an individual stock’s risk, I may be interested in
volatility, Value-at-Risk, or Expected Shortfall, just to mention the three
most popular. These are not just three measurements of the same thing
like Celsius, Fahrenheit, or Kelvin. It is like having three different opin-
ions of temperature with no apparent way to compare numbers produced
under one standard with another. The user has to pick the concept of risk
most appropriate to her, and if she uses a generic one—a one-size-fits-all
riskometer—the result will not be as good as if she picked what is the best
riskometer for her purpose.
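The difference between these concepts is easy to see in a few lines of code. The sketch below computes all three on simulated fat-tailed returns (illustrative data, not any real asset), showing that they produce genuinely different numbers with no fixed conversion between them:

```python
import numpy as np

# Simulated fat-tailed daily returns, for illustration only.
rng = np.random.default_rng(42)
returns = rng.standard_t(df=3, size=100_000) * 0.01

volatility = returns.std()                   # standard deviation of returns
var_99 = -np.quantile(returns, 0.01)         # 99% Value-at-Risk
es_99 = -returns[returns <= -var_99].mean()  # 99% Expected Shortfall

# Unlike Celsius to Fahrenheit, there is no fixed formula mapping one
# of these numbers to the others; the relationship depends on the tail.
print(f"volatility={volatility:.4f}  VaR99={var_99:.4f}  ES99={es_99:.4f}")
```

On fat-tailed data the Expected Shortfall sits well above the Value-at-Risk, which in turn exceeds the volatility; how far apart they sit depends entirely on the distribution assumed.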
When it comes to risk, the objectives, the concept of risk, and the sta-
tistical methodology should be considered simultaneously. This means
that different end users, all with the same investment and technical skills,
ought to measure risk differently. In the example above, what is risk to
Paul is irrelevant to Mary and vice versa. There are a lot of riskometers out
there. Because it is not very hard for someone who knows programming
and statistics to create yet another one, it is not surprising that academia
and consultancies are full of people churning out riskometers, all produc-
ing different measurements of risk for the same assets. It can be an easy
way of getting a PhD in statistics, physics, computer science, or econom-
ics. The same PhDs tend to get jobs in the financial industry, producing
riskometers for government agencies and banks.
There are few better places for seeing riskometers in action than
the European Central Bank’s risk dashboard. The dashboard is full of
interesting numbers (Figure 12). On the day after the Brexit vote in June
2016, it told us that systemic risk was 0.321853, on a scale of 0 to 1. That
was indeed much higher than the 0.185922 the week before, not to men-
tion the 0.058941 at the start of 2016, when we seemed to be especially
safe. The historically safest date was 27 September 2013, at 0.02106. For-
tunately, the Brexit systemic risk is not as bad as it was in December 2008,
when it hit 0.838846, worse than the number after Lehman failed in Sep-
tember 2008, when it was “only” 0.554620. These figures look exact. The
six significant digits tell us so: 0.321853 is much more accurate than a mere
0.3 and suggests the European Central Bank has pretty precise measure-
ment technology. But is that precision realistic? As Warren Buffett once
said, “We don’t like things you have to carry out to three decimal places.
If someone weighed somewhere between 300–350 pounds, I wouldn’t
need precision—I would know they were fat.”
Figure 13. Fat tails and large losses. Credit: Lukas Bischoff/IllustrationX.
I have daily observations on the S&P 500 index dating back to 1928.
What is the most obvious way to translate these daily observations into
some measurement of risk? If you ask someone who has just taken a uni-
versity course in statistics, they might say “easy.” Calculate the standard
deviation, assume the data follows the bell-shaped normal distribution,
and voilà, we can calculate the likelihood of any possible outcome.
The standard deviation of stock market returns has another name: vola-
tility. Using my statistical package of choice, R, I find that the daily vola-
tility of the S&P 500 is 1.1 percent. Knowing the volatility is really cool.
I can calculate once-in-five-hundred-year losses, just like the meteorolo-
gists did for Houston, Texas. Suppose I invest $1,000 in the S&P 500.
Based on this analysis, I would expect the typical worst day every year to
give me losses of $26, while the worst day in my eighty-eight-year sample should
be a loss of $39, and over a millennium $45. Not so shabby! Except, of
course, it is not like that. If I take my eighty-eight-year history of daily re-
turns on the S&P 500, I find that the average annual worst loss is $47, not
$26. These differences become bigger with time. The worst day over the
past eighty-eight years, 19 October 1987, would give me losses of $229,
not the $39 from the normal. Using volatility and the normal to measure
risk would have lulled me into complacency—the dragons would have
eaten me. This is summarized in Figure 14, which shows both the type of
losses I would expect if I used the normal distribution versus what actu-
ally happened.
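The gap between the normal world and the fat-tailed world can be sketched with simulated data (not the actual S&P 500 series): draw roughly eighty-eight years of daily returns from a normal distribution and from a fat-tailed Student-t distribution, both scaled to the same 1.1 percent daily volatility, and compare the worst day on a $1,000 position.

```python
import numpy as np

# Illustrative simulation, not the actual S&P 500 history.
rng = np.random.default_rng(0)
sigma = 0.011                                  # 1.1% daily volatility
days = 250 * 88                                # roughly 88 years of trading days

normal_world = rng.normal(0.0, sigma, days)
# Student-t(3) has variance 3, so divide by sqrt(3) to match sigma.
fat_world = rng.standard_t(df=3, size=days) * sigma / np.sqrt(3.0)

worst_normal = -1000 * normal_world.min()      # worst day on a $1,000 position
worst_fat = -1000 * fat_world.min()
print(f"worst day, normal world: ${worst_normal:.0f}")
print(f"worst day, fat-tailed world: ${worst_fat:.0f}")
```

The normal world never produces anything like the 1987-sized loss; the fat-tailed world does, which is the whole point of the figure.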
are the world leaders in predicting an especially deadly type of risk: flood
risk. For good reason: 26 percent of the land area of the Netherlands and 21 percent of its population are below sea level, and every once in a while
the sea rises so high that it floods all the areas below the sea level. The last
time this happened was in 1953, when 1,863 people drowned. As a conse-
quence, the Dutch government decided to build a tall seawall called the
Delta Works. I went to see the Delta Works in the summer of 2021. It is
an impressive structure, and if you find yourself in the Netherlands, I rec-
ommend paying it and the memorial museum to the 1953 disaster a visit.
The problem was that while it is possible to build a sufficiently high wall
to protect against all flooding, it is really expensive. Therefore, the gov-
ernment of the time decided that it would be acceptable for the Nether-
lands to flood once every ten thousand years. The Dutch statisticians and
engineers were tasked with determining the Delta Works’ height to meet
that once-every-ten-thousand-year requirement. No surprise the Dutch
have become world leaders in tail risk. The mathematical technique they
came up with is formally known as extreme value theory (EVT), also
called the power law after the underlying mathematical equation, or even
Pareto analysis after the Italian economist Vilfredo Pareto, who formu-
lated it in the nineteenth century.
Over the years I have spent a lot of time at Erasmus University in the
Netherlands working with my good friend Casper de Vries, the first per-
son to apply EVT to finance.5 Suppose I use the methods we developed
to estimate the losses on the S&P 500 index. As the S&P 500 risk figure
above shows, we get much closer to what has been observed in history,
and the once-in-a-millennium prediction is undoubtedly much more re-
alistic than the normal distribution predicts.
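The workhorse of this kind of tail estimation is the Hill estimator, which measures how fat a power-law tail is. The sketch below is a minimal illustration on simulated data, not the method of the paper cited above:

```python
import numpy as np

# Minimal Hill estimator sketch on simulated fat-tailed data.
rng = np.random.default_rng(1)
losses = rng.standard_t(df=3, size=50_000)   # Student-t(3): true tail index is 3
losses = losses[losses > 0]                  # study the loss tail only

k = 500                                      # number of extreme observations used
srt = np.sort(losses)
threshold = srt[-(k + 1)]                    # the (k+1)-th largest loss
hill = 1.0 / np.mean(np.log(srt[-k:] / threshold))
print(f"estimated tail index: {hill:.2f}")   # the smaller, the fatter the tail
```

The estimate depends on the choice of k, the number of tail observations used, which is exactly the kind of judgment call that makes even EVT an imperfect riskometer.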
Even with all that EVT expertise, all the Dutch experts can capture is
an imperfect measure of risk. When they decided on the Delta Works’
height, they used historical observations on sea height, and, being Dutch,
they had records stretching back almost a millennium. Unfortunately,
the world had changed since then—even the 1953 numbers are no longer
accurate, given climate change. It is the same problem that faced the me-
teorologist predicting Houston’s rainfall. The statistical analysis can be
correct only if the world does not change very much.
of the title are abbreviated ARCH. The ARCH model was a revolution-
ary step in the estimation of risk, and in 2003 Engle got a well-deserved
Nobel Prize for developing it. His key insight was to assign weights to
historical observations. Suppose we have three days of data and give today
a 50 percent weight, yesterday 30 percent, and the day before that 20 per-
cent in the volatility calculation. By doing so, we see that the ARCH
model kills two birds with one stone. It both captures the volatility clus-
ters, and its mathematical formulation allows for returns to be fat-tailed.
At least that is the promise.
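The weighting idea can be sketched in a few lines. The code below uses a simple exponentially weighted scheme in the spirit of ARCH, not Engle's exact model, to show how recent observations dominating the estimate lets the riskometer react to a volatility cluster:

```python
import numpy as np

# Exponentially weighted volatility: recent squared returns count more.
# A sketch in the spirit of ARCH, not Engle's exact specification.
def weighted_volatility(returns, decay=0.94):
    weights = decay ** np.arange(len(returns))[::-1]  # newest gets most weight
    weights /= weights.sum()
    return np.sqrt(np.sum(weights * returns ** 2))

rng = np.random.default_rng(7)
calm = rng.normal(0, 0.005, 200)             # long calm stretch
cluster = rng.normal(0, 0.030, 20)           # recent volatility cluster
returns = np.concatenate([calm, cluster])

print(f"equal-weight volatility:     {returns.std():.4f}")
print(f"recency-weighted volatility: {weighted_volatility(returns):.4f}")
```

The equal-weight estimate averages away the recent cluster; the weighted estimate picks it up, which is what makes ARCH-type models good at forecasting tomorrow's volatility.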
How well does the ARCH family of models work? Reasonably well.
It certainly fits historical data better than if one simply uses the standard
deviation of returns as volatility. For many applications that is all that is
needed, a reasonably good idea of what type of volatility to expect to-
morrow. For fat tails or systemic risk, we have to look elsewhere, because ARCH and its ilk are fair-weather riskometers.
Misusing History
A riskometer should not aim to explain the past well. Instead, its purpose is
to look for currently unknown future mistakes.
how it expects news to evolve. They are a good snapshot of what is hap-
pening at any given time. What such data misses are deep vulnerabilities.
Take the European Central Bank’s systemic risk dashboard yet again. It
told us that systemic risk was, on average, 0.06677640 in 2004, the saf-
est year in its history. What is missed is revealing. We now know, with
the benefit of hindsight, that a crisis was around the corner. The critical
hidden vulnerabilities were all the structured credit products composed
of subprime mortgages and other high-risk assets, all the excessive risk
nobody knew about, the hidden liquidity risk that bites only in times of
stress. The relevant information was out there somewhere, but informa-
tion is not the same as knowledge. If nobody connects the dots, informa-
tion is irrelevant. Every financial institution saw only its own exposures,
not everyone else’s, and nobody thought to add up the numbers. The
information was out there somewhere, but trying to make sense of all the
data coming out of the financial system in real time is like drinking from a
fire hose. We have to pick and choose. And in 2007 almost nobody picked
subprime and structured credit. It is easy, with the benefit of hindsight,
to ask, “How could you miss that? It was all in front of you.” But in
real time, not all that easy. Ultimately, most riskometers use just trading
prices and volumes, disregarding all the other relevant bits of informa-
tion. It is not because someone is deliberately trying to obfuscate. It is
just all we can do technically.
The primary challenge in creating riskometers is that risk comes from
the future, but we only know the past. That creates particular problems,
not least what is known in statistics as data snooping. The problem arises
because before using a riskometer to make important decisions, we would
like to know how good it is. We would ideally put it in practice, wait a few
years, and then pass judgment. We are too impatient for that, and what is
usually done instead is to figure out how well a riskometer measures his-
tory using a procedure called backtesting.
Suppose I take my long sample of S&P 500 prices and use all the obser-
vations from the beginning of the sample until the end of the year 1999
to forecast risk with a riskometer on the first day of the new millennium.
That gives me one forecast observation, which I can compare to the ac-
tual market outcome that day. I then move up one day and use all the
information up to and including the first day of the year 2000 to forecast
risk on the second day. By repeating this every day until today I will get a
few thousand observations and then use that information to evaluate the
quality of my riskometer. That is backtesting, something routinely done
in practice.
However, while it sounds like a fantastic way to evaluate the quality of
a riskometer, there is a problem. I know what happened between the year
2000 and now—risk was very low in the mid-2000s, we had a big crisis
in 2008, and all the other things that happened. The temptation is then
to try to explain as much as possible of what happened in the past—espe-
cially finding a way to predict the crisis of 2008.
When I was in graduate school, I took a course in Soviet economics,
and the professor told us a Russian joke: the best Soviet historians are
experts in forecasting history because government policies were justified
with reference to history. Thus history had to be “correct” if the histo-
rians were to survive. If I try enough riskometers, I will explain history
well, and I will predict the 2008 crisis years in advance. A lot of re-
search has done precisely that. All of that work is useless because of what
statisticians call spurious correlation, in which variables seem related to
each other even though in reality they have no such causal relationship.
There are many examples of spurious correlations; the correlation be-
tween divorce rates in the American state of Maine and the consumption
of margarine is 99.3 percent.6 When I told this to someone, he responded
by saying, “It is a good thing I stopped eating margarine,” thereby fall-
ing for two fallacies: spurious correlations and assuming that this general
observation applies to the individual.
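A quick simulation shows how easily such correlations arise by chance. Generate a few hundred completely unrelated random series and look for the most correlated pair (illustrative random data, not the actual Maine or margarine figures):

```python
import numpy as np

# Many unrelated random series: the best-correlated pair will look
# impressive purely by chance.
rng = np.random.default_rng(5)
n_series, length = 200, 20          # e.g. 200 series of 20 annual observations
data = rng.normal(size=(n_series, length))

corr = np.corrcoef(data)            # all pairwise correlations
np.fill_diagonal(corr, 0.0)         # drop each series' correlation with itself
best = np.abs(corr).max()
print(f"highest correlation among unrelated series: {best:.2f}")
```

With two hundred series there are nearly twenty thousand pairs to search over, so a correlation that looks compelling is practically guaranteed even though every series is pure noise.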
What explains the 99.3 percent correlation between divorce rates in
Maine and margarine consumption is that if I calculate the correlation be-
tween every possible economic time series available—and there are hun-
dreds of thousands of them—I will find some that are highly correlated
purely by random chance. Some of these might even appear to be more
sensible than the spurious margarine and divorce correlation, leading the
unwary, or unethical, analyst to the wrong conclusion. As H. L. Mencken
put it: “For every complex problem there is an answer that is clear, simple,
and wrong.” When it comes to riskometers, if I try one out and one only
I will get the correct confidence intervals for my risk measurements. If,
however, I arrive at the same riskometer as a result of trying out a large
A few years ago, when I was reading the Basel III proposals for
the global postcrisis financial regulations, I asked myself: How well does
what is being proposed work? How accurate are the riskometers that im-
portant regulations are founded on? To my surprise, I found almost no
public research on this, and as far as I can tell, nowhere in the thousands
of pages of Basel III is the accuracy of the mandated risk measurements
discussed. There is plenty of work on the various technical aspects of risk-
ometers, along the lines of “Does Method A work better than Method B
in some particular situation?” Indeed useful, but it does not answer the
more fundamental question of how accurate the riskometers we depend
on to keep us safe from finance are.
First, I did a blog piece on this question and then joined forces with
Chen Zhou, a coauthor of mine now working in the Netherlands.7 Our
paper is called “Why Risk Is So Hard to Measure,” and what it does is
to investigate how the most common riskometers work in practice. We
started by taking daily prices of all publicly traded stocks in the United
States from the early 1960s, measuring both the risk and the accuracy
of those risk estimates by the 99 percent confidence bound, that is, the
range of values where the risk measures will lie 99 percent of the time. We
then did a Monte Carlo experiment, simulating random outcomes from
our computer world based on our assumptions of how the world works,
measuring the risk in those outcomes. The benefit of such an approach is
that because we created the world, we knew exactly what the risk was and
therefore how accurate the riskometer’s estimations were.
Stability is destabilizing.
—Hyman Minsky
how we think the world is or ought to be, not how it is. Uncertainty, not
risk, drives economic activity. Investments and profits and losses happen
because we do not anticipate perfectly how events occur. Since every per-
son has different expectations about the future, their expectations drive
the uncertainty that drives the economy.
The same year Knight published his work on risk and uncertainty, John
Maynard Keynes took a more nuanced view, which he then further refined in his General Theory of Employment, Interest and Money (1936): “By
‘uncertain’ knowledge . . . I do not mean merely to distinguish what is
known for certain from what is only probable. The game of roulette is
not subject, in this sense, to uncertainty. . . . The sense in which I am us-
ing the term is that in which the prospect of a European war is uncertain,
or the price of copper and the rate of interest twenty years hence, or the
obsolescence of a new invention. . . . About these matters there is no
scientific basis on which to form any calculable probability whatever. We
simply do not know!”1
His starting point was that humans are not naturally rational beings able
to anticipate and forecast everything perfectly, as the nineteenth-century
classical economists would have it. Instead, people routinely follow animal
spirits, instincts that guide human behavior. Keynes came to be rather
dismissive of statistical analysis, rejecting the notion that decisions can be
made on the basis of the frequency of past events—ergodicity. Instead,
he focused on the “degrees of beliefs” that humans can have, given their
knowledge at a given time, about the occurrence of future events.
My favorite expression of that sentiment comes from an economist
customarily thought of as being very far from Keynes: Ludwig von Mises,
one of the leaders of the Austrian school. Mises criticized economet-
rics—the statistical analysis of the economy—in 1962: “As a method of
economic analysis econometrics is a childish play with figures that do not
contribute anything to the elucidation of the problems of economic real-
ity.”2 It is delightfully ironic that many of Keynes’s followers went on to
do exactly what Mises and Keynes argued against, creating statistical and
mathematical models of the economy, treating uncertainty as risk in the
same way the nineteenth-century classical economists did.
The Nobel Prize winner George A. Akerlof, in a paper called “What
They Were Thinking Then: The Consequences for Macroeconomics dur-
ing the Past 60 Years,” makes the point that Keynes’s disciples deliberately
ignored his warnings about risk and financial crises so they could derive
simple mathematical models of how the macroeconomy works—models
which soon were proven to be not only wrong but also dangerous.
In 1945 Friedrich von Hayek, while a professor at the London School of
Economics, published an article titled “The Use of Knowledge in Society” in
which he expressed views similar to those of Akerlof, agreeing with both
Knight and Keynes that quantifying risk was impossible. Hayek, however,
came to the point from a different direction. Instead of people being
motivated by Keynes’s animal spirits, he argued that it was technically
impossible to describe the world with precise mathematical statements:
“If we possess all the relevant information, if we can start out from a given
system of preferences, and if we command complete knowledge of avail-
able means, the problem which remains is purely one of logic. . . . This,
however, is emphatically not the economic problem which society faces.”3
Hayek was writing during World War II, when the economic policy de-
bate was between believers in central planning and Soviet-model scientific
socialism and those preferring a market-based economy. Most thinkers of
that era advocated central planning, seeing the capitalist system as crisis
prone and inefficient, where the Great Depression was caused by the fail-
ure of capitalism, while the Soviet Union escaped the Depression because
it was centrally planned. Almost every country in the world at the time
was actively considering central planning.
Hayek disagreed. To him, knowledge is dispersed among the various
individuals who make up society, information that is impossible to ag-
gregate into perfect knowledge. The farmer knows much more about
his fields than anyone else, he knows how best to plant his crops, and he
has a direct economic incentive to be as knowledgeable about his land
as possible. No central authority can acquire such information; all they
can hope for is some high-level summary information. This is why col-
lective farming does not work. The farmer makes better decisions than
the ministry of agriculture, and central planning of the economy cannot
be successful. In Hayek’s view, as long as markets are free of government
interference, market prices solve the problem of uncertainty, distilling es-
sential information into one number: the price.
Even though Keynes and Hayek are often seen as having very differ-
ent economic philosophies—a distinction much amplified by their disci-
ples—their views on uncertainty are quite similar. Keynes focused on how
should not become Hayekians, as the Keynesians were much worse than
Keynes and the Marxists much worse than Marx.”
The views of Knight, Keynes, and Hayek on risk and uncertainty were
mostly ignored after World War II. The disciples of Keynes, the most
influential of the three, disregarded this aspect of his work, preferring
instead to draw on the classical nineteenth-century views on risk in build-
ing the Keynesian models of the era. When Keynes himself stressed the
importance of uncertainty, the disciples paid little attention. Of course,
they did not see it that way—most economists of the era saw themselves
as the intellectual heirs to Keynes.
There are many reasons for this rejection of uncertainty in favor of
risk. Much better data collection after the war, new statistical techniques,
and the availability of computers to do calculations all led to a sense of
can-do. It was inevitable that we neglected uncertainty in our constant
attempts at controlling our environment. Risk and mathematical descrip-
tions of the world are essential for that, and uncertainty just gets in the
way. It tells people what they do not want to hear: that the world is not as easy to measure and control as we would like. Invariably, the risk view wins out.
The person who formalized that best was Wassily Leontief, an econo-
mist educated at the University of Leningrad who became a Harvard pro-
fessor and eventually a recipient of the Nobel Prize. He saw the objective
of economics as the collection of facts and figures followed by mathemat-
ical models describing the relationships. This culminated in an approach
he called the input–output model, which reduces the entire economy
into a set of equations. The output of one sector becomes either an input
into other sectors or final consumption. Leontief’s model became very
influential in the mid-twentieth century and was one of the foundations
of central planning as practiced by the Soviet Union and other commu-
nist countries to this day.
The input–output model had a significant impact on the development
of a method called linear programming at the statistical control group
of the US Army in World War II. It was successfully put into practice in
organizing the Berlin Airlift. The army flew the maximum amount of
cargo into Berlin, given constraints like the number of available runways
in Berlin and the prevailing weather. A small set of equations can describe
the problem, and what matters is risk, not uncertainty.
One member of the army’s linear programming team was Robert Mc-
Namara, who became the secretary of defense in the 1960s and was famous
for using Leontief’s philosophy in conducting the Vietnam War.4 His
management philosophy was based on measuring everything that could
be measured, most famously body bags, and then using those measure-
ments to control outcomes. What could not be measured did not matter.
The problem with this approach was ably demonstrated by the sociologist
Daniel Yankelovich, who called it the McNamara fallacy: “The first step
is to measure whatever can be easily measured. This is OK as far as it goes.
The second step is to disregard that which cannot be easily measured or
give it an arbitrary quantitative value. This is artificial and misleading.
The third step is to presume that what cannot be measured easily really
is not important. This is blindness. The fourth step is to say that what
cannot be easily measured does not exist. This is suicide.”5 In all fairness,
McNamara was trying to put structure on a very complex problem, using
quantitative methods as tools in all the Johnson White House’s political
battles. He was not the first to use statistics that way and not the last. It
has been quite common in the Covid-19 crisis.
One of McNamara’s successors, Donald Rumsfeld, expressed a much
better understanding of risk and uncertainty in 2002. While widely ridi-
culed for it at the time, it has come to be seen as a brilliant statement of
decision-making in a period of uncertainty: “Reports that say that some-
thing has not happened are always interesting to me because as we know,
there are known knowns; there are things we know we know. We also
know there are known unknowns; that is to say, we know some things we
do not know. But there are also unknown unknowns—the ones we do
not know we do not know. And if one looks throughout the history of
our country and other free countries, it is the latter category that tends
to be the difficult ones.”6
The problem with Leontief’s input–output model is the same as that
of nineteenth-century classical economics, McNamara’s warfighting phi-
losophy, modern financial regulations, and risk measurements. They all
create a simple, caricature view of a world that is infinitely complex. The
models assume all relevant factors can be summed up into simple math-
ematical equations and do not leave any room for uncertainty or com-
plexity or technological progress. Perhaps most important, they depend
on accurate measurements.
Subjective Probabilities
One person who continued the work of Keynes and Hayek on risk
and uncertainty was George Lennox Sharman Shackle. Shackle started
writing his PhD at the London School of Economics in the 1930s, with
Hayek as his advisor. However, after Keynes published his General Theory,
Shackle dropped Hayek and did his thesis on Keynes’s ideas. Hayek did
not hold it against him, and they remained friends. Shackle is one of
the great underrated economists. I don’t recall seeing him mentioned
anywhere when I was in graduate school, and it was not until my LSE
colleague Charles Goodhart brought him to my attention that I started
reading his work. Shackle argued that it was not possible to calculate the
probability distribution of all outcomes and thereby make rational eco-
nomic decisions. Economic data is not ergodic.8
Unlike Knight and Keynes, however, and their sharp distinction be-
tween risk and uncertainty, Shackle argued that something in between
Stability Is Destabilizing
An important element is missing in the work of the four think-
ers discussed above, and that is how uncertainty arises and affects the
likelihood of outcomes. Hyman Minsky provided that link. Focusing on
financial crises, he argued that perceptions of risk affect peoples’ risk-
taking behavior and hence the likelihood of financial crises far into the fu-
ture. Minsky developed his theories in the context of the mid-twentieth-
century economic theories of his time, such as Leontief’s input–output
models. Minsky rejected these as being too simplistic because they ig-
nored how investment decisions are made and how firms are financed.
His theory of investment and crises was based on distinguishing between
three types of financing. The first, which he called hedge financing, is
the safest. Firms borrow little, and when they do they repay the loans
directly out of cash flow. The second type is speculative financing, riskier
since firms rely on cash flow to repay interest but roll the principal over.
The most dangerous, Ponzi financing, is where the cash flow is not suf-
ficient to repay either principal or interest, so firms are betting that the
underlying asset will appreciate enough to repay their liabilities. The cur-
rent trend in Britain for buy-to-let real estate is a prime example of Ponzi
financing: people buy properties to rent them out, financing mortgage
payments out of rent and increases in the price of the property.
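Minsky's three regimes can be written down directly. The following is a minimal sketch, on the simplifying assumption that a firm's position is summarized by its periodic cash flow and the interest and principal falling due in that period:

```python
# Toy classifier for Minsky's three financing regimes. A hypothetical
# illustration: a firm is reduced to its periodic cash flow and the
# interest and principal payments due in that period.
def minsky_regime(cash_flow, interest_due, principal_due):
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # cash flow covers interest and principal
    if cash_flow >= interest_due:
        return "speculative"  # covers interest; principal is rolled over
    return "Ponzi"            # covers neither; relies on asset appreciation

print(minsky_regime(120, 10, 100))  # hedge
print(minsky_regime(50, 10, 100))   # speculative
print(minsky_regime(5, 10, 100))    # Ponzi
```

The buy-to-let landlord whose rent covers the mortgage interest but not the principal sits in the speculative regime; once the rent covers neither, only rising house prices keep the position afloat.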
Hedge financing is the most stable, but the other two are much more
tempting to investors. Moreover, when the economy is growing there
seems to be little reason to give up profits and play it safe. Indeed, it is dif-
ficult to do so. Suppose an economy starts safe, using only hedge financ-
ing. This instills confidence and motivates people to take more risk and
use speculative financing. In the beginning, it all looks good; economic
growth increases, making Ponzi financing increasingly attractive. But over
time we run out of feasible investments, and ultimately it all ends abruptly
and a crisis ensues. Financial crises happen because we think the good
times will last forever, and there is no reason not to make use of Ponzi
financing. Investors want to take on more risk, often helped by a lax reg-
ulatory environment and government encouragement. Ultimately, this
culminates in an unsustainable speculative bubble and a crisis.10
The conditions for a crisis are, therefore, ripest when we think risk is
low. This means that one of the best predictors of a financial crisis being
around the corner is when the pundits start talking about the current pros-
perity as lasting forever, such as the permanent era of stability in the 1920s
and the great moderation in the two decades before 2007. Such claims are
often justified by arguing that the laws of economics no longer apply
because "our country has just such fantastic economic policy" or because
"we have become so clever that we have learned how to prevent crises,
investing optimally and behaving rationally, ensuring permanent stability
and high growth." Not so fast.
When we start seeing cultural reasons for prosperity, it is time to run.
Not surprisingly, Minsky did not get much recognition in his lifetime.
He was not doing mainstream economics and was rejected by the Keynes-
ian school he came from, both because of his emphasis on uncertainty
and because of his criticism of deterministic economic models. Yet he
was never wholly forgotten, and after the global crisis in 2008 he be-
came quite the celebrity. People in the know often call crises a Minsky
moment because his theory explains them so well. “If there is excessive
optimism in the boom period, it will lead to an accumulation of conflicts
[in the economy], which may end up with a so-called Minsky moment,”
said Zhou Xiaochuan, then governor of the People’s Bank of China, in
October 2017. Meanwhile Minsky’s critics and those who shunned him
are long forgotten.
Minsky’s instability hypothesis implies that an observation of low risk
should create the conditions for a future crisis. This proposition was ex-
pressed in 2014 by the then chairwoman of the Federal Reserve Janet
Yellen: “Volatility in markets is at low levels. . . . [T]o the extent that low
levels of volatility may induce risk taking behavior . . . is a concern to me
and to the Committee.”11
The relationship between low volatility and crises has remained con-
jectural, not verified empirically. That got me curious, and I joined forces
with a couple of coauthors, Marcela Valenzuela and Ilknur Zer, to see if
such a relationship existed, writing a paper on the subject titled “Learn-
ing from History: Volatility and Financial Crises.” There are two reasons
this had not been done earlier. The first is that we need a very long history
of volatilities and crises, and the necessary data was not readily available.
To that end we collected data on monthly stock market observations,
spanning 60 countries and 211 years.
Still not sufficient. If we test a statistical model of the relationship be-
tween volatility and crises, we find it does not exist. That, however, does
not mean there is no relationship; it is merely more complex. What mat-
ters is volatility being different from what people have come to expect. To
test that hypothesis, we first need to estimate expected volatility and then
use deviations from that in a statistical model. Low volatility is, then,
when volatility is below expectations and, conversely, high volatility is
where volatility exceeds expectations. And low volatility turns out to be
statistically significant.
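The notion of "volatility below expectations" can be sketched numerically. The following is a simplified stand-in for the estimation in "Learning from History," using a trailing average as a crude proxy for expected volatility; the window length and the data are invented for illustration:

```python
import statistics

# Flag years where realized volatility came in below what people had
# come to expect, proxied here by a trailing ten-year average. A crude
# stand-in for the trend estimation in the actual paper.
def unexpected_volatility(vols, window=10):
    """Return (index, deviation) pairs; a negative deviation means
    volatility was unexpectedly low."""
    out = []
    for t in range(window, len(vols)):
        expected = statistics.mean(vols[t - window:t])
        out.append((t, vols[t] - expected))
    return out

# Ten calm years around 15%, two unusually quiet years, then a spike.
vols = [0.15, 0.14, 0.16, 0.15, 0.14, 0.15, 0.16, 0.15, 0.14, 0.15,
        0.08, 0.09, 0.30]
low = [t for t, d in unexpected_volatility(vols) if d < 0]
print(low)  # [10, 11]: the quiet years are flagged, the spike is not
```

The point of the exercise is that the raw volatility level carries no signal; only the gap between realized and expected volatility does.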
We find a clear chain of causality. Unexpectedly low volatility sends
the all-clear signal. We therefore have no qualms about taking on more
risk. To do that, we borrow money to make risky investments. In the
short run, all is fine, but over time, some or even many of the loans turn
out to be dodgier than expected. Perhaps they were used to invest in real
estate, creating a house price bubble that eventually bursts. Not surpris-
ingly, loan defaults mount, banks get into difficulty, and a crisis ensues
(Figure 17). Unexpectedly low volatility predicts both rapid credit growth
(increasing leverage in the banking system) and the incidence of crises up
to ten years into the future. Perhaps surprisingly, this causal relationship
holds only for unexpectedly low volatility. High volatility has no predic-
tive power for crises. It is only associated with them. In other words, high
volatility happens at the same time as crises and cannot be used as a crisis
predictor, only as a crisis indicator.
Goodhart’s Law
The last step in the evolution of how we see risk and uncertainty
is the analysis of what happens when governments try to regulate eco-
nomic activity by targeting measurements of the economy. This link was
made by my LSE colleague and, I am privileged to say, coauthor Charles
Goodhart. Very few are as adept at distilling complicated government
policies into the bare essentials, identifying what works, and clearly dem-
onstrating why other policies are destined for failure.
He is best known for Goodhart’s law: “Any observed statistical regu-
larity will tend to collapse once pressure is placed upon it for control
purposes.”12 His law has a natural implication for regulations. Once the
government tries to regulate some activities, they immediately become
unreliable as indicators of economic trends. The context of this statement
relates to the use of monetary policy to achieve the optimal trade-off
between inflation and unemployment. This all started when Bill Phil-
lips, a professor of economics at the LSE, wrote a paper implying that
there might be a long-term, stable, and negative relationship between
unemployment and inflation. The lower the unemployment, the higher
the inflation. Various economists then suggested that central banks could
exploit this trade-off, accepting a little more inflation in return for
lower unemployment.
The idea that the economy obeys laws akin to the laws of physics is
seductive. All we have to worry about is risk, and we can ignore uncer-
tainty. We then can control an unruly economy, allocate resources in the
best possible way, manage risk, and prevent calamities. If only it were
so. The economy is different from the physical world. It is based on the
behavior of human beings, many of whom are intent on doing exactly
what they want to do—rules or no rules. Recognizing that fact is the
genius of the six thinkers discussed above. They all understood it is only
in the physical world that we can describe the relationship between out-
comes and probabilities precisely—where all that matters is risk and un-
certainty is irrelevant. In physics, math captures all; not so in the society
of humans that form the economy.
tant. Every bridge is designed to move with the elements, and the Mil-
lennium Bridge was supposed to sway gently in response to the Thames
breeze. Soon after it opened it was hit by a gust of wind and moved side-
ways, as expected. The pedestrians’ natural reaction was to adjust their
stance to regain balance—lean against the movement. Herein lies the
problem. They pushed the bridge back, making it sway even more. As an
ever-increasing number of pedestrians tried not to fall, the bridge swayed
more and more. What happened was that when at least 167 pedestrians
crowded onto the bridge in windy conditions, a feedback loop emerged
(Figure 20). It was always present, lurking in the background, but needed
particular conditions to emerge.
It is the same in the financial system. The distinguishing feature of
all serious financial crises is that they gather momentum from the en-
dogenous responses of the market participants themselves, like a tropical
storm over a warm sea gains energy as it develops. As financial condi-
tions worsen, the market’s willingness to bear risk disappears because the
market participants stop behaving independently and start acting as one.
They should not do so, as there is little profit and a lot of risk in just fol-
lowing the crowd, but they can be forced to by circumstances.
So, how did Arup miss the potential for the Millennium Bridge wob-
bling? For the same reason, most financial regulations fail to prevent sys-
temic risk. Arup modeled the impact of individual members of the crowd
even those which average opinion genuinely thinks the prettiest. We have
reached the third degree where we devote our intelligences to anticipat-
ing what average opinion expects the average opinion to be. And there
are some, I believe, who practice the fourth, fifth and higher degrees.”1
The readers did not choose their favorite based on whom they thought
the prettiest. Rather, they voted strategically to maximize the chance of
voting with the majority and so get a lottery ticket. In the same way,
speculators don't choose stocks on the basis of the fundamentals of the
company; they try to out-think other speculators.
In 2002 Hyun Song Shin, then a professor at the London School of
Economics (and now the economic advisor and head of research at the
Bank for International Settlements, the central banks’ central bank), and
I proposed a new direction for the literature on risk and uncertainty. We
focused on the origin of risk and how the behavior of people leads to risky
outcomes. In our view risk can be either endogenous or exogenous.
A dictionary definition of the term “endogenous” refers to outcomes
having an internal cause or origin. How an infectious disease spreads
through a population is endogenous to the nature of that same popula-
tion. If we always keep a safe distance between ourselves and our fellow
countrymen, we will not get infected, but if we choose to live cheek by
jowl with other people, our chance of infection is high. Your chance of
getting a cold is endogenous to your behavior and those around you:
one reason why taking the New York subway can be hazardous to one’s
health; and why social distancing was so important in the Covid-19 crisis.
The opposite of endogenous is exogenous, whereby outcomes have an
external cause or origin. When an asteroid hit the Gulf of Mexico sixty-
five million years ago, wiping out the dinosaurs, that was an exogenous
shock. There was certainly nothing the dinosaurs did to cause their de-
mise. The risk of an asteroid hitting Wall Street is exogenous.
Suppose I wake up this morning and see on the BBC website that there
is a 50 percent chance of rain. If I then decide to carry an umbrella when
leaving my house, my doing so has no bearing on the probability of rain.
The risk is exogenous. Suppose, instead, I wake up this morning and see
on the BBC website that there is some negative economic news about the
United Kingdom, and in response I decide to buy a put option on the
pound sterling—I profit if the pound weakens. My actions make it more
likely the pound will fall. Not by a lot, mind you, a tiny, tiny amount. But
tiny is not zero—there is an endogenous effect.
I would not be doing anything wrong. On the contrary, I am behaving
prudently, hedging risk, like the pedestrians on the Millennium Bridge,
who were trying not to fall into the water. Just like a single pedestrian
on the Millennium Bridge did not make it wobble, me alone buying this
put option will not make the pound sterling crash. Another ingredient
is needed, some mechanism coordinating the actions of many people so
when we act as one, our combined impact is strong. The chance of that
happening is endogenous risk. All that is needed to turn some shock—
the financial market version of the gust of wind hitting the Millennium
Bridge—into a crisis is for a sufficient number of people to think like me
(all wanting to prudently protect themselves) for the currency to crash.
It is the self-preservation instinct of human beings that so often is the
catalyst for crises.
No crisis is purely endogenous or exogenous: they always are a combi-
nation of an initial exogenous shock followed by an endogenous response.
The same initial exogenous shock can one day whimper out into nothing,
and the next blow up into a global crisis. When Covid-19 infected the first
person in Wuhan, all the subsequent events could have been averted if
that person had behaved differently. But nobody knew at the time. The
reason the exogenous shocks blow up into a crisis is that they prey on hid-
den vulnerabilities no one knows about until it is too late.
prices, taking them away from their fundamental values. In extreme cases,
prices can become so distorted that they lead to undesirable extreme out-
comes, like bubbles and crashes. That said, these constraints don’t bite
all of the time, not even most of the time. It is only in times of stress
that they significantly affect market prices. That observation goes a long
way toward explaining why we decide to use riskometers and why they
perform so poorly. The riskometers describe the world quite well when
everything is quiet but do not capture behavior changes in times of stress,
which is why they are fair-weather instruments.
What does this mean for regulations? That we should do away with all
the rules and constraints? Far from it. The rules are by and large very ben-
eficial: helping to keep the financial system orderly, protecting investors,
and preventing abuse. However, they do have a dark side. Well-meaning
rules can act as a catalyst for making market participants act in unison,
just like the Millennium Bridge’s design made all the pedestrians march
like soldiers. A good example is buying stocks on margin.
The first time stock markets became accessible to the general public was
in 1920s America, as buying stocks was the preserve of the wealthy and
connected before that. After World War I anyone in America could buy
stocks. Not only that, but they also didn’t have to invest much money.
One could bring $100, borrow $900, and buy $1,000 worth of stock—be
leveraged nine times. This buying on margin became very popular.
Money poured into the stock market, and since the United States was the
only country where stock markets were fully open to the general public,
money flowed to Wall Street from all over the world. The result was the
mother of all stock market bubbles, one that came to a sticky end in Sep-
tember 1929, triggering the Great Depression.
The dark side of the well-meaning margin rules shows its face when
the bubble is bursting. Suppose I buy ten shares of IBM at $100 each,
$1,000 in all. I put up 10 percent, or $100, of my own money and borrow
$900. The entity financing this transaction will want some protection
against the stock falling in price, insurance called a margin. Suppose the
margin is equal to the initial $100. If IBM’s price falls by 5 percent to
$95, the investment is now worth only $950. I still owe $900, so my
net value ($950 − $900) has fallen to $50. However, the entity financing
the transaction insists on 10 percent of the original amount, $100, so I
have to make up the $50 shortfall. This is called a margin call, an immedi-
ate demand for $50. I have two choices: sell enough of the stock to pay
back the borrowed funds or find $50 elsewhere. Many investors will have
no choice—they can’t find the $50 in time and have to sell. How likely
is a day like that? I looked at the history of daily IBM stock prices and
got 22,696 observations. Out of those, there are 117 days when the price
of IBM fell by 5 percent or more, so the likelihood of a 5 percent price
drop is 0.52 percent, or about once every 9½ months. If the price falls by
10 percent or more, we are wiped out; such days have happened 13 times,
or almost one day out of every seven years.
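The margin arithmetic above can be written out directly. A minimal sketch using the numbers from the text (ten shares at $100, 10 percent down, a $100 maintenance margin):

```python
# The margin arithmetic from the text: $1,000 of stock bought with $100
# of our own money and $900 borrowed, with the lender demanding that
# equity be topped back up to $100 whenever it falls short.
def margin_call(shares, buy_price, new_price, own_money, required_margin):
    """Return the immediate cash demand (0 if equity still suffices)."""
    loan = shares * buy_price - own_money   # the $900 borrowed
    equity = shares * new_price - loan      # current value minus the loan
    return max(required_margin - equity, 0)

# A 5 percent fall to $95 leaves $50 of equity: a $50 margin call.
print(margin_call(10, 100, 95, 100, 100))   # 50
# A 10 percent fall to $90 wipes the equity out entirely.
print(margin_call(10, 100, 90, 100, 100))   # 100
```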
Investors may have no choice but to sell the stock to meet the margin.
Yet if a large number of investors are in the same situation, a significant
volume of sells hits the markets simultaneously. Prices fall even more,
creating yet more margin calls, more investors have no choice but to sell,
and prices fall more. An endogenous risk vicious feedback loop emerges
from the dark (Figure 22).
Here, the margins work like the wobbly Millennium Bridge. Pedes-
trians were trying not to fall, leaning against the swaying of the bridge,
perversely causing it to sway even more. In the case of margins, it is the
automatic protection for the lenders that is at the root of the damage.
One of the enduring images of the crash of 1929, even if now debunked,
is desperate investors jumping out of the sky-high windows of their Wall
Street offices. The putative reason was margins. Investors got margin
calls, their capital was wiped out, and they were set to be declared bank-
rupt by the end of business that day.
the future. Suppose the price of Apple is $200 today. If we buy a five-year
put option on Apple with a strike price of $180, we get the right to sell an
Apple stock for $180 whenever we want for the next five years.
What if no option is available? Or the option is too expensive? Leland’s
and Rubinstein’s portfolio insurance solved that with the magic of finan-
cial engineering: what is formally known as dynamic replication, whereby
one can create a financial instrument that looks and behaves like an op-
tion. So, if it looks like a duck, swims like a duck, and quacks like a duck,
it must be a duck, right? Not quite. There is a key difference. In portfolio
insurance, one has to buy or sell the asset being insured every day to
replicate the option correctly. If the price increases, we have to buy, and
if the price falls, we have to sell. In other words, a buy dear–sell cheap
strategy—we apparently violate the laws of supply and demand.
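The destabilizing buy dear–sell cheap dynamic can be sketched as a toy simulation. Every number here is invented for illustration: insurers sell a fraction of their holdings whenever the price falls, and their selling itself has price impact, which triggers further selling:

```python
# Toy endogenous-risk feedback loop: a small exogenous shock triggers
# forced sales, the sales depress the price, and the lower price
# triggers yet more sales. Parameters are invented for illustration.
def crash_cascade(price, holdings, impact=0.001, sell_fraction=0.5,
                  shock=0.03, rounds=10):
    path = [price]
    price *= 1 - shock                   # the exogenous trigger
    path.append(price)
    for _ in range(rounds):
        sold = holdings * sell_fraction  # sell-on-loss rule kicks in
        holdings -= sold
        price *= 1 - impact * sold       # selling has price impact
        path.append(price)
    return path

path = crash_cascade(price=100.0, holdings=100.0)
# The cumulative fall ends up far larger than the 3% initial shock.
print(round(path[1], 2), round(path[-1], 2))
```

Below some level of holdings the loop peters out harmlessly; above it, as with the Millennium Bridge crowd, the feedback dominates the initial shock.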
All that was needed for portfolio insurance to cause a crisis was for a
sufficient number of people to use it as a trading strategy. And that is what
happened in September 1987. The US government postmortem of the
1987 crash estimated that around $100 billion was placed in formal port-
folio insurance programs, representing around 3 percent of the precrash
market capitalization. From Wednesday, 14 October to Friday, 16 October
1987, the market declined by around 10 percent. The sales dictated by
those who used portfolio insurance strategies amounted to $12 billion,
but actual sales were only $4 billion. This meant a substantial amount of
pent-up selling pressure accumulated over the weekend, causing the S&P
500 index to fall by 23 percent on Monday, 19 October.
The stock market crash of 1987 is a classic example of the destabilizing
feedback effect on market dynamics of concerted selling pressure from
mechanical trading rules, like the sell-on-loss considered here. Again, just
as in the Millennium Bridge example, the underlying destabilizing be-
havior is entirely invisible so long as trading activity remains below some
critical but unknown threshold. It is only when this threshold is exceeded
that the endogenous risk becomes apparent, causing a market crash.
A common view of crises maintains that they arrive from the outside,
like the above asteroid that is about to hit Wall Street. That is not true.
The main driver of crises is endogenous risk, underpinned by the system’s
hidden mechanisms, just like sell-on-loss trading rules. Unfortunately,
there is too much of a tendency to focus on the triggers of crises, not on
the underlying vulnerabilities.
Figure 23. Money for nothing. Actual and perceived risk. Credit: Lukas
Bischoff/IllustrationX.
the exits, and the prices crash immediately. When things are good, we are
optimistic and buy, which endogenously increases prices—the bubble
feeds on itself. This eventually goes into reverse, and negative news preys
on falling prices, with the markets spiraling downward much faster than
they went up. Yet again, prices go up the escalator and down the lift (or
elevator if in America). Only after the prices have crashed does perceived
risk increase. By then it is too late.
So, what happens to actual risk through all of that? It increases along
with the bubble. After all, it captures the fundamental risk, the risk of a
market crash, hence falling when the bubble deflates (Figure 23). I often
wish someone could have convinced the financial regulators that the risk
after 2008 was much lower than the risk before, and they should have
been encouraging risk-taking and not derisking the system.
On the day I am writing this, the VIX is 35. Suppose you see the same
value when you read this and get the brilliant idea to log onto your online
broker and buy a short VIX fund—betting it will fall to the long-run av-
erage. Think twice. While it is almost certain the VIX will fall, the cost of
maintaining the short VIX fund might exceed 35 percent a year, so unless
the VIX falls relatively soon you are in for losses.
That can happen quite quickly. Just ask the unfortunate investors in
Nomura’s Next Notes S&P500 VIX Short-Term Futures Inverse Daily Ex-
cess Return Index ETN, which lost 96 percent the very same day Nomura
launched the fund on 5 February 2018. What happened is that the VIX
had fallen from 28 at the start of 2016 to 13.54 by the end of January
2018, and, if one follows the basic principles of momentum investing, the
VIX was destined to fall more. Nomura launched its short VIX fund on
Monday morning Japanese time with the previous week’s VIX value and
quickly sold ¥32 billion worth to investors. Unfortunately, heightened
uncertainty hit the US market the same day, so when the US stock market
opened the VIX quickly rose to 37.32—boom, Nomura’s luckless inves-
tors lost ¥32 billion in the blink of an eye.
Back to LTCM. It was eyeing the steady increase in VIX throughout
1998 and decided to get in on the action. However, because the profit
margin on betting that the VIX would fall was too small for LTCM it
opted to borrow to increase leverage. LTCM became so prominent in
the volatility market that it earned the nickname “The central bank of
volatility.”
Except, as LTCM piled into the market, the VIX continued to rise.
LTCM lost 6.42 percent in May 1998 and another 10.14 percent
in June. Then Russia defaulted in August, triggering a market panic, and
the VIX continued to rise, reaching 45 in August. By early September
LTCM’s equity had tumbled to $600 million. As the debts were constant
and the positions worth less, leverage rose sharply, reaching 125. LTCM
was in serious difficulty, getting margin calls it did not have the cash to
cover and hence was forced to liquidate its positions. That made the VIX
rise even more. A vicious feedback loop was set in motion: higher VIX
causing margin calls, leading to liquidation, making the VIX rise further,
repeat.
book A Random Walk Down Wall Street. But the good times don’t last.
Endogenous risk will show its face when some trigger pushes us to act in
concert with each other, like the pedestrians on the Millennium Bridge.
Price movements will be amplified—bubbles and crashes.
The spirals of coordinated selling that are unleashed by endogenous
risk are normally held back by the inherent stabilizing forces in the mar-
kets: the arbitrageurs, the hedge funds, the sovereign wealth funds, the
Warren Buffetts, the Soroses. They step up to the plate in crises to buy
cheap assets, putting a floor under prices. An excellent expression of this
is from Baron Rothschild in the eighteenth century, who is reported to
have said, “Buy when there’s blood in the streets, even if the blood is your
own.” The self-interest of the speculators benefits society, as in Adam
Smith’s classic statement, “It is not from the benevolence of the butcher,
the brewer, or the baker that we expect our dinner, but from their regard
to their own interest.”
What lets endogenous risk off its leash is decisions and policies that
serve to harmonize market participants’ behavior. Rules that prevent
them from erecting a floor under the markets by buying all the assets that
are so undervalued because of the crisis. The riskometer puts the contrast
between endogenous and exogenous risk in the sharpest relief. Almost
all methods of measuring risk are based on the assumption that risk is
exogenous because that is the easiest way to deal with risk. All one has to
do is to collect some daily historical financial data—market prices, credit
default swap spreads, interest rates, trading volumes—and feed them into
a riskometer. Nothing wrong with that if all we care about is exogenous
risk: short-term fluctuations.
If, however, we care about tail risk, banks failing, and crises, we have no
choice but to find some way to measure endogenous risk. That is not easy.
Large losses and crises happen because of risk everybody missed, so iden-
tifying endogenous risk before it is too late is like searching for a needle
in a haystack, except that we don’t even know what the needle looks like.
We know only that the needle is there when we stick our hand into the
haystack and it gets pricked. After a crisis, everybody knows what went
wrong, and we prevent a repeat: closing the barn door after the horse has
bolted. Meanwhile, the forces of the next crisis start gathering strength
somewhere where nobody is looking.
The 2008 crisis is an excellent example of how we missed all the warn-
ing signs. While I suspect many people thought financial products based
on the American housing market were dodgy and opted not to buy them,
a few profited from it. The book (and movie) The Big Short by Michael
Lewis tells the story of a few plucky players who did exactly that. Mean-
while, no government authority had an inkling of the danger lurking
right under their noses. The lesson from 2008 is that the buildup of en-
dogenous risk happens almost wholly out of sight. Eventually, the vulner-
abilities hit the hidden trigger—risk got a little bit too high on a certain
day when the markets had little tolerance for it—and it all blew up.
The challenge for investors and the financial authorities is that while
endogenous risk is ever-present, it cannot easily be measured. As endog-
enous and exogenous risk move in opposite directions, the riskometers
get it wrong in all states of nature, reporting too little risk before a crisis
and too much after it, just like the European Central Bank's systemic risk
dashboard.
We often run into street artists who, for a small fee, will draw a
caricature of us. One that exaggerates facial features for comical effect,
perhaps emphasizing the nose and diminishing the chin. One example is
the caricature of me made by Ricardo Galvão, who did many of the draw-
ings in this book (Figure 24). Risk measurements are just like those street
caricatures. The riskometer is the creation of its designer, who makes
all sorts of decisions, balancing ease of implementation with accuracy,
emphasizing what he cares about, and dismissing the rest. Riskometer de-
sign is highly subjective, and two designers faced with the same problem
will create riskometers that measure risk quite differently.
The reason for all of the subjectivity is that risk is not directly mea-
sured, like prices or temperature. It can be inferred only by how prices
move, and to do that we need a model. And since by its very nature every
model is subjective, the risk measurements are a product of the underly-
ing assumptions. And that means the accuracy of risk measurements is
much lower than is generally presumed. Certainly much less reliable than
the thermostat that keeps the risk manager’s office at a steady 22°C. More
important, all that subjectivity makes riskometers easy to abuse. Perhaps
with blatant dishonesty, as when someone deliberately tweaks the models,
so they tell us risk is $1 million when it is really $5 million. Plenty of that
going on, but I think intellectual laziness is more common. Using risk-
ometers to pretend risk is measured and managed so we can demonstrate
we are diligent and compliant. We follow all the regulations while doing
something else. And because it is so hard to verify that the riskometers are
accurate there is little the regulators, the compliance officers, and all the
other guardians of good practices can do.
A friend of mine, Rupert Goodwin, used to make a living selling risk
systems. One day he went to a bank that had just been audited by the lo-
cal financial authority. The regulator came in and asked the risk manager
if he used a riskometer. When the risk manager said yes, the regulator
ticked off a box and left. The most obvious way a riskometer goes wrong
is when it is simply a tick-the-box exercise.
to eat them, you are better off not knowing how they are made. Some
two decades ago I made exactly that mistake when I read a book called
Fast Food Nation by Eric Schlosser, which went into the gory details of
how fast food is made. Not many books have changed the way I live my
life, but this one did, and I have studiously avoided fast food ever since.
I would have been happier if I hadn’t read Fast Food Nation—ignorance
can be bliss. I suspect most senior bankers and regulators take the same
view, happier not knowing how the riskometers that govern their world
work. The easiest way to do that is to choose a measurement of risk that
somehow maps all the portfolios’ complexities and even entire financial
institutions’ risks into one number—Value-at-Risk. So long as the Value-
at-Risk number is within an acceptable range, all is fine.
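The idea of collapsing a whole portfolio into one number can be sketched with the simplest variant, historical-simulation Value-at-Risk. This is the generic textbook method, not any particular bank's model:

```python
# Simplest historical-simulation Value-at-Risk: the loss that past
# daily returns exceeded only (1 - confidence) of the time. A generic
# textbook sketch, not any particular bank's methodology.
def value_at_risk(returns, portfolio_value, confidence=0.99):
    losses = sorted(-r * portfolio_value for r in returns)
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

# 100 hypothetical daily returns from -5.0% to +4.9%.
returns = [i / 1000 for i in range(-50, 50)]
var95 = value_at_risk(returns, 1_000_000, confidence=0.95)
print(round(var95))  # 46000: roughly 5% of days lose more than this
```

Note what the method cannot see: an asset with a steady income and rare, enormous losses (like the CDOs discussed later) can show a comfortably low Value-at-Risk right up until the rare loss arrives.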
The creator of the all-important Value-at-Risk riskometer, the JP Mor-
gan bank, was badly hit by its own creation three decades after bringing
it into the world. That is when a member of the bank's London staff,
nicknamed the London Whale, caused a $5.8 billion loss because he was
apparently trying to profit from the bank’s Value-at-Risk. It started with
Bruno Iksil, a senior trader in a division called the chief investment of-
fice (CIO), whose function was to hedge the bank's credit risk. In early
2012 the Value-at-Risk for the CIO division exceeded $95 million, prob-
lematic since the total target Value-at-Risk for the entire bank was only
$125 million.1
There are two ways to deal with this problem. Either reduce the risk of
the portfolio or change the riskometer. JP Morgan seems to have chosen
the latter. How do we know this? The person in charge of the Value-at-
Risk model at the CIO, Patrick Hagan, sent an email to his colleagues
with the subject “Optimizing regulatory capital,” using his private Yahoo
account. The email was subsequently made public by the Senate com-
mittee investigating the loss.2 And that exposes the crucial difference
between thermometers and riskometers. A favorite saying of President
Truman was, “If you can’t take the heat, get out of the kitchen.” When it
comes to risk: If you can’t take the risk, change riskometers.
As it turned out, JP Morgan’s new riskometer missed out on some of
the critical risks facing the CIO, risks that soon would lead to the $5.8 bil-
lion loss. In the words of JP Morgan’s quarterly securities filing: “This
portfolio has proven to be riskier, more volatile, and less effective as an
economic hedge than the firm previously believed.” Of course, if one de-
liberately picks a riskometer that signals the lowest risk, the portfolio will
be riskier than we think it is.
much else.3 Doing exactly that is a blatant abuse of the bank’s risk man-
agement system and presumably would be picked up by the risk con-
trollers. However, if one goes about it more subtly, it will likely never
be detected.
There are many similar ways one can manipulate riskometers. Some
are easy to detect, others are known only to the person taking the risk.
If you are skeptical and think I am just an academic making up extreme
examples that would never see the light of day in the real world, think
again. The UBS bank failed in 2008 precisely because of what I’m describing here.
Many banks failed in the 2008 crisis, but the most interesting for the
topic at hand is the Swiss bank UBS. We are quite fortunate that the
Swiss Federal Banking Commission had UBS produce a postmortem
titled Shareholder Report on UBS’s Write-Downs. It is a fascinating docu-
ment, clearly and clinically highlighting all that went wrong: a superb
example of a desire to manipulate the risk management process and, ul-
timately, self-delusion. The main culprit was $19 billion in losses on col-
lateralized debt obligations (CDOs) composed of US subprime mort-
gages in 2007.4
Given that it used Value-at-Risk to measure the risk, UBS apparently
did not realize the CDOs were risky. Value-at-Risk is the worst risk-mea-
surement methodology for CDOs because by construction it cannot pick
up the risk in an asset with a steady income and the occasional substantial
losses.5 The bank could have done much better. It had comprehensive de-
tails on each mortgage it bought, and nothing prevented it from analyz-
ing these data, which would have told it that something was fishy. Indeed,
some of UBS’s competitors did avoid large losses on subprime mortgages
by carefully analyzing the data they had on them. The UBS risk managers
opted for a riskometer that was tailor-designed not to capture subprime
mortgage risk. This fed into the calculations of the bank’s overall riskiness
and was dutifully reported to senior management, the Swiss authorities,
and UBS’s auditor, Ernst & Young. None was concerned. UBS lost sight
of the fact that it was deceiving itself when it thought it was fooling the
regulators.
Another exploit of riskometers is to take advantage of the risk manage-
ment techniques that are meant to protect us. Use the regulations to hide
risk. Under the Basel regulations, banks are supposed to measure their
trading risk and report the risk numbers to the authorities. The technique
for doing so is one beloved by consultants and designers of dashboards:
traffic lights. The traffic-light rule says that a bank is allowed to have three
days a year where its losses exceed the Value-at-Risk measurement. The
banks are given some leeway, and so long as they exceed the Value-at-
Risk only four times a year they stay in the green zone. If they exceed it
between five and nine times, they are in the amber (yellow) zone, and if they exceed it more than nine times, the red zone.
A bank in the green zone must hold capital at least equal to three times
the Value-at-Risk; if in the amber zone, three to four times Value-at-Risk;
if it hits the red zone, four times the Value-at-Risk, along with the likelihood of being subjected to intrusive scrutiny. Obviously, the higher the Value-at-Risk, the more the bank has to hold in low-yielding capital and the less it can invest in
high-return, high-risk assets. It’s no surprise that the bank wants to mini-
mize its measured risk, hence the amount of costly capital it has to hold.
Recall the example of the London Whale.
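In code, the zones and capital multipliers described above look roughly like this; the linear step inside the amber zone is my illustrative assumption, not the exact Basel schedule.

```python
def traffic_light_zone(exceedances: int) -> str:
    # Days per year on which losses exceeded the reported
    # Value-at-Risk: up to 4 is green, 5 to 9 amber, 10 or more red.
    if exceedances <= 4:
        return "green"
    if exceedances <= 9:
        return "amber"
    return "red"

def capital_multiplier(exceedances: int) -> float:
    # Green banks hold 3x Value-at-Risk in capital, amber banks
    # between 3x and 4x, red banks 4x (plus intrusive scrutiny).
    zone = traffic_light_zone(exceedances)
    if zone == "green":
        return 3.0
    if zone == "amber":
        # 0.2 per extra exceedance is an illustrative guess,
        # not the regulatory schedule.
        return 3.0 + 0.2 * (exceedances - 4)
    return 4.0
```

The incentive is plain from the code: every unit of reported Value-at-Risk is multiplied into costly capital, so the cheapest “risk reduction” is a smaller reported number.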
Imagine the following: Suppose a bank knows its true Value-at-Risk,
but the regulator knows only what the bank tells it. Further, suppose the
bank fully intends to comply with the letter of the regulations, the traf-
fic lights, but still wants to take on more risk. What will happen? Chen
Zhou, a coauthor, and I looked into this question in a paper titled “Why
Risk Is So Hard to Measure.” As it turns out, it is quite easy to under-
report risk while remaining fully compliant with the traffic lights rule.
Particularly interesting to us was how banks react to being controlled by
the traffic rule. They will change the composition of assets, shifting from
assets that fluctuate a lot in price (but not excessively) to assets whose
prices are usually relatively stable, but on occasion are subject to sub-
stantial losses. In the language of finance, the banks come to prefer assets
with lower volatility and higher tail risk. Such assets have the advantage of
making the bank look good, but at the expense of making it more likely
the bank will fail. Like UBS’s CDOs.
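A small simulation shows the effect. The two assets below are invented for illustration: asset A is visibly volatile with no hidden tail, while asset B is calm almost every day but suffers a rare large crash. Measured by a historical 99 percent Value-at-Risk, B looks safer, yet its worst day is far worse.

```python
import random

random.seed(1)

def var_99(returns):
    # Historical-simulation 99 percent Value-at-Risk.
    return -sorted(returns)[int(0.01 * len(returns))]

# Asset A: normally distributed, with a visible 2 percent daily volatility.
asset_a = [random.gauss(0.0, 0.02) for _ in range(10_000)]

# Asset B: tiny daily moves, but a 30 percent crash on roughly
# 1 day in 500 (all numbers illustrative).
asset_b = [-0.30 if random.random() < 0.002 else random.gauss(0.001, 0.005)
           for _ in range(10_000)]

looks_safer = var_99(asset_b) < var_99(asset_a)  # B reports less risk
more_dangerous = min(asset_b) < min(asset_a)     # yet B's worst day is worse
```

Because the crash days sit beyond the 99th percentile, the riskometer never sees them, which is exactly the asset shift the traffic-light rule rewards.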
Finding such assets is easy. As we had CRSP, the entire database of
daily stock returns in the United States, on hand, we simply searched for
stocks that simultaneously met some profitability criteria and standard risk-management requirements while staying in the green zone. The
results confirmed the theoretical prediction. Because of how the regula-
tions worked, the best way to bypass them was to increase the very risk we
don’t want the banks to take. In all fairness, the example above applies to
Basel II, which now has been surpassed by Basel III, which also replaces
Value-at-Risk with Expected Shortfall, which is not subject to this par-
ticular exploit.
Figure 26. Capital ratios for European banks before the 2008 crisis.
Credit: Lukas Bischoff/IllustrationX.
All the riskometers and clever pricing models failed to anticipate such
an eventuality. Losses were substantial. Global Alpha lost 7.7 percent in
July 2007 and a further 22.7 percent in August. It was not alone, as most
other quant funds suffered similar losses. By comparison, the US stock
markets fell 3.3 percent in July and increased 1.3 percent in August. Global
Alpha was eventually closed in the autumn of 2011 after even more losses.
What happened was that the masters of the quant funds overestimated
their cleverness. Each thought it was the smartest, using unique and
state-of-the-art techniques to make money. They weren’t, as underneath
they were all doing the same thing. It wasn’t visible when nothing was
happening as in the tranquil years of the great moderation. All that was
needed was a trigger. Typical endogenous risk crash, just like the wobble
of the Millennium Bridge.
Crises happen when a hidden risk factor hits an unseen trigger, pre-
cisely what happened here. The hidden risk factor was how all the algo-
rithms were programmed to react in the same way to market turmoil.
Completely unknown until a sufficiently large price drop triggered their self-preservation tendencies. David Viniar, then Goldman’s chief financial officer, was tasked with explaining the Alpha fund’s losses to the world:
“We were seeing things that were 25-standard deviation moves, several
days in a row.” Under the normal distribution, the only way such a state-
ment has any meaning, a 25-standard-deviation loss is expected to happen one day out of 10^137 days, that is, 1 with 137 zeros after it. For comparison,
NASA tells us that the universe’s age is about 14 billion years, while the
Earth is a lot younger at 4.5 billion years.
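The arithmetic behind that claim is easy to check under the normal distribution, the only distribution in which Viniar’s statement has meaning:

```python
import math

# Probability that a standard normal variable falls at or below -25,
# i.e., a single-day loss of 25 standard deviations.
p = 0.5 * math.erfc(25 / math.sqrt(2))

# Expected waiting time for one such day, in days.
days = 1 / p

# The age of the universe, roughly 14 billion years, in days.
universe_days = 14e9 * 365.25
```

`days` comes out on the order of 10^137, dwarfing the age of the universe by more than a hundred orders of magnitude; seeing several such moves in a row says the model, not the world, is broken.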
Either the quant funds got really, really unlucky or they weren’t nearly
as clever as they thought they were. Through the lens of endogenous
risk, the 2007 Quantland crisis was inevitable. The combination of the
great moderation with aggressive market participants aiming to exploit
the tranquility. After all, what is our natural reaction when we perceive
risk as being very low? Certainly not to sit on our hands and be conser-
vative. No, we follow Minsky and take more risk. In the beginning with
no impact on the markets, but eventually more and more funds pile into
the action. Hugely profitable at the start. Returns are higher and higher
because everybody enters the markets, while measured risk remains low
because the prices are going up steadily but with little volatility.
The riskometers couldn’t fathom such an eventuality in 2007. Because
it wasn’t in the data, it wasn’t detected. It wasn’t in their DNA. Funda-
mentally, when the quants started to use the riskometers to make invest-
ment decisions they changed the financial system, undermining the qual-
ity of the risk measurements. The 2007 quant crisis was inevitable. Risk is
endogenous even if the riskometers assume otherwise.
about creating CDOs and marketing blurbs on why they were fantastic.
All about alchemy: the ease and benefit of turning high-risk junk assets
into gold. Nothing about risk.
I ended up calling around former students working in the financial in-
dustry who were kind enough to connect me to experts willing to talk me
through the mechanics of CDOs. What I learned was decidedly fright-
ening. If I want to get the current price of a stock, all I have to do is to
look up the price on Bloomberg. Not that simple for CDOs, as there is
no market price to be found there. Instead, I need a complicated model
just to get the price. And an even more complicated model is required
to get the risk. Plenty of places for things to go wrong. Not only that,
but the models used to get the prices and risk of the CDOs contained a
fatal flaw. The way a CDO works is that a bank buys risky debt. Subprime
mortgages, junk bonds, or anything, so long as it is risky. Suppose I
buy one hundred subprime mortgages. Over time some of the mortgage
borrowers may get into difficulty. They may get sick or lose their job or
whatever, and default. However, it is quite unlikely that all one hundred
default.
Now comes the alchemy. Every month I expect one hundred mortgage
payments to be made. I then promise Mary the first five payments, no
matter which mortgage they come from. Yiying then buys the right to
get the next twenty payments. The important thing is that if ninety-five
of the mortgages default, Mary gets all her payments but Yiying nothing.
Morgane buys the right to the next sixty-five payments, and Paul buys
the right to the payments from the last ten mortgages to be paid every
month. Mary’s investment is the safest and Paul’s the riskiest, so Mary
will pay the most for her right and Paul the least. These rights are called
tranches (French for “slices”).
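The waterfall just described can be written down directly. The function below is a stylized sketch of the payment priority, with one payment per surviving mortgage per month, as in the example:

```python
def tranche_payouts(payments_received: int) -> dict:
    # Payment priority from the example: Mary holds the first 5
    # payments, Yiying the next 20, Morgane the next 65, and Paul
    # the last 10 of the 100 monthly mortgage payments.
    sizes = {"Mary": 5, "Yiying": 20, "Morgane": 65, "Paul": 10}
    remaining = payments_received
    payouts = {}
    for holder, size in sizes.items():
        payouts[holder] = min(size, remaining)
        remaining -= payouts[holder]
    return payouts
```

If 95 of the 100 mortgages default, only 5 payments arrive: Mary is made whole and everyone below her gets nothing, which is exactly why her tranche prices highest and Paul’s lowest.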
Mary’s right to the first five payments is very safe and gets a AAA rat-
ing from the credit-rating agencies. These tranches are often called super
senior because it is so certain they will pay out. Yiying’s tranche is less safe
and gets only a AA rating, while Morgane’s are even lower at BBB. The
middle tranches are known as mezzanine, Italian for “intermediate floor.”
Paul is the last to be paid, and his tranche is called equity or, colloquially,
toxic waste. Suppose we assume that if one family defaults on its mortgage, that says nothing about the likelihood of any other family defaulting.
Not many buyers wanted the equity tranches, so some smart bank-
ers found a way to tranche them up in a product called CDO-squared.
Clever, except the model risk is amplified because now we have the model
risk of the initial CDOs and the model risk of the CDO-squared. Madness.
We even got some CDO-cubed!
Because the super senior and equity tranches were hard to sell, the
banks often held them on their books. Naturally, because the super senior tranches were AAA rated, no problem. While the equity tranches were risky,
the models didn’t find them excessively risky, so the capital charges were
limited. That is what caused all the bank problems in the second part
of 2007. As the CDOs got downgraded, all of a sudden the banks were
holding much riskier assets and they had a much lower value than initially
assumed. The banks found themselves in serious difficulty and close to
violating their capital constraints. The technical reason why UBS failed.
If one of the criteria for high profits is maximizing the use of junk as-
sets, it shouldn’t be surprising that it did not end well. Richard Bitner
describes this eventuality in his book Confessions of a Subprime Lender:
An Insider’s Tale of Greed, Fraud, and Ignorance. Nobody cared about
quality. All they wanted was the highest possible number of the riskiest
mortgages imaginable.
not too much. There is a perpetual tension between the risk managers’
and traders’ objectives, what economists call a principal-agent problem.
Because both the traders and risk managers know all about the strengths
and weaknesses of riskometers, a cat and mouse game ensues. The traders
try to exploit these weaknesses, whereas the risk managers try to close the
loopholes. One trick used by the risk managers is Chinese walls, a term
dating back to the Great Depression when the US government wanted
to minimize the conflict of interest between a bank producing suppos-
edly objective research on a company while also handling its initial public
offerings. Banks were allowed to keep both functions in the same bank
but were required to put the Great Wall of China between them. The
risk manager’s ideal case is when the riskometer is entirely hidden from
the traders—the Great Wall of China stands between them. The night-
mare for any risk manager is exercising control over a trader who is an
expert in riskometers. Even worse is when the traders used to work in
risk management or compliance before being promoted to trading and
know all the tricks. Jérôme Kerviel, the Société Générale trader who cost
his employer $6.9 billion, started his career in compliance before being
promoted to trading.
If Mary is a proprietary trader, all the risk manager should tell her is
that her risk is too high or too low or just right. He should not tell her
that her Value-at-Risk is $2 million and should be only $1.5 million or
even that he is using Value-at-Risk, and he should certainly not tell her
that he got the $2 million from a GARCH model. The more the trader
knows about the risk measurements, the easier it becomes for her to ma-
nipulate the risk management process. This is precisely what the student
in my executive education course was trying to accomplish.
The practical problem is that while Chinese walls sound great in the-
ory, they don’t work well in practice. After all, the risk model cannot be
hidden because it must produce results. Meanwhile, as profits are threat-
ened the Chinese walls lead to accusations of unfairness, discretion, and
incompetence. The resulting political game will be won by risk manag-
ers only immediately after a crisis, when memories of past losses are fresh
and fears widespread. In normal times the profit-making trader will have
much more political power than the risk manager. Senior management
turns a blind eye to transactions when significant profits are made. Just
like Credit Suisse, which muzzled its risk management department and
for its trouble lost at least $4.7 billion on Archegos Capital and cost
its clients up to $3 billion from the collapse of Greensill Capital. All in
early 2021.
Soon after I wrote “Risk and Crises,” a blog piece on the prob-
lems of riskometers, I got an interesting comment from a risk manager:
“As a risk manager I fully recognise the shortcomings of any model based
on or calibrated to the past. But I also need something practical, ob-
jective, and understandable to measure risk, set and enforce limits, and
encourage discussions about positions when it matters. It is very easy to
criticise from the sidelines—please offer an alternative the next time.”6
He is right. It is easy to fall into the trap of excessive nihilism, criticizing
without providing alternatives. The riskometer is useful: we have to manage risk, and if we don’t use a riskometer to do so, we might as well stick a wet thumb in the air. So, in response, I joined forces with a friend,
Robert Macrae, and wrote a couple of blog pieces titled “Appropriate
Use of Risk Models.” Our fundamental point is that risk is defined only
in terms of outcomes we wish to avoid, and that how best to estimate risk
critically depends on what we want. In chapter 5 above, on the myth of the riskometer, I gave the example of Paul, Ann, and Mary, who each invest
in Google stock but for very different reasons. Even though each holds
the same portfolio and has access to the same technology, they measure
risk differently. Simply because Mary cares about risk seventy years in the
future, Ann in the next six months, and Paul the following week.
Robert and I came up with five principles of the appropriate use of
riskometers. The first is that risk comes from the future, but we know
only the past. All a riskometer can do is project the past into the fu-
ture, and many conditions have to hold for the projection to be accurate.
The actual history has to represent the future, with no nasty surprises in
store—ergodicity. Meanwhile, the person creating the riskometer has to
sticking the proverbial wet thumb in the air. While not perfect, riskometers indicate risk and, if used correctly, can be quite useful. If abused, they can similarly cause a lot of damage. Just like medication. Good
risk managers tell me they use many riskometers simultaneously. By get-
ting multiple measurements of the same risky position and knowing how
each of the riskometers performs, they can combine subjective judgment
with the objective output of the riskometers in their decisions.
The potential for manipulation is real, as the examples of JP Morgan’s
Whale and UBS show, not to mention all the rogue traders. Manipulation
is hard to detect until everything goes awry, no matter how well inten-
tioned a bank is. Even if the senior management fully intends to obey the
spirit and the letter of the rules, internal incentives such as bonuses and
promotions will lead traders to take advantage of the risk management
system: to abuse the riskometer. What is especially worrying is how regu-
lations can encourage banks to take the worst type of risk—tail risk—and
so perversely make it more likely the bank will fail. The reason is simple: the tendency to use riskometers not as a risk-control device but to maximize profits, the London Whale problem.
Does anyone care? Despite all the colossal failures in 2008, riskometers
are still in widespread use, and banks and regulators are increasingly us-
ing them. The simple truth is that the modern financial system would not
function without riskometers. If we ask someone to manage our money,
we need to monitor and control the risk they take. In practice, that means
using riskometers. Just remember not to ask for too much. To those
whose job it is to care about the entire financial system’s stability, the
riskometers’ failings are especially pertinent.
or a socialist hating the right-wing financial elite who would like nothing
more than to tax and regulate them out of existence. The only question
to consider: Is the banker telling you the truth? If you think she is, you
have no choice but to give in.
The reason is that no government can ignore a systemic crisis. The con-
sequences are so severe that we would do almost anything to prevent one.
Worse still, the empirical evidence suggests that governments lose power after a major crisis. You would be out of a job, and that would not do. When faced with the $50 billion demand,
the more information you have about your financial system and the bank
in question, the better. That will not only help you figure out whether
you are being played, but also aids you in finding a better, cheaper solu-
tion. If you had had access to good information earlier, you might even
have been able to prevent the bank from failing in the first place.
The nineteenth-century US federal government did not impose much
in terms of regulations on banks, leaving that to the states. The United
States did not even have a central bank. When a crisis came along, the
private sector would sort it out. Not surprisingly, the United States was
very crisis prone, and there were increasing demands to establish a central
bank and start regulating finance. Still, politics was not in favor. It took
the severe financial crisis of 1907 to finally overcome the political opposi-
tion, and that was because of how the private sector dealt with the crisis.
In 1907 the most important banker in the country was John Pierpont
Morgan, the founder of the eponymous bank, which the 1933 Glass-Steagall Act later forced to split into JP Morgan and Morgan Stanley in 1935. He acted
as the de facto central bank, providing liquidity himself and leaning on
other bankers to do the same, in the process enriching himself and his
buddies while punishing his enemies. The way he behaved in the 1907
crisis was seen as being beyond the pale, and the political leaders realized
that a central bank could not be avoided. It could not be called a central
bank for political reasons, but the Federal Reserve System, or the Fed,
was set up in 1913.
Even then, the Fed did not get many powers, nor did it want them.
During the Great Depression the Fed did everything it could to do noth-
ing. The definitive 1963 study by Milton Friedman and Anna Schwartz
showed that the Fed’s failure to increase liquidity in the Depression was
McChesney Martin Jr., the former Fed chairman, who said the Fed’s
most important job was “to take away the punch bowl just as the party
gets going.” The financial authorities face a difficult Goldilocks challenge:
not regulate too little, as the United States did in the nineteenth century;
and not too much, like Cuba or North Korea today. It has to be right.
When O&G failed, Walter Bagehot, then editor of The Economist, wrote
that the partners ran their business “in a manner so reckless and foolish
that one would think a child who had lent money in the City of London
would have lent it better.” The partners believed they would be bailed
out by the Bank of England, which sent three bankers to look at O&G’s
books. It did not take them long to realize O&G was broke. The Bank of
England faced a delicate decision: if O&G failed, there would be panic; if
it were saved, the other firms in the finance game would also expect to be
rescued. The Bank of England chose to let O&G fail. It is not clear why
it made that choice. Moral hazard was clearly a significant concern, but
other factors weighed on it as well. The Bank of England was a private
institution competing with the likes of O&G, and its future profits were
likely to be enhanced by the failure of an important competitor. Because the Bank of England neither bailed anyone out nor provided support of any kind, even refusing to grant loans against government securities, panic
spread through the banking system. The market for otherwise safe assets
like government bonds—gilts—dried up.
The financial institutions of the nineteenth century were partnerships,
with the notable exception of the Bank of England. This meant that
O&G’s partners should have been liable for all losses, except they man-
aged to incorporate in the nick of time. The third senior partner had the
surname Barclay, and a few years later he started a bank with the epon-
ymous name, where the last Overend became the largest shareholder.
The partners of O&G eventually faced private prosecution because the
government did not feel they had done anything wrong and refused to
prosecute them. The only crime they could be charged with was theft. In
the private prosecution, the partners hired the government’s most senior
lawyer to defend them, so the same lawyer both represented the accused
and was the boss of the judges ruling on the case. The partners were
acquitted.
At the time, there were no established procedures for dealing with the
failure of a big bank. But as the O&G 1866 crisis turned out to be one of
the worst crises of the century, the government had no choice but to do
something. It made the Bank of England commission Walter Bagehot to
investigate how it should respond to future crises. He published a white
and why did the authorities not do anything? All we had to do was to
restrict credit. Really simple stuff.” Yet bubbles still happen. The prob-
lem is that their very existence can be verified only after they burst—
we need the benefit of hindsight. We can never be certain before the
bubble bursts. Getting it wrong, calling something a bubble incorrectly—what statisticians call a type I error, the incorrect rejection of a true hypothesis—can be very costly.
Suppose the Chinese authorities had decided in 1995 that “oops, China
is overexpanding, we are in a bubble and have to slow down.” Then
China would not have enjoyed the 470 percent economic growth over
the next couple of decades. One can just as easily become dazzled by all
that growth. China will stop growing one day, and then its market will be
seen as having been in a bubble. But crying wolf is a game one is destined
to lose. It could happen tomorrow or in twenty years.
Even supposedly well-identified bubbles, like the internet dot-com
bubble of the 1990s, might not necessarily be such a bad thing. Bubbles
can cause prices to explode, but they also release a lot of funds for risky
investments, some of which end up being the seed money for the lead-
ing companies of the future. The online giants of today, like Amazon
and Google, originated from the dot-com bubble. Intel and many of the
semiconductor companies came out of the Nifty Fifty bubble in the early
1970s, while many companies like Coca-Cola had their IPOs during the
Roaring Twenties bubble. We don’t even know if we should call these
episodes bubbles. The Nobel Prize–winner Eugene Fama, the godfather
of the theory of efficient markets and the person just about least likely
to believe in bubbles, argued recently, “For bubbles, I want a systematic
way of identifying them. It’s a simple proposition. You have to be able to
predict that there is some end to it. All the tests people have done trying
to do that don’t work. Statistically, people have not come up with ways
of identifying bubbles.”
It is difficult for the financial authorities to meet the Goldilocks chal-
lenge posed by bubbles. The political economy is always in favor of allow-
ing the bubble to grow. And the authorities aiming to prick a bubble face a dual risk. If they prick it, they will be accused of destroying growth. If they let it grow and things then blow up, they will
be accused of incompetence or worse.
The Icelandic banks found a neat way around this. Bank A sold the
newly issued equity to bank B, and bank B sold its newly issued equity to
A. While this might mean the banks are now exposed to each other, they
then made a contract for difference, compensating each other for losses
and reimbursing each other for profits. If the price of A fell, A would
make up B’s losses, and, conversely, if A’s stock price went up, B would
give the profits to A. To the Icelandic regulator and its strict tick-the-box
legal point of view, no problem. Of course, it was all fake and, while le-
gal, did not afford any protection. Even worse, it gave the appearance of
protection, encouraging market participants to engage with these banks
as if they were safe. After the crisis, I mentioned this to a few acquain-
tances in supervisory agencies in other countries. They were aghast, all
saying that this sort of transaction would not have been allowed in their
country. Even if not explicitly forbidden, it would be seen as a violation
of the regulations’ spirit.
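The mechanics of that arrangement can be sketched in a few lines. This is a stylized illustration, not the banks’ actual contracts: bank B’s mark-to-market gain or loss on bank A’s shares is exactly offset by the contract for difference, so no real risk capital ever changes hands.

```python
def b_net_pnl(a_price_move: float, shares_of_a_held_by_b: float) -> float:
    # B's mark-to-market profit or loss on the A shares it "bought" ...
    mark_to_market = shares_of_a_held_by_b * a_price_move
    # ... is exactly offset by the contract for difference: A makes up
    # B's losses, and B hands any profit back to A.
    cfd_transfer = -mark_to_market
    return mark_to_market + cfd_transfer

# Whether A's share price crashes or soars, B's net position is zero,
# so the "capital" B supposedly injected into A bears no risk at all.
crash_outcome = b_net_pnl(-50.0, 10_000)
boom_outcome = b_net_pnl(30.0, 10_000)
```

The newly issued equity therefore absorbed no losses, which is why it was fake capital in all but the tick-the-box legal sense.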
But if the regulators were too relaxed before the crisis, there is a danger
of going too far in the other direction after it has happened. The Ice-
landic regulator again shows this better than most. Ever since the 2008
crisis its main preoccupation has been to impose rules that would have
prevented the crisis, closing the barn door after the horse escapes. Ice-
land has some of the world’s highest bank capital charges and extensive
restrictions on what the financial system can get up to. Consequently,
financial services in Iceland are the most expensive in Europe, and it is
really hard to make the sort of risky investments so necessary to underpin
future growth. Instead, Iceland bet its fortune on tourism, an industry
that does not need a lot of capital, only to see it come crashing down with
the Covid-19 crisis in 2020.
The Icelandic regulators are hardly unique in failing the Goldilocks
challenge, even if they fell for it particularly hard. It has happened in all
jurisdictions and exemplifies a particular form of procyclicality. During
upswings, regulations become increasingly lax, amplifying the boom; af-
ter a crisis, they become excessively strict, magnifying the downturn. By
2020 the cry for fewer regulations was getting increasingly loud, and,
with Covid-19 everywhere, financial stability played second fiddle to eco-
nomic prosperity. Many countries already have relaxed capital standards,
and most signs point to even more relaxation. Unfortunately, the last cri-
sis episode always has an undue influence on how we think about future
crises. The policy makers are always at risk of falling for the successful
general’s syndrome, just like the French army in the 1930s.
take on too much risk, finance the all-important small- and medium-sized
enterprises, and at the same time not go bust. It is one of those things
that sound good in theory but are not so easy in practice. It is hard to fig-
ure out what the objectives would be and practically impossible to verify
whether a particular bank is complying.
Principle-based regulations require innovation, and if the authorities
try to be too innovative in their regulatory designs, they run the risk of
fostering spectacular regulatory failures. This does not mean that cre-
ativity in financial regulation, like principle-based regulations and light
touch and sandboxes or whatever the buzzword du jour is, is a bad idea.
The alternative can be just as bad or even worse, as the tick-the-box-
obsessed Icelandic regulator shows. So we need some balance between
the tick-the-box approach and abstract, principle-based regulations if we
are to meet the Goldilocks challenge.
After the 2008 crisis, the word—in the UK and in places that take its
lead—is conduct. We even have a Financial Conduct Authority in the
United Kingdom, the idea being that financial institutions should be-
have properly and avoid doing anything untoward. That means focusing
on individual behavior, with a special emphasis on compliance. After the
2008 financial crisis and the LIBOR, foreign exchange, and terrorist-financing scandals, financial institutions have beefed up their compliance functions. The number of risk and compliance staff at HSBC in 2014 was
10 percent of its entire workforce. Every bank intensively monitors its
staff. Computers and senior managers read messages and emails, tele-
phone calls are recorded, and the banks are spending enormous efforts to
ensure that employees behave properly. Many senior managers must read
daily emails and messages from their underlings, selected by an artificial
intelligence engine. In his 2016 book Why Aren’t They Shouting? Kevin
Rodgers mentions the anecdote of a Russian quant working for him in
his Deutsche Bank office, saying, “It’s great. I was too young to be spied
on under communism, but now I have the chance to experience it under
capitalism. I guess you could say that’s progress.” Even so, banks can’t
prevent some employees from going rogue, and abuse is inevitable. When
the next scandal comes along, are we willing to ratchet up monitoring
of bank employees even further? Will the cost of doing so in both finan-
cial and civil liberties terms be acceptable? Even then, will that prevent the next scandal? Probably not. And when a supervisor is called to account after a failure, the type of question she is going to get is, “You had all the information and all the powers, why did you let the bankers exploit the public?”
The danger is that the supervisors’ incentives end up being focused
on the prevention of failure at all costs: the supervisor becomes too risk
averse. In other words, the supervisor’s incentive problem is the inverse
of that of the banker, and it is essential to have some mechanisms in
place to prevent excessive supervisory risk aversion. One way to do so is
through a cost-benefit analysis of regulations, easy to conceive of, hard
to do in practice. The opposite also happens frequently: what is known as regulatory capture, whereby a supervisory agency no longer serves
society but instead works in the interests of the regulated. The best recent
example is the American Federal Aviation Administration (FAA) and the
troubles of the Boeing 737 Max aircraft. The FAA seems to have out-
sourced most monitoring of the 737 Max to . . . Boeing. When foreign
regulators started to prohibit the 737 Max from flying, the FAA resisted. A
classic case of regulatory capture. Regulatory capture is all too common.
In Britain a recent case is the horsemeat scandal, in which food processors
got away with selling horsemeat as beef. The Food Standards Agency not
only failed to discover the substitution despite frequent inspections, but
also did not want to punish the guilty or to change practices afterward.
The banks are superb lobbyists, aiming to create banking regulations
favoring themselves while discouraging entry into the banking system.
And for good measure protect banks’ profits and ensure the odd bailout.
If the supervisors don’t play ball, the banks just go directly to the politi-
cians in charge of the supervisors. If that wasn’t enough, governments
have many other reasons for regulating besides keeping the financial sys-
tem running efficiently. Most, if not all, like to have “national cham-
pions”—the reason we have the systemically important financial insti-
tutions (SIFIs)—and that political desire gives the banks considerable
power over the regulators. The government may have altruistic ulterior
motives, such as requiring banks to provide unprofitable banking services
to disadvantaged sectors of society. Charles Calomiris and Stephen Haber
argue in their book Fragile by Design: The Political Origins of Banking
Crises and Scarce Credit that the political objective of providing housing
for low-income families in the United States led to an unholy alliance
between left-wing advocates of the poor and the bankers, preventing ef-
fective regulations and thus creating fertile ground for the 2008 crisis.
They further argue that some countries suffer an excessive number of cri-
ses because bankers and politicians join forces to create a financial system
favoring well-connected special interests at the expense of society.
The failure of an ordinary corporation can even be good for the economy if it leads to more innovation. The role of the
government should be limited to managing the bankruptcy process. That
doesn’t stop it from interfering, and the government routinely bails out
corporations, like the United States’ bailing out of General Motors in
2009 at the cost of $11 billion. While there might be good political rea-
sons for bailing out carmakers, there is little economic sense in doing so.
Failure is an essential part of the capitalist economy. Inefficient companies
fail, and other, better ones take their place—Joseph Schumpeter’s cre-
ative destruction.
While the economy can quite easily cope with even the largest corpora-
tions’ failure, it is not the same with banks. Modern society cannot func-
tion without continuous banking services. We depend on debit cards to
pay for lunch every day, and companies need to pay employees and sup-
pliers on Fridays. It all goes via banks, so any hiccups in the services pro-
vided by banks are quite disruptive to society at large. While it is relatively
straightforward to transfer control of failing corporations to the creditors,
it is not the same with banks because their business is moving money. In a
bankruptcy, ownership of money and obligations need to be clear. It can
take a long time to establish claims; Lehman Brothers is still in litigation
a decade and a half after its failure.
It gets worse. Bank failures are contagious. The only reason a bank stays
in business is because its clients believe in its solvency and permanency.
If the clients lose that faith, they will all rush to withdraw their money or
terminate their business relations—a bank run. The social cost of a bank
failure vastly outweighs the money lost in a bankruptcy because any dis-
ruption in the provision of banking services directly hits everybody. The
result is what we economists call an externality: the private cost or benefit is
dominated by the cost or benefit to society at large.
Bank bailouts also create moral hazard. If the government steps in
when banks fail, the banks are likely to take on more risk than they other-
wise would. Unfortunately, the moral hazard from bank bailouts is dif-
ferent from that in insurance because it is usually not possible to charge
those receiving bailouts for the privilege. After all, the bank is already
failing, and we have to give it money: there is no point in kicking it when
it is down. So why not charge them an insurance premium that pro-
tects the government in case it has to give a bailout? Easy in theory, hard
in practice. The reason is that the premium should reflect the bank's riskiness, but if we knew the bank was that risky, why didn't we rein in the risk to begin with? Besides, risk is not easy to measure in the first place. And if we instead charge a fixed insurance fee, we are merely punishing the prudent.
The hardest decision to get right in the Goldilocks challenge is bailouts.
Bailing out banks means rewarding the very entities that got into trouble
in the first place, but not bailing them out can cause a much more costly
crisis. All is not lost, and there are ways to do bailouts right. A bailout is a
transfer from one part of the population to another—someone pays and
another benefits. How to determine the winners and losers is a political
question, which is why the government, not a bureaucrat, has to decide on a
bailout. Once a bailout is decided on, who should do it? The central bank
or the ministry of finance? The central bank might be ideal because it can
print money on demand and so cannot run out of money. However, there
is no such thing as a free lunch. The consequence of monetizing bailouts
is inflation, and the cost is borne by people on fixed incomes, like retir-
ees. That might seem somewhat academic now that the central banks can
apparently print all the money they want without consequences. But the
central question remains: Even if the central banks could run the print-
ing presses at full speed 24/7 without worry, who should benefit? Do we
build a hospital or school, lower taxes, or do a bailout with the freshly
printed money?
To get a handle on the question of bailouts, I came up with a classifica-
tion scheme for bailouts in my book Global Financial Systems. To start
with, some bailouts should be done by the ministry of finance and others
by the central bank. The central bank should do the bailout in liquidity
crises, where everybody is clamoring to convert liquid assets into cash—
as in 1866, 1907, and 1914. In a liquidity crisis, the market for even the safest of assets disappears, as when investors went on strike in the autumn of
2007. A solvent bank with tangible assets might fail simply because it can-
not sell them at a reasonable price to meet immediate demands for cash.
Suppose the crisis was not only about liquidity but also solvency, as in
2008. The banks made bad decisions, some of their loans are worthless,
and they are facing bankruptcy. The banks’ difficulty is caused by their
misbehavior, so they are not merely victims of the crisis. A liquidity injection alone will not save them.
For taxpayers, option 5 is the best, followed by 4, all the way to the worst,
1, since the taxpayer gets the upside if things go well. The banks, of course,
see it differently. They prefer the taxpayer to absorb all the losses and will
lobby for debt guarantees and, barring that, the government assuming
bad debt. The last thing they want is equity dilution or preference shares.
So which outcome prevails? It depends on a bank’s power. If it can force
option 1 it will do so, whether via lobbying, scaremongering, or bribery.
Perhaps the same family that owns the bank runs the government? It’s
been known to happen.
In the 2008 crisis the European banks were generally successful in forc-
ing the government’s worst options, 1 and 2. Why? Scaremongering and
the authorities’ inexperience. The banks knew what was happening, and
the government was not prepared, so the banks maneuvered the authori-
ties into their preferred option. Ireland made the worst choice, guaran-
teeing bank debt and earning a sovereign default for its trouble. Europe
since then has tried to find better outcomes for the taxpayer. The Spanish
government brilliantly managed to implement option 5 in the summer
of 2017 when it resolved Banco Popular and sold it to Santander for one
euro. It is not easy to be so ruthlessly efficient, as is shown by the Euro-
pean country with the highest number of bank failures in recent years,
Italy. Its politics gets in the way, as in the resolution of the world's
oldest bank, Banca Monte dei Paschi, in 2017. It had been failing slowly,
and anybody following Italian banks knew it was only a matter of time
until it failed. Unfortunately for the Italian taxpayers, the government
vacillated while the costs mounted, eventually deciding to bail it out at a
cost of €5.4 billion.
The Monte dei Paschi case and many other bank failures in Italy and
elsewhere illustrate how easy it is to take advantage of the taxpayer. The
reason is shown in The Wealth Effect: How the Great Expectations of the
Middle Class Have Changed the Politics of Banking Crises by my LSE
political science colleague Jeff Chwieroth and his Australian coauthor,
Andrew Walter. They argue that bailouts have become a middle-class
good. Rescuing banks benefits the middle classes, which then lobby in
favor of bailouts, which is why the Italians continue to spend money they
can ill afford to bail out their banks.
It is easy for the financial authorities to adopt a principled position
when nothing is happening, proclaiming they will protect the taxpayer
and minimize moral hazard. It is much harder to stick to it when push
comes to shove, as the Argentinians learned in 1993 and the Italians to-
day. The political pressure is enormous. Perhaps the investors are pension
funds or grandmothers as in Italy, and by having them take a hit the eco-
nomic and political consequences will be severe. The temptation is always
to give in: the taxpayers can surely afford it.
Goldilocks enters the Bear family’s house, tasting the porridge of Papa
bear, Mama bear, and Baby bear. She finds Papa bear’s porridge too hot,
Mama bear’s too cold, and baby bear’s porridge just the right tempera-
ture. The Goldilocks challenge is to find the appropriate balance between
too hot and too cold. The most important and the most challenging part
of the regulators’ job is the Goldilocks challenge. Regulate too much,
and the economy doesn’t grow; regulate too little, and we get damaging
crises. We need the right regulation intensity.
It is not easy. Of all the human activities we try to regulate, the finan-
cial system is by far the hardest to control. The incentives of the regula-
tors and bankers do not align well with society’s interests. The mission’s
complexity means it can be difficult to avoid regulators falling into one of the two extremes.
Over the past decade, G20 financial reforms have fixed the fault lines that
caused the global financial crisis.
—Mark Carney (2017)
prudent. Surely, then, no bank will fail, and financial crises will not hap-
pen, just as there would be very few car crashes if everybody followed the
speed limit, nobody texted while driving, nobody drove drunk, every-
body obeyed all the traffic laws, and we all had Volvos.
Can regulations prevent financial crises? The evidence from the years
after World War II, the Bretton Woods era of 1944 to 1972, superficially
suggests yes. The financial system was then very heavily regulated, and al-
though banks failed, only two banking crises are recorded anywhere in the
world over those twenty-eight years. But the financial system was almost
entirely national, with little cross-border banking and very expensive finan-
cial services. Perhaps suitable for its era, but not the twenty-first century.
The world changed. After the collapse of Bretton Woods in 1972 and the
emergence of the Washington Consensus, banking became global. Regu-
lations followed. But the new regulations were different from the Bretton
Woods ones. The focus was no longer on tight controls; instead, we aimed
to control risk. Making sure the banks were prudently run—Volvos.
It was all fine for a while, but then in 2008 banks started to fail for
completely unexpected reasons. That should not have happened since
Volvos are safe. The problem was that the prevailing regulatory thinking
focused on each bank’s behavior in isolation. What the authorities missed
was the importance of the system. A financial system composed entirely
of prudent banks is inherently unstable. We now know there were plenty
of hitherto unknown risks, especially liquidity risk, so fundamental to the
2008 crisis. The system is not merely the sum of the individual banks.
But financial regulations still mostly focus on each financial institution in
isolation, using the riskometer as their main tool.
An interesting case study of what this means in practice happened on
Thursday morning, 15 January 2015. That day Switzerland’s central bank,
the Swiss National Bank (SNB) did something that was either quite ex-
traordinary or inevitable, depending on how one thinks about risk. The
Swiss decided to allow their currency, the franc, to float freely. For some
years before that, the SNB had pegged the franc to the euro at 1.2 francs.
Immediately after the franc was set free, it strengthened by 16 percent.
The SNB decision really should not have been that surprising. Those
who followed the Swiss economy and local politics—especially those who
knew who owns the Swiss National Bank—expected the exchange rate
peg to break. It was not a matter of if but when the franc would be set
free. The roots of the announcement grew when Switzerland’s economy
began outperforming its eurozone neighbors. Before the global crisis
started in 2007 the euro was a success, a stable currency investors wanted.
The crisis put paid to those thoughts: investors holding liquid funds and
seeking safety began looking for new destinations. But there are not that
many stable, well-run countries in the world. Switzerland is one of the
few. Unsurprisingly, money poured into Switzerland, and, as in any mar-
ket, when there are more buyers than sellers prices go up.
When the franc and euro first traded, in January 1999, the exchange
rate was 1.6 francs to the euro, and it was almost in the same place when
the global crisis started in June 2007. From that day on, the euro steadily
fell, reaching a low of 1.05 francs in August 2011. Though nice for Swiss
consumers, the fall was not all that good for Swiss exporters, and the SNB
did what so many other central banks like to do in that situation: fix the
exchange rate, in their case at 1.2.
It is easy, at least in principle, to prevent a currency from rising. Just
print money and use it to buy foreign currency on the open market. The
problem is that when Switzerland prints money to buy euros, it is infla-
tionary since the money supply is expanding. To prevent inflation, central
banks like to counteract the inflationary forces by sterilization, soaking
up the freshly printed money by selling bonds. That works because if
the central bank buys one thousand euros’ worth of foreign currency by
printing twelve hundred francs and then sells a bond worth twelve hun-
dred francs, the amount of money in circulation does not change. So far,
so good. But the central bank eventually runs out of bonds to sell and
can no longer sterilize. Meanwhile, it is piling up paper losses. Not only
does it forgo interest payments on the bonds it sells, but also if the cur-
rency eventually floats, the central bank will incur a substantial loss on its
foreign currency holdings.
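The sterilization arithmetic above can be sketched as a toy balance sheet. This is a simplified illustration using the text's figures (1,000 euros bought at the 1.2 peg), not a model of the SNB's actual operations:

```python
# Toy sketch of sterilized FX intervention at a 1.2 francs-per-euro peg.
# Figures follow the text; this is an illustration, not an SNB model.
PEG = 1.2  # francs per euro

money_supply = 0.0     # newly printed francs in circulation
fx_reserves_eur = 0.0  # foreign currency (euros) accumulated
bonds_sold = 0.0       # bonds sold to soak up printed francs

def buy_euros(eur):
    """Print francs and buy euros on the open market (inflationary)."""
    global money_supply, fx_reserves_eur
    money_supply += eur * PEG
    fx_reserves_eur += eur

def sterilize(francs):
    """Sell bonds to take the freshly printed francs back out."""
    global money_supply, bonds_sold
    money_supply -= francs
    bonds_sold += francs

buy_euros(1_000)   # print 1,200 francs to buy 1,000 euros
sterilize(1_200)   # sell a 1,200-franc bond
# Money in circulation is back to zero: the intervention is sterilized.

# If the peg later breaks and the franc appreciates by 16 percent, the
# euros bought at 1.2 are worth fewer francs: a revaluation loss.
new_rate = PEG / 1.16
revaluation_loss = fx_reserves_eur * (PEG - new_rate)  # in francs
```

Scaled up to the 471 billion francs of reserves the SNB was holding, this mechanical revaluation loss is the same order of magnitude as the loss the SNB actually booked.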
Central banks are rarely structured as corporations, but the SNB is:
60 percent of its shares are owned by public institutions, like regional
governments—the cantons—while the remaining 40 percent trade on
the open market. The cantons depend on dividends from SNB, so they
are naturally unhappy about the central bank running a big loss-making
position. So, even though the SNB supposedly operates independently of
its shareholders, that independence does not hold in practice. The SNB has to meet
two objectives simultaneously: make money for its shareholders while
also helping the economy. A currency peg makes those two objectives
mutually exclusive. On the last day before the announcement, the SNB
held 471 billion francs’ worth of foreign currency. The size of the Swiss
economy, its GDP, was 565 billion francs. The SNB lost 78 billion francs
on 16 January. If it had waited even longer to abandon the peg, the even-
tual losses would have been higher. What does any of this have to do with
Basel or the risk theater? Suppose we had gone to bed on 14 January 2015
and used the main Basel-approved riskometers to forecast the likelihood
of a 16 percent appreciation of the Swiss franc (Table 1).1
The first two riskometers, EWMA and GARCH, found the likelihood
of that happening so low that it could not be calculated. For the third,
MA, the likelihood was smaller than once in every universe. For the fourth
riskometer, t-GARCH, it was once every 14 million years, and for the
fifth, EVT, once every 109 years. The riskometers were quite inconsistent.
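To see why volatility-based riskometers put essentially zero probability on such a move, a back-of-the-envelope calculation helps. The daily volatilities below are illustrative assumptions of mine, not the estimates behind Table 1; under the peg, daily EUR/CHF moves were tiny:

```python
import math

def normal_tail(z):
    """P(X >= z) for a standard normal; erfc stays accurate deep in the tail."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Assume an illustrative 0.1 percent daily volatility. A 16 percent move
# is then a 160-sigma event, whose probability under a normal distribution
# underflows to zero: "so low it could not be calculated".
p_peg = normal_tail(0.16 / 0.001)

# Even granting a generous 1 percent daily volatility, the move is still
# 16 sigma, vanishingly rare under normality (p is around 1e-57).
p_loose = normal_tail(0.16 / 0.01)
```

Fat-tailed riskometers such as t-GARCH and EVT stretch these tails considerably, which is why they at least produce finite answers; but, as the forecasts above show, the answers they produce disagree wildly.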
But then, did these five riskometers pick up on what had happened and
forecast risk better after the appreciation? For one, t-GARCH, the risk of
another 16 percent appreciation increased 104 times, while for another
(EVT) it went up only by 14 percent. Which of these two is more likely
to be correct? Neither. After the currency had appreciated by 16 percent,
it was implausible it would see another appreciation of the same magni-
tude. It is like a dam bursting: we don’t expect it to burst again for some
time. The risk went down, not up, as the riskometers had it. It is not
like I specially picked lousy riskometers; on the contrary, these five are
state-of-the-art, recommended by the authorities, and used by industry.
Table 1. Riskometers (VaR) and the likelihood of the Swiss currency appreciation
Two, t-GARCH and EVT, are especially lauded for their ability to pick
up tail risk.
When I was drafting my blog on the riskometers and the Swiss an-
nouncement, I sent a copy to Robert Macrae, a friend who ran a hedge
fund, and asked him to comment on it. He responded by saying that my
findings were not all that interesting because to a currency trader us-
ing such riskometers to determine risk and nothing else was naive. Any
market participant would use a hybrid approach, combining a riskometer
with a detailed study of the Swiss economy. A comment I have heard many
times: It is not fair to criticize riskometers merely because they failed to
predict the likelihood of the Swiss currency appreciation. However, Rob-
ert ran his own fund, and the authorities had no say in how he managed
risk, so he could pick any riskometer he wanted. A regulated bank does
not have that luxury. The authorities push banks to use a riskometer and
not just any riskometer: banks can choose only among a few, and as
time passes they are increasingly told precisely how to measure risk.
I have presented the Swiss currency appreciation case many times to
audiences in both the private and public sectors. Those in the private
sector tend to react by saying something like, “So what? We know this
already.” In other words, they know that the risk measurements used to
regulate banks are very unreliable. I find it especially amusing to hear
that from the very risk managers whose job it is to report on Basel to the
authorities. They have to use the officially sanctioned riskometers, even
though they have no faith in them.
The reaction from regulators is more interesting. They often respond
by saying that while the example holds true, it is irrelevant. The riskome-
ters in Basel III are designed for microprudential regulations, the man-
agement of day-to-day risk, not for extreme outcomes, so the Swiss cur-
rency appreciation is not a fair test of them. The regulators are expressing
a nuance that is generally lost on the rest of the world. After all, there is
more to microprudential regulations than day-to-day volatility; measur-
ing the likelihood of banks failing is key to their mission, and their pre-
ferred riskometers don’t capture such risk.
The use of riskometers in regulations can have curious consequences.
The BBC Panorama program came out with a provocative episode in
2014 titled “Did the Bank Wreck My Business?” about how two banks
bailed out by the government, RBS and Lloyds, were unfairly destroying
small companies. What the BBC missed was why the banks behaved this
way. They were merely doing what they were supposed to do under the
post-2008-crisis Basel regulations—derisk. The risk theater is to blame
for the destruction of the small companies, not the capricious bankers Panorama blamed.
The reason is that the global crisis in 2008 showed that the global
financial regulations in place at the time, Basel II, were not up to the task.
It was just too easy for banks to manipulate their capital—exploit capi-
tal structure arbitrage. The Basel committee wasted little time in com-
ing up with its successor, Basel III, mostly in place by now. It happened
with lightning speed, taking only a decade—yes, ten years is super fast
by international regulation standards. That haste meant the committee
did not have time to make any fundamental reforms. Basel III is only
an incremental improvement on its predecessor, chipping away some of
the most glaring deficiencies while leaving the underlying philosophy in
place—Basel II on steroids, if you will. There is much talk about “Basel
IV” addressing all the difficult issues, but I suspect regulatory fatigue may
get in the way.2 Basel III remains microprudential for the most part, leaving
the system’s stability to the newly established macroprudential regulators.
The primary focus of Basel III, like that of its predecessors, is capital.
The 2008 crisis showed that the way capital was calculated was problem-
atic because many banks with supposedly high levels of capital failed.
Basel III’s main thrust is in increasing both the amount and the quality
of banks’ minimum capital: the reason the banks discussed in the BBC
Panorama program had supposedly been wrecking businesses. Because
the banks have to hold much more capital for the riskiest types of loans,
the inevitable consequence was that the availability of loans to small- and
medium-sized enterprises sharply decreased while the interest rates on
those loans increased significantly.
The regulators are proud of their creation. Banks now hold more
capital, have more stable funding and robust systems for managing risk,
promising a more resilient banking system, one that stimulates economic
growth. All these reforms left the banks in much better shape for dealing
with the Covid-19 crisis in 2020. The banks complain, but they complain
surprisingly little and are much happier with Basel III than is often as-
sumed. Compared with the very loud protests made by insurance compa-
nies and asset managers over being designated as systemically important,
the banks appear to be outright acquiescent. In and of itself, worrying.
There is much to like in Basel III. Still, I have reservations. To begin
with, Basel III still assumes the stability of the financial system is ensured
if each bank is a prudent Volvo. Furthermore, Basel III not only fails to
address procyclicality adequately, but also makes the problem worse. Al-
right, the standard response is “Basel III has a new type of capital buffer
that is adjusted countercyclically according to the financial cycle.” Not
quite, as the adjustment buffers are small, and a temporary relaxation may not encourage the banks to lend more. The Covid-19 lowering of the
capital ratios did not stimulate lending. More fundamentally, Basel III
implements its predecessor more intensively but does not ask the critical
question, What do we need from financial regulations? Yes, it does ad-
dress things like financial stability and the provision of services. Still, it
does not position itself properly within the context of what we want from
the financial system and its regulations. So, yet again, risk theater. Perhaps
worst of all, it mostly ignores the SIFIs.
The SIFIs are those systemically important financial institutions whose
failure will cause a systemic crisis. Before 2008 nobody worried about the
SIFIs; on the contrary, the prevailing wisdom maintained they were the
safest of institutions. Globally diversified financial institutions that could
offset losses in one area with profits in all the others. Well, that turned out
to be wrong. The largest banks did fail. Worse still, they failed with cata-
strophic consequences, like the world’s biggest bank at the time, RBS-
ABN AMRO, not to mention Lehman Brothers.
One might ask why we allow financial institutions that are so dan-
gerous. The answer is simple: politics. Even if the financial authorities
preferred no SIFIs, their political masters overrule them. In other words,
SIFIs exist only because politicians want them to exist. Why do politi-
cians like SIFIs? The straightforward reason is that they are quite handy.
A single financial institution that can provide financial services wherever
your country’s companies operate is beneficial. If a German company
based in Düsseldorf has businesses in America, Brazil, and Korea, it helps
to have a single bank that can service it in all these places. The other
reason SIFIs exist is prestige. If your bank has your country’s name on
it—think Deutsche Bank—of course you want your bank to be big and
powerful. Besides, having global banks in your jurisdiction allows you to
project power. Austria gains power and prestige by having the regional
SIFIs in its jurisdiction. That is why there are so many SIFIs in insecure midsize European countries (Figure 30).
So, how dangerous are the SIFIs? To answer that, I need to go back to
bank capital. The numbers reveal that the SIFI problem is much worse for
smaller countries (see Figure 30). The total assets of the largest American
bank, JP Morgan, are only 13 percent of the GDP of the United States,
while the total assets of the Swiss UBS bank exceed its GDP at 136 per-
cent. One can use many ways to look at how vulnerable these SIFI banks
are. The most obvious is the leverage ratio, the ratio of capital to total
bank assets. The lowest ratio is Deutsche Bank’s at 4 percent, while JP
Morgan’s exceeds 6 percent, and the Chinese ICBC’s is almost 8 percent.
No wonder Deutsche is always in the news, and not in a good way. It is
the SIFI bank the pundits think will fail next.
Figure 30. SIFI size and GDP. Credit: Copyright © Ricardo Galvão.
The Basel III financial regulations say that a bank has to have a leverage ratio of at least 3 percent. So, by subtracting three from the leverage ratio and multiplying the difference by total assets, we can see how big a loss the bank would have to suffer in order to be forced into bankruptcy—the distance from default (Figure 31). It would take a loss of only $14 billion for the
Royal Bank of Canada to be shut down and $15 billion for the Dutch
ING bank. Meanwhile, it would take $89 billion for JP Morgan and over
$200 billion for the Chinese ICBC bank.
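The distance-from-default arithmetic is a one-liner, sketched below. The figures in the example are rounded, hypothetical illustrations (a bank with roughly $3 trillion in assets at a 6 percent leverage ratio), not any bank's reported accounts:

```python
BASEL_MINIMUM = 0.03  # Basel III minimum leverage ratio (3 percent)

def distance_from_default(total_assets, leverage_ratio,
                          minimum=BASEL_MINIMUM):
    """Back-of-the-envelope loss that pushes a bank's leverage ratio
    down to the regulatory minimum: (ratio - minimum) * total assets."""
    return (leverage_ratio - minimum) * total_assets

# A hypothetical bank with $3,000bn of assets and a 6 percent leverage
# ratio can absorb a loss of about $90bn before hitting the 3 percent
# floor; a bank sitting exactly at 3 percent has no buffer at all.
buffer = distance_from_default(3_000, 0.06)
```

Strictly speaking a loss also shrinks total assets, so the exact distance is slightly larger than this simple product, but the product is the back-of-the-envelope measure described in the text.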
The failure of any bank whose assets are a significant portion of GDP
will be tough to resolve. Of these banks, the United States could easily
absorb even the largest losses arising from JP Morgan’s failure, and China
would not find it too hard to handle ICBC’s default. However, it would
be challenging for other countries, especially Switzerland, the Nether-
lands, and Spain, where the largest bank’s assets exceed GDP. It is much
better to try to prevent failure by containing the size of the largest banks.
ing our way. I often hear words to the effect that macropru at present is
how monetary policy was in 1950—give it fifty years and it will become
as sophisticated. One can do a lot of damage in fifty years, and those who
know their monetary history will recall that the stagflation of the 1970s
was in no small part due to the poor monetary policy of the era. The fail-
ure to hit inflation targets since 2008 shows that monetary policy is still
far from perfect.
There are many directions the macropru authorities can take. Most
are passive, focusing on fixed rules that hold through the financial cycle.
Many such macropru implementations have been very successful in pre-
venting crises—my favorite is the ban on buying stocks on margin in
the United States in 1934. (Should I be worried about having a favorite
macropru rule?) The alternative is active macropru, leaning against the
wind in a discretionary manner. If risk is building up, tighten capital and
liquidity standards. Relax when risk and growth are low, as we did follow-
ing the Covid-19 crisis. If the market is too risk averse, follow Keynes and
encourage it to take more risk. Such discretionary macropru policies are
designed to be countercyclical, dampening out the financial cycle. Active
macropru demands much more of the financial authorities than passive
macropru. They need estimates of systemic risk and its impact on the real
economy, from the early signs of a buildup of stress all the way to the post-
crisis economic and financial resolution. They need tools to implement
effective policy remedies in response to changes in risk. And the authori-
ties need legitimacy, a reputation for impartiality, and political support.
To start with, systemic risk needs to be measured, not an easy job.
There are many indicators of systemic risk out there, such as the Euro-
pean Central Bank’s CISS. Most are prone to what statisticians call type I
and type II errors: falsely finding something that is not there or failing to
detect something that is. Not a problem with an easy solution. Systemic
crises are infrequent, less than once every forty-three years on average,
giving the statisticians little to work with. To complicate matters, the
financial system’s structure will be very different from one crisis to the
next. Having identified that systemic risk is increasing, the authorities
have to respond with one of the tools at their disposal. There are many
tools in the macropru tool kit. Some are meant to limit procyclicality—
lean against the wind—some shield the real economy, yet others are used
to respond to imminent threats. Some tools are surgical and specific, like
loan-to-value ratios aiming to limit the amount of money people can bor-
row when buying a house. Others are blunt—sledgehammers—like bank
capital ratios, which affect all bank activities. The blunt instruments may
kill the patient, while the surgical ones may not work.
Frustrating the job of the macropru designers is the continuing evolu-
tion of the financial system. The past informs the tools, but the threats
come from the future, like driving by looking into the rearview mirror.
Meanwhile, the impacts and side effects of the tools are poorly under-
stood. The most visible macropru problem, and politically the most im-
portant, is real estate, a common cause of financial crises. The macropru
policy makers are always on the lookout for real estate bubbles. Simple
things, bubbles. We borrow from banks to buy homes, and in response
prices go up and the economy blossoms, encouraging more people to
borrow to buy homes. Everyone feels happy in the short run, but over
time fault lines emerge and a crash becomes increasingly likely. Both
the bubble itself and the eventual crash create problems. Rising housing
prices directly affect inequality. Homeowners get richer, and the rest are
left out, with political consequences. Governments can be forced to im-
plement policies that further stimulate housing prices—like the various
policies helping first-time buyers and high-risk borrowers. The political
desire to help poor households acquire property in the United States was
the main reason for the emergence of the subprime mortgage market, a
leading cause of the crisis in 2008. Real estate is one of the most common
causes of banking crises. Elementary. Has to be solved, right?
Unsurprisingly, the macropru authorities have identified real estate as a
significant priority. So what are they to do? One of the main tools in use
today is the loan-to-value ratio, whereby one can borrow only a certain
percentage (say, 80 percent) of the value of a house. However, while real
estate is undoubtedly a macropru concern, the remedies deal only with the
symptoms, not the causes. House prices are directly affected by economic
growth and various government policies, like zoning laws, help-to-buy,
tax deductible mortgage interest, ultralow interest rates, and subsidized
mortgages for high-risk borrowers. Macropru has no impact on any of
those, and all the macropru authority can do is to mop up after the other
policy domains. Meanwhile, risk keeps slipping between jurisdictions, as with AIG, the seller of CDSs via its London-based and France-regulated banking arm, a risk missed by its New York State regulators.
The financial authorities now focus on the systemic threats emanating
from insurance companies and asset managers, institutions that control
trillions of dollars of assets with relatively little oversight and, as AIG
showed, can make business decisions that threaten global financial stabil-
ity. Both industries are fragmented, with nothing akin to globally coor-
dinated banking regulations, and, until 2008, nobody thought either in-
dustry was systemic.4 While insurance companies and asset managers may
pose systemic threats, the nature of the threat is poorly understood, and
until very recently there had been no studies of how these two industries
interact with the rest of the financial system and the threats that poses to
global stability. We have studied the systemic fragility of banks for over a
century. There is no comparable research on insurance companies or asset
managers. When all you have is a hammer, everything looks like a nail,
and when the FSB started looking at the systemic risk in the nonbanking
parts of the financial system, all they had was analysis of banking stability.
The type of work I have discussed extensively in this book: bank runs, fire
sales, liquidity dry-ups, and, most important, capital.
That was the reason I was invited to the conference at the insurance
regulator. The FSB opted to analyze the fragility of insurance compa-
nies and asset managers by looking through the lens of banking fragility,
which meant capital. But their risks are quite different. What kills insur-
ance companies is writing insurance contracts too cheaply so they can’t
meet their eventual obligations. There is nothing systemic about that
because most insurance payouts are uncorrelated with systemic financial
risk. The AIG crisis happened because it decided to become a bank, and
the relevant authorities did not pay attention. Asset managers use neither much leverage nor many derivatives. There are systemic consequences of asset managers, but they lie in exchange-traded funds holding illiquid assets, like small-company bonds. Capital has nothing to do with that.
So, when applying bank fragility analysis to asset managers and insur-
ance companies, the financial authorities just end up saddling them with
unnecessary burdens which the clients—us—pay. We might get the per-
ception of safety but not actual safety: risk theater. The more funda-
mental danger is that if we increasingly harmonize regulations around a
Basel-type philosophy, including not only banks but also other activities
in the financial system, like insurance, asset management, and nonbank
banking, we get monoculture.
I recently gave a presentation at a central bank conference and made a
throwaway remark that macropru could be procyclical. That is, it could
perversely amplify the financial cycle instead of dampening it. To the au-
dience, that was heresy, and some senior staff members got cross with
me. After all, the fundamental promise of macropru is that it is countercyclical, dampening the natural cycles in the financial system. So,
could macropru be procyclical? Well, yes. I eventually cowrote a blog on
this, “Why Macropru Can End Up Being Procyclical,” arguing that dis-
cretionary macropru—leaning against the wind—has considerable scope
for amplifying the financial cycle. Suppose the macropru authorities were
successful in smoothing out the financial cycle. Would market participants
respond to this gratefully and say, “What a great job the central bank is
doing?” No, the market would see the resulting low risk as an invitation
to take more risk: the Minsky effect. We have seen many examples in the
past, like the Greenspan put.
Another reason macropru may be procyclical is the difficulty in measur-
ing risk—riskometers aren’t exactly reliable. Figure 32 shows a hypotheti-
cal time path of risk over one year. The target risk is three. In the first
month the risk is too high at five, but nobody realizes it yet. A couple of
months later the riskometers pick up on risk having been high, alerting
the authorities, who start planning their reaction and, a few months later,
decide on what to do. Eventually, in month twelve, the policy response
is implemented. Meanwhile, risk has been steadily falling and is already
below the target by the middle of the year. By the time the policy inter-
vention takes place in month twelve, risk is too low. Instead of bursting a
bubble, it exacerbates the derisking already taking place, and risk crashes
to one, way below the target level of three. A problem caused by reacting
with a time lag to indicators of risk that are themselves measured with a
time lag: the policy response can come too late and be procyclical.
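The month-by-month story can be written out as a toy calculation. The risk path and the lags below are invented to match the figure’s description, not estimated from any data:

```python
TARGET = 3.0
# Hypothetical monthly risk path, following the Figure 32 story: risk starts
# too high at five and drifts below target well before year-end.
risk = [5.0, 5.0, 4.5, 4.0, 3.5, 3.0, 2.8, 2.6, 2.4, 2.2, 2.0, 1.8]

# The riskometer reports with a lag, and deciding takes months more, so the
# policy implemented in month twelve rests on the stale month-one reading.
stale_reading = risk[0]    # what drove the decision: risk of five
actual_risk = risk[11]     # true risk when the policy finally bites: 1.8

tightening = stale_reading > TARGET           # decision: push risk down
procyclical = tightening and actual_risk < TARGET
# The tightening lands when risk is already below target, so instead of
# bursting a bubble it deepens the derisking: too late, and procyclical.
```

The sign of the error is the whole point: by implementation time the gap between risk and target has flipped, so the same intervention that would have been countercyclical in month one amplifies the cycle in month twelve.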
By the end of the year the appropriate policy response would have been
to increase risk to stimulate economic activity, not to decrease risk. Such
a procyclical policy response is common; Japan offers a recent example. The macropru authorities wield enormous power, just like the planning ministries of the Soviet Union, but they may be no better at getting the results they desire.
Common beliefs and action are the enemy of financial stability. The best
way to get stability is diversity, both in regulations and financial institutions.
The most popular banana in the nineteenth century was the Gros
Michel. It tasted good, came in big bunches, and had a bruise-resistant
peel. Ideal for export. But the Gros Michel banana had a vulnerabil-
ity. Each banana was a clone of every other, so they were all genetically
identical. It came as no surprise when the fungus Fusarium oxysporum, the cause of Panama disease, swept through the plantations and all but wiped the Gros Michel out, forcing growers to switch to a different variety.
Figure 34. Lowering volatility and fattening the tails. Credit: Lukas
Bischoff/IllustrationX.
Harmonized regulations mean the same rules for everybody. And if the rules are about specific conduct, like all
the micro risks the microprudential regulators concern themselves with,
we get harmonization of action. The financial industry also pushes for
monoculture. Almost every financial institution has the same objective:
maximizing profits subject to constraints. That takes them closer to other
institutions’ optimal asset mix, leading to more crowded trades and
procyclicality.
While there are not many solid measurements of diversity, one metric
points in that direction: the number of banks is steadily falling. Half a
century ago we had a wide variety of financial institutions. Every coun-
try had its own financial regulations, and they were all different. Each
country had many types of banks, regulated and operated differently, all
driving diversity and stability. The most diverse of all was the United
States. It used to leave banking regulations to the states, and, of course,
they were all different. Banks in Arkansas operated under rules that were
different from those of banks in New York, resulting in herd immunity.
Under the McFadden Act banks were not allowed to operate across state
borders. Then the rules were relaxed, banks could operate across state
lines, they were increasingly federally regulated, and state regulations be-
came harmonized. One consequence of these changes is that the number
of banks has fallen sharply. The Federal Reserve Bank of St. Louis has collected data on the number of banks in the United States from 1984, when
there were 14,400 commercial banks. By 2020 the number had fallen to
4,404, a reduction of 3.3 percent a year.1
Sam Peltzman, an economics professor at the University of Chicago,
came up with a radical idea in 1975: risk compensation. When regula-
tions are enacted to improve safety, the unintended consequence is more
risk. Take American football. At the beginning of the twentieth century
football players started to demand helmets to protect themselves against
injuries. In the beginning, helmets reduced injuries, but over time, as
the helmets became better and better, the way of playing changed. Play-
ers started using their heads as ramrods, running headfirst into the op-
position. Because helmets improved, the shocks to the head became
more violent, and injuries increased—the players compensated for being
protected by taking on more risk. The vicious feedback between better
helmets, more aggressive playing, and injuries showed how the laudable
objective of protecting players by making them use helmets could per-
versely harm them. In the sister sport of rugby, heads are unprotected,
but because the players don’t use them as ramrods, head injuries are much
rarer than in American football.
Risk compensation is particularly pernicious in the financial system,
driven by common beliefs, that is, everyone observing the world in the
same way. Blame the riskometers. If everybody uses the same one, we all
end up seeing the world in the same way. Financial institutions obviously
don’t all use the same riskometer, as they are a part of the secret sauce that
makes some better than others. There is, however, a limit to how different
the riskometers can be, especially in banks. Industry trends directly influ-
ence even the largest banks. Their modelers studied in the same universi-
ties and read the same academic papers. They go to the same conferences
and move freely between banks. Not to mention regulations, which also
make risk management practices more homogeneous. All pushing banks
toward similar ways of modeling risk, so they increasingly see the world
the same way.
Beliefs are only half the picture. The other half is common action,
also driven by riskometers. Half a century ago, before the widespread
use of statistics and risk management and regulations and best practices,
every financial institution was quite free to do what it wanted. They were
primarily partnerships and allowed to make the dumbest decisions. In 1952 Harry Markowitz published a paper under the title “Portfolio Selection.” His critical insight was that investors should focus on two variables: expected returns and the variance of
asset returns, and it has been known as the mean–variance model ever
since. Markowitz’s thesis was controversial from the very beginning,
and he almost did not graduate. Milton Friedman felt that the mean–
variance model contained too much statistics and not enough economics
and therefore did not deserve a PhD in economics. Even so, it earned
Markowitz the Nobel Prize in Economics in 1990, as his work was by then
the cornerstone of financial economics.
Markowitz formalized an ancient idea: the more risk we take, the higher
the return we demand. His insight was to reduce a complicated decision-making problem to just two numbers. Under the original Basel rules, bank capital followed crude fixed risk weights: some asset classes counted as low risk, others as high risk. The 1996 amendment upended all of this, introducing the internal models approach. It meant that the most sophisticated banks got to use their own riskometers to identify the risk of each of their activities and to set bank capital accordingly, exactly what JP Morgan wanted when it proposed Value-at-Risk two years earlier. The timing of the 1996
amendment is not coincidental.
Practically every financial institution followed the lead. Academics
started churning out papers on risk management, universities created
courses in risk management and risk forecasting, and risk consultants
pushed into every financial institution. A wave of optimism hit the finan-
cial risk management community. It looked as if financial risk had been
successfully reduced to an engineering problem, Kaizen-style—financial
risk management became scientific. Just like structural engineers design
safe bridges, financial engineers create safe banks and financial systems.
I am as guilty as anybody. I paid for my house in London by teaching
executive education courses on implementing Value-at-Risk techniques.
And I have written a book called Financial Risk Forecasting. One of my LSE
masters courses is called Quantitative Methods for Finance and Risk Anal-
ysis, all about implementing the techniques in my risk-forecasting book.
While the first phase of financial engineering was about theory and
the second statistics, a significant problem was left unaddressed. Risk-
ometers are complicated beasts. They want to be fed with a lot of com-
plex financial data, and although there is a lot of data in finance, it is not
easy to work with. Databases are inconsistent, full of errors, and much of
the data is incredibly complicated, with arcane, inconsistent conventions.
There can be multiple standards for measuring the same thing. To this
day there is no single surefire way to identify a particular financial institu-
tion in this sea of data. Suppose two financial institutions trade with each
other. Even if both report the trade to the authorities, it may be impos-
sible for the authority to match the trades because of the lack of uniform
trade identifiers.
We need a central place where financial institutions can access all the
data they need, cleaned and synchronized. They can feed in their posi-
tions, run ready-made models on the data, or, even better, have artificial
intelligence manage it all. The financial system version of Amazon Web
Services (AWS). I am a huge fan of AWS. To see how inconsistent riskometers can be, take a history of daily Amazon stock prices. I then estimate the Expected Shortfall with six of the most
common techniques and show the results in Table 2. The highest risk
reading is more than three times that of the lowest. Similar results would
obtain for other assets and times. For a lot more examples you can go to
my website, extremerisk.org, where I estimate risk every day.
Does it matter that the riskometers are so inconsistent? In theory it
shouldn’t. A good risk manager knows riskometers are imprecise. They
know the strengths and weaknesses of each, treating them as a portfolio
of methods, picking the best one for the problem at hand. For example,
among the six in Table 2, EWMA is really simple, and nothing can go
wrong when it’s implemented. It reacts quickly to changing information.
However, just like an overeager teenager, it can respond way too strongly.
HS is the opposite, calm and stable: like a seasoned bureaucrat, it never
gets ruffled. Which do you prefer? It depends. HS’s conservativeness is an
asset for day-to-day operations, but when a shock hits, you need a quick
reaction, and that is when EWMA can be invaluable. Then, there is the
dark horse of EVT, short for extreme value theory. Like a wise man who
has nothing to say about day-to-day occurrences but tells us what we
need to know about the extremes when everything goes wrong.
All the riskometers have their good sides, and a good risk manager will
be guided by her intuition and experience. Seeing EWMA is $20.2 and
GARCH $18.4, while HS is $64.7, tells her that short-term risk not only
has fallen quite a bit recently but also is still relatively low by historical
standards. EVT says long-term risk is not affected very much by Covid-19.
The risk manager knows each number tells a different part of the story,
and by using all of them she gets a much more complete picture.
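The contrast between the seasoned bureaucrat and the overeager teenager can be sketched in a few lines of Python. Everything here is illustrative: the returns are simulated, and the λ of 0.94 and the normal-tail formula behind the EWMA estimate are common textbook choices, not the setup behind the book’s Table 2.

```python
import random
from statistics import NormalDist

random.seed(1)
# Simulated daily returns: a volatile spell long ago, calm recently.
returns = [random.gauss(0, 0.03) for _ in range(100)] + \
          [random.gauss(0, 0.01) for _ in range(400)]

P = 0.975  # the 97.5 percent Expected Shortfall used in the regulations

def es_historical(rets, p=P):
    """HS: average of the worst (1 - p) fraction of observed losses."""
    losses = sorted((-r for r in rets), reverse=True)
    k = max(1, int(len(rets) * (1 - p)))
    return sum(losses[:k]) / k

def es_ewma(rets, p=P, lam=0.94):
    """EWMA volatility plugged into a normal-tail ES formula."""
    var = rets[0] ** 2
    for r in rets[1:]:
        var = lam * var + (1 - lam) * r ** 2
    z = NormalDist().inv_cdf(p)
    return var ** 0.5 * NormalDist().pdf(z) / (1 - p)

# HS remembers the old turbulence; EWMA has all but forgotten it.
hs, ewma = es_historical(returns), es_ewma(returns)
```

Run on the same data, HS comes out far above EWMA, the same qualitative pattern as in the text: short-term risk has fallen, but it remains low only relative to a turbulent history that HS still carries around.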
The problem is that such subjective judgment may not be acceptable.
Where the professional risk manager draws important lessons from the
diversity of measurements, the uninitiated see a problem. It is the same
asset. How can risk be so different? You know, they are all measuring
the same true process—risk—and if they disagree, one must be accu-
rate and the rest wrong. And that is precisely the conclusion the Basel
committee and the European Banking Authority reached when they saw
the inconsistency in risk measurements. The solution then is to pick one
riskometer—of course, the best of breed—and mandate its use. So does
that matter? Yes, if every bank has to use the same riskometer, they always
see the risk in the same way—their beliefs get harmonized. The regula-
tors would respond by saying that the banks can choose their own risk-
ometers; yes, to some extent, but the wiggle room is really small.
There is an interesting consequence of mandating all banks to use the
same riskometer. Under current financial regulations, banks have to mea-
sure risk from proprietary trading by 97.5 percent Expected Shortfall. In
plain English that means risk is the amount of money they expect to lose
on the worst day in every two months. So, if a bank holds $1 million
worth of Amazon stock and uses the EWMA riskometer (see Table 2),
then the risk on the day I did the measurement was $53,000, that is, the
bank was expected to lose $53,000 one day every two months.
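The arithmetic behind that plain-English reading is short, assuming the usual convention of about 21 trading days per month:

```python
TAIL = 1 - 0.975     # ES averages over the worst 2.5 percent of days
one_in = 1 / TAIL    # one trading day in every 40
months = one_in / 21 # roughly 1.9 months: "every two months"
```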
The law of the land is 97.5 percent Expected Shortfall. Does it matter?
Yes, for two reasons. The first is the question of whether the 97.5 percent
Expected Shortfall is the correct risk to measure. After all, it captures
the worst day every two months and says little about any other type of
event. Earlier, I gave the example of Mary, Ann, and Paul, who all invest
in Google but all have different objectives for their investment. Paul cares
about short-term fluctuations, Ann is concerned about big losses in the
next half-year, and Mary worries about the pension she will start draw-
ing in half a century. The 97.5 percent Expected Shortfall risk measure
provides no valuable information for Mary and her pension problem and
little for Paul and Ann. It would be unfortunate if Mary lived in one of
the countries mandating the use of Expected Shortfall for pension funds.
The more interesting question is what the 97.5 percent Expected Shortfall
does to the market.
Imagine the total amount of risk is a balloon and that the balloon
covers all possible outcomes. We first squeeze the balloon in one place
(Figure 36). What happens? The obvious. The place where we squeeze
narrows, and the risk pops out everywhere else. In the second figure,
three hands squeeze in different places, so risk is much more uniformly
managed and never balloons out. Because we target the same part of the
risk universe—the 97.5 percent Expected Shortfall—that type of risk falls.
But like a balloon, it ends up getting squeezed out elsewhere. And one
part that will get squeezed is the extreme left tail, the big, nasty events
that blow up banks and cause banking crises. Perversely, by mandating
the 97.5 percent Expected Shortfall, the financial authorities increase the
most dangerous types of risk. The law of unintended consequences. Yet
another example of lowering volatility and fattening the tails.
When the Basel committee and the European Banking Authority ac-
knowledged that riskometers give very inconsistent measurements, the
interesting thing is how they saw the problem and what they proposed.
They could have said, “Okay, this is a known problem, and we want to in-
corporate best practices in risk management into the regulation process.”
That means running multiple riskometers on the same assets and using
the strengths of each in meeting the objectives of the regulations. How
good risk managers work in practice. No. That is not what the authori-
ties said. They expressed concern and concluded that the inconsistency
meant that some banks were using low-quality riskometers. It was es-
sential to require banks to use only the best. In other words, we need
to find the best riskometer and force all banks to use that and only that
riskometer. Why would the authorities have come to that conclusion? It
is not that they don’t know better. Having talked to a lot of regulators,
I have concluded the answer has to do with philosophy. If regulation is
risk-based, we must measure risk accurately, meaning models should give
the same assessment of the risk of a particular exposure. Otherwise, risk-
based regulations, like the Basel Accords, are fundamentally unsound. As
the regulators see it, their mission depends on the one true model, even
if few would admit to that.
The risk philosophy has further interesting consequences because of
bank lobbying. If we make regulations risk sensitive—that is, set risk tar-
gets for banks and specify the methodological approaches as we do in the
Basel Accords—then the output of the riskometers has a material impact
on banks. Suppose we have two banks, call them A and B, with an identical
portfolio, perhaps $1 million in Amazon. If A uses the t-GARCH riskome-
ter and B GARCH, then A has a risk of $107,000 and B only $43,800.
Because minimum capital is three times the risk, just imagine how loudly
A will scream, doing everything in its power to switch riskometers.
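The stakes are easy to quantify with the two readings from the text and the multiplier of three:

```python
MULTIPLIER = 3             # minimum capital is three times measured risk

risk_t_garch = 107_000     # bank A's reading on the $1m Amazon position
risk_garch = 43_800        # bank B's reading on the identical position

capital_a = MULTIPLIER * risk_t_garch  # $321,000
capital_b = MULTIPLIER * risk_garch    # $131,400
extra_capital = capital_a - capital_b  # $189,600 more capital for bank A
```

Identical portfolio, nearly $190,000 difference in required capital: the incentive to shop for the friendliest riskometer is built into the arithmetic.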
It all ends with a race to the bottom, whereby every bank wants to use
the riskometer that gives the lowest risk readings. The only way to pre-
vent that is for the authorities to decide on which riskometers are accept-
able. We then get an iterative process between banks wanting to measure
risk as low as possible and the regulators reducing the banks’ freedom
to measure risk as they see fit. The resulting ratcheting between the risk
philosophy and bank interests can end only one way: the regulators end
up dictating how banks should measure risk. Convergence to the “one
true model.”
I think a lot of people would react to this by saying, “So what!” We
are adopting best practices, the best riskometer, which means we will
measure risk in the best possible way. That might be correct if we look
at each bank in isolation and if they are small. However, for larger banks
and especially for the financial system in its entirety, there are at least
three serious consequences: procyclicality, transfer of responsibilities, and
ossification.
Start with procyclicality. I’ve spilled a lot of ink on it in this book and
so will be brief. If everybody ends up using similar or identical riskome-
ters, we harmonize beliefs and action. Everybody reacts in the same way
to shocks, all buying and selling the same assets, amplifying the cycles:
procyclicality and systemic risk.
The second problem is more insidious. The more we transfer responsi-
bility to the financial authorities, the more obligation they have for ensur-
ing things go well. Suppose the authority is in charge of how risk is mea-
sured; then, when the next crisis comes along, the critics will say, “Why
did the authorities pick such a stupid way to measure risk?” The banks
will say, “We did nothing wrong. We followed the guidelines coming
from the authority. Give us a bailout.” So, because the regulators end up
being the risk modeler for the banking system, they are also responsible
for its well-being, and hence to blame for large losses.
The final problem is that even if the authority manages to pick the best
riskometer of all possible worlds—the one riskometer that will not fail—
it will still end up ossifying. The technical issue is both the pace and the
process of designing regulations. Financial regulations, especially global
rules, change slowly. Basel I took effect in 1992. Basel II's design
process started in the mid-1990s, and it was partially implemented only
in 2007. As for the postcrisis regulations, Basel III has been in discussion
since 2008. One and a half decades pass between each iteration of the
global banking regulations. Throughout the process the regulators are
bombarded by lobbying from industry and governments. They have to
be seen as fair and transparent. The rules need to be as technically un-
demanding as possible.
Even if the designers of financial regulations manage to pick the very
best riskometer today, it will be put into practice years in the future and
used for decades thereafter. Over that time the riskometer will become in-
creasingly out of date—ossified—capturing irrelevant risks and neglecting
important ones, giving banks ample scope for manipulation. Supervisory-
mandated riskometers are much more likely to stagnate than riskometers
developed internally in a competitive environment, Kaizen style.
When I read the Basel committee’s and the European Banking Author-
ity’s reports on risk, I wrote a blog piece on the implications, “Towards a
More Procyclical Financial System,” arguing that it was inevitable for the system to become more procyclical. A trilemma means that, when faced with three desirable objectives, we can pick only two. When it comes to managing a risky financial
system, we have three options: uniformity, efficiency, or stability. Which
do we pick? So far, all the incumbent powers in the financial system, the
private institutions, and the regulators favor uniformity and efficiency.
The commercial institutions because it reduces competition and makes
money for them, and the regulators because it makes their job easier.
There are two lessons one can take from this: either we need better tech-
nology or we do something different.
I walked away intrigued but skeptical. The next thing I did was to sit
down and investigate whether my interlocutor was right. I ended up writ-
ing a series of articles on the subject.
We might think the financial system is the ideal application for AI.
After all, it generates almost infinite amounts of data, plenty for AI to
train on. Every minute decision is recorded, trades are stamped to the
microsecond. Emails, messages, and phone calls are recorded. But data
does not equal information, and making sense of all this data flow is like
drinking from a fire hose. Even worse, the information about the next
crisis event might not even be in this sea of data. I’ve concluded that the
speaker was mostly wrong.
A lot has been written about AI, and there is no need to repeat
that here. I highly recommend Stuart Russell's book Human Compatible.
But I need to establish where I’m coming from, so bear with me. The
idea behind AI is that a computer learns about the world so it can make
decisions by itself. It can be something as simple as playing a game, as
complex as driving cars, and it can even regulate the financial system.
Describing AI is not straightforward; not even the experts agree. Let’s
start with machine learning, a computer algorithm that uses available data
to learn about the world that created that data. The algorithm studies all
the patterns and complicated causal relationships. What is magical is that
it can do so without human intervention: unsupervised learning. That is
different from the way we usually do science, where we start with some
idea of how the world might work—a theory—and see if data is compat-
ible with that theory.
A supermarket might use machine learning to figure out where best to
place Coca-Cola cans to maximize sales. The data scientist takes all the
historical observations on Coca-Cola sales, the weather, and demograph-
ics. She runs the data through her machine-learning algorithm and then
tells the supermarket where best to place the cans on the shelf. That is a
lot of data. For a chain like Walmart it can easily be quadrillions of ob-
servations. Precisely what is called big data. The critical thing is that the
machine-learning algorithm doing this doesn’t need to know anything
about Coca-Cola or supermarkets—it just gets data and finds the patterns.
There is no such thing as a free lunch, and there is a trade-off. Machine
learning needs a lot of data, much more than most other statistical appli-
cations. It needs big data. The reason is that it knows nothing about the
world and so has to learn everything from the data. Human beings know
the world and bring prior information—cultural, economic, historical,
and so on—to bear on a problem and thus need a lot less data. They
know theory, allowing traditional statistics to work with small datasets.
While machine learning is all about extracting information from a data
set, the objective of AI is to make decisions based on that data. AI is used
to make a lot of decisions today. In the 2000s British comedy show Little
Britain, a recurring sketch featured a bank loan officer named Carol Beer,
who responded to every customer query by typing into her computer and
then answering, “The computer says no,” even to the most reasonable of
requests.
The term AI is a bit of a misnomer, though. The AI of today isn't intelligent in the sense a human being is intelligent. It merely applies a lot of what-if rules. If the traffic light flashes red, stop. If it flashes green,
look at the traffic and then go. AI replicates the human brain, sort of. The
average human brain has eighty-six billion neurons, all interconnected by
synapses to form neural networks. A computer with a large enough num-
ber of artificial neurons wired in the same way could become intelligent—
in theory. We haven’t reached that stage yet. We are still competing with
the average insect. Take cockroaches. They are among the smartest of in-
sects and can learn and adapt to their environment, not to mention being
the only animal that supposedly will survive a nuclear war. They are not
the most social of animals but do exhibit complex social behavior. AI has
not caught up with cockroaches. In the words of the theoretical physicist
Michio Kaku, “At the present time, our most advanced robots . . . have
the collective intelligence and wisdom of a cockroach; a mentally chal-
lenged cockroach; a lobotomized, mentally challenged cockroach.”1
Moore’s law, named after Gordon Moore, the cofounder of Intel,
observed that the number of transistors in a microchip doubles every
eighteen months. Will that help? That was exactly what the technology
enthusiast I debated with argued. When I did my PhD, I ran my com-
puter code on a $27 million Cray Y-MP supercomputer. Time on the
Cray was carefully rationed, and the computing jobs of an economics
PhD student did not get first priority. I learned that the queue of jobs
tended to finish early over the weekend, so by waking up at four on Sun-
day mornings and going to the office I could have the Cray all to myself.
I did this way too often. When I was in Paris recently, I saw a Cray Y-MP
in the Science Museum. The iPhone in my pocket is many times faster.
The speed of computation has increased exponentially since before I was
born, and it shows little sign of slowing down.
Will Moore’s law help AI catch up with human intelligence? The short
answer is no. Moore's law is about the speed of computations, and raw speed is not intelligence. Self-driving cars, for instance, still struggle in conditions like deep snow, where all the signs are hidden. Perhaps the worst challenge for
a car-driving AI is human beings. We are unpredictable and can behave
in a way that upsets self-driving cars. In response, some AI designers have
proposed reprogramming humans.3
The more we move away from the confined space of board games and
driving on the highway, the worse AI performs. It is not good at play-
ing games in which information is incomplete and the action space ill-
defined. I suspect it would not do well in my favorite game, Diplomacy. I
once played Diplomacy over the internet with a few friends, one of whom
is a politician. He beat us hands down, showing deviousness and strategy
that none of the other players had. I find it hard to believe AI will beat
him any time soon. It is especially challenging for AI if the rules evolve
during play, as happens in most human endeavors.
I have my own personal test for robots and artificial intelligence. Today,
I can pay someone who has never been to my house $100 to do my laun-
dry. He comes to my university office, and I give him money, keys, and
my home address. He finds his way to my house, gets in, finds my laundry
baskets and washing machine, figures out how to operate it and where
to find the detergent. He manages to do the laundry and put it into my
closet and cupboards, and then he slides my keys through the mail slot
on my door when done. All without any explanation or instruction. The
technology involved is older than I am. When I find AI that can do that,
I will be impressed.
that other major financial centers also develop their versions of BoB, like
Fran and Edith, and all the AIs are friendly and cooperate with each other.
The financial institutions will also have their AI: Gus, Mary, and Betty.
Is this a pie-in-the-sky futuristic vision doomed to failure, like the flying
cars of the 1970s? No. While BoB and friends don’t exist yet, the technol-
ogy to create them is already here. Well, most of it. We just lack the will.
Roboregulators already exist in microprudential regulatory agencies.
The latest buzzword is RegTech, short for regulation technology. Its
chief cheerleader, the UK’s Financial Conduct Authority (FCA), defines
RegTech as the “adoption of new technologies to facilitate the delivery
of regulatory requirements.” I was involved with RegTech research over
the past few years because of a joint research program conducted by the
FCA, my research center, and other interested parties.
The starting point is the rulebook, all the rules and regulations which,
if printed out, create a stack of paper two meters high. The FCA has
translated the rulebook into an AI engine so it can check it for incon-
sistencies and give faster and better advice. Financial institutions corre-
sponding with the FCA bot find that it answers much better than its
human colleagues. AI is also revolutionizing risk management in banks.
The first step in creating a risk management AI is to develop and manage
riskometers, an easy task for AI. It quickly learns all the approved models,
the data is readily accessible, and it is easy for it to create riskometers. A
lot of financial institutions have AI engines today that do precisely that.
The AI then needs to learn about all a bank’s investments, the individuals
who made them, and, voilà, we have a functioning risk management AI.
The necessary information is already inside banks’ information technol-
ogy infrastructure, and there are no insurmountable technological hur-
dles along the way. And if there are, just use Aladdin or RiskMetrics. The
cost savings will be enormous. The bank can replace most risk modelers,
risk managers, and compliance officers with AI. The technology is already
here. All that remains to be done is to inform the AI of a bank’s high-level
objectives. The machine can then automatically manage risk, recommend
who gets fired or gets a bonus, and advise on how to invest.
Risk management and microprudential supervision are the ideal uses
for AI—they enforce compliance with clearly defined rules and processes
based on vast amounts of structured data. They have access to closely
The ability to successfully scan the financial system for systemic risk
hinges on where the vulnerability lies. Everyday factors, well founded in
economic theory, drive financial crises. Yet the underlying details are usu-
ally unique to each event. After each crisis, regulators and financial insti-
tutions learn, adapt processes, and tend not to repeat the same mistakes.
I suspect that BoB will focus on the least important types of risk, the
exogenous risk that is readily measured while missing the more dangerous
endogenous risk. It will automate and reinforce the adoption of mistaken
assumptions that are already a central element of current crises. In doing
so, it will make the resulting complacency even stronger. In other words,
BoB will be the kind of Soviet central planner that Hayek warned us against
in his article “The Use of Knowledge in Society.” The problem is one of
how information aggregates, and BoB will have all the individual pieces
but will not know how to connect the dots. BoB measures all the minute
details of how a financial institution operates so he can understand all the
risks. He then aggregates the results to quantify not only the risk of an
individual institution but also systemic risk. Quadrillions of bits of tiny
risks end up as simple summary measures, like the risk-weighted assets of a
bank or the European Central Bank’s systemic-risk dashboard.
How well does that work? About as well as it did for Gosplan, the State
Planning Committee of the USSR: almost all the relevant information is
lost. Okay, you may retort. The current human-centered setup is no bet-
ter. Nothing in this tells us AI makes it worse, and it could certainly do
better because it will be much better at solving the aggregation problem.
Perhaps, if it were not for trust.
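What gets lost in aggregation can be made concrete with a toy example of my own (the numbers are invented, not from any dashboard): two worlds in which a summary measure that simply adds up individual risks reports the same figure, while the systemic risk differs a hundredfold.

```python
# Toy example: aggregate summary measures hide correlation.
# Two banks each have a 1% chance of failure. A dashboard that adds up
# the individual risks reports the same figure in both worlds below.

p_fail = 0.01

# World A: the banks hold unrelated assets, so failures are independent.
p_both_independent = p_fail * p_fail   # joint failure: 1 in 10,000

# World B: the banks hold the same assets, so they fail together.
p_both_correlated = p_fail             # joint failure: 1 in 100

summary_a = p_fail + p_fail  # what the dashboard sees in world A
summary_b = p_fail + p_fail  # ...and in world B: identical

print(summary_a == summary_b)                         # True
print(round(p_both_correlated / p_both_independent))  # 100
```

The correlation between the pieces, which is where systemic risk lives, never makes it into the summary.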
I have presented the work discussed here many times, and the most
frequent pushback I get is on the question of trust: “Jón, I may buy
your conclusions, but it doesn’t matter because we will never put AI in
charge of anything important.” I disagree because trust has a sneaky way
of creeping up on us. Twenty years ago almost nobody trusted internet
banking. Then, seeing that it works, almost everybody today uses online
banking. Very few people would have trusted computers to fly aircraft or
drive cars a quarter of a century ago, and now we mostly do. We are happy
for AI to control surgical robots. AI is proving its worth in critical day-to-
day applications, and that creates trust. The more we see AI outperform-
ing human beings, the more we prefer AI decision makers to humans.
The guardians of financial stability, the central banks, are already us-
ing AI. I recently gave a talk on AI at a central bank, where the audience
assured me AI made no decisions. Well, that day I could not easily enter
the central bank because the AI controlling the security system didn’t like
me. When I told that to the audience, they responded by saying, “We
meant AI controls nothing important.” They got it wrong. The central
banks already employ AI in a variety of places, doing a good job today,
building trust. And there are plenty of cost savings to be had. The staff
might not like it, but the board sees the benefits of replacing an army of
PhD-level economists with computers.
And herein lies the problem. In the 1980s an AI called EURISKO used a
cute trick to defeat all of its human competitors in a naval wargame: it
sank its slowest ships to achieve better maneuverability than its human
competitors. An example of what is known as reward hacking, something
human beings are, of course, expert in.5 EURISKO’s creator, Douglas
Lenat, notes that “what EURISKO found were not fundamental rules
for fleet and ship design; rather, it uncovered anomalies, fortuitous inter-
actions among rules, unrealistic loopholes that hadn’t been foreseen by
the designers of the TCS simulation system.” Each of EURISKO’s three
successive victories resulted in rules changes intended to prevent a repeat.
The only thing that proved effective in the end was to disinvite Lenat and
his AI.
And that is the problem with AI. How do we know it will do the
right thing? Human admirals don’t have to be told they can’t sink their
own ships. They just know. AI has to be told. But the world is complex,
and it is impossible to create a rule covering every eventuality. BoB will
eventually run into cases in which he makes critical decisions in a way no
human would. And that is where humans have the advantage over AI. Of
course, human decision makers mess up more often than BoB. But there
is a crucial difference. The former also come with a lifetime of experience
and knowledge of relevant fields, like philosophy, history, and ethics, al-
lowing them to react to unforeseen circumstances and make decisions
subject to political and ethical standards without it being necessary to
spell them out.
Before putting humans in charge, we can ask them how they would
make decisions in hypothetical scenarios and, crucially, ask them to justify
of a better phrase, I will call these people hostile agents, those intent on
exploiting the system for private gain by optimizing against the system.
People do this all the time, and there is nothing uniquely AI about it. But
AI has inherent weaknesses that make optimization against the system
particularly dangerous.
The starting point is that the very fact we exercise control changes
the system—exactly why the epigraph above is so prescient. The world
looks different with and without the control. In most applications, opti-
mization against the system is not very important. Human drivers exploit
self-driving cars for their own advantage. They know the self-driving car
follows the rules and drives conservatively and predictably. It is easy for
the human to gain an advantage when it comes to merging traffic, at four-
way stops, and in any situation in which drivers are competing. However,
such optimization against the system is quite limited. The cost of failure
is local and small, and, most important, the drivers do not influence the
rules of the game.
And that is the crucial difference between most AI applications and the
regulation of the financial system. While human drivers taking advantage
of a Tesla on the freeway cannot build new roads to disadvantage the
Tesla or change the traffic rules or even move signs around, their counter-
parts in the financial system can do all of that. Because the rules and the
structure of the financial system are mutable, the financial system is a par-
ticularly fertile ground for optimization against the system. There is a lot
of money to be made, the adversaries are smart and well resourced, the
system is infinitely complex, and there are plenty of places to misbehave.
Ultimately, the financial system’s controllers are forever doomed to be on
the losing end of a cat and mouse game.
The complexity of finance offers many ways for economic agents to
bypass regulations, perhaps by creating new financial instruments that
mimic controlled instruments but are regulated differently, the driver of
much financial innovation. A good example is high-frequency trading.
The SEC in the United States has repeatedly changed the rules, but the
high-frequency trading firms are still quite able to exploit regular inves-
tors, as noted in Michael Lewis’s book Flash Boys.
Most hostile agents don’t deliberately break the law. Most, but not all.
Rogue traders have always been around. Take Nick Leeson and Kweku
Adoboli. Leeson was trading futures contracts for the Barings bank in
Singapore and exploited the “error account,” numbered 88888—mean-
ing super lucky in Chinese. It was supposed to be used to correct mistakes
in trading, but Leeson used it to hide his trading losses, which amounted
to $1.4 billion. Adoboli, trading on behalf of UBS, was making unau-
thorized trades, entering false information into the bank’s computers to
hide his actions. His eventual losses amounted to $2 billion. Leeson and
Adoboli were just individuals who illegally manipulated the control sys-
tems, and, though materially important to their employer, their impact
on society was negligible.
A more insidious example of optimizing against the system is when an
entire bank or a group of financial institutions joins forces in destabilizing
the system. There may be nothing illegal about what they are doing. It is
quite possible they don’t even know that they are destabilizing, and those
in charge may be blissfully unaware of the consequences of what is going
on. A good example is all the dangerous financial instruments created be-
fore the crisis in 2008. Nobody had an overview of all the CDOs and con-
duits and all the other nefarious instruments that proved so damaging.
They were all optimizing against the system in their own little parts of it,
where everything looked okay. It was damaging only in the aggregate. We
don’t need a single big entity. The consequences are just as serious if op-
timization against the system involves many small, hostile agents. Their
profit may be maximized if they coordinate their hostile actions. Acting
as a wolf pack, sharing information, outwitting the controllers.
Even worse is a hostile agent intent on causing damage, a terrorist or a
rogue nation. It is even harder to detect such agents because they don’t
play by the standard rules. The rogue trader is motivated by profit and a
desire not to be caught, limiting what he gets up to. If someone doesn’t
care about making a profit, it is much easier to cause damage. Can AI
solve these problems? After all, BoB can patrol the financial system much
more extensively than any human being can and use machine learning to
find all the hidden connections. He knows everything that has happened
and is a much better enforcer than humans. The microprudential AI will
have the advantage over hostile agents. The stakes are small. There is
ample information about repeated events, plenty of data for the AI to
train on and learn all the hostiles’ nefarious tricks. And that is why AI
defenses. It is hard to randomize responses when the rules are simple and
crystal clear. And the rules don’t change very often. The Basel rules are
updated every couple of decades or so. Even at the local level, the relevant
laws have to be passed by Parliament and then implemented by the supervisory
agency, a process that is slow and transparent, with plenty of lobbying.
When someone hacks financial regulations, the regulator cannot respond
within hours or days. It might have to wait decades.
Then we have the silos. When the Icelandic banks failed in 2008, I
talked to a senior European regulator about why the problems and misbe-
havior were not detected. He said, “Simple. The banks did not misbehave
in any single jurisdiction. It was only in aggregate that they were causing
serious damage.” While the European authorities have now plugged that
particular loophole, the problem remains. Even individual countries have
multiple regulators, controlling ill-defined and ever-changing domains.
When the German payments company Wirecard failed due to fraud in 2020, the
relevant regulator, BaFin, disclaimed any responsibility because it said it
had decided it did not need to regulate Wirecard. Apparently nobody had
responsibility for one of Germany’s largest listed financial firms. The
regulators vigorously patrol the boundaries of their domains. It’s even worse
internationally, where regulators are jealous of their data and powers,
refusing to
share and cooperate across borders. All helping the hostile agents to oper-
ate across jurisdictions without hindrance and frustrating AI. Ultimately,
the AI engine’s innate rationality, coupled with demands for transparency
and fair play, puts it at a disadvantage compared with human regulators.
BoB might still have a fighting chance against the hostile agents if the
financial system’s structure remained static. If the system never changes,
BoB learns more about the system every day and one day might be able to
do a perfect job. The problem is that the financial system isn’t static. Not
only is it infinitely complex for all practical purposes, it is endogenously
infinitely complex, meaning the complexity is continually evolving in re-
sponse to what everybody in the system is up to. The worst outcomes
happen when seemingly unconnected parts of the financial system reveal
previously hidden connections. The vulnerabilities spread and amplify
through opaque channels, the dark areas nobody thought to be worried
about. Competition makes the system adversarial, and any rules aiming
to contain risk-taking become obstacles to be overcome.
The hostiles take full advantage. They are like cockroaches, scurrying
out of the lit areas the authorities monitor into the shadows, and in an
infinitely complex system find plenty of dark areas to feed on. The hos-
tiles, like everybody working in the financial system, have an incentive to
increase its complexity in a way that is very hard to detect. There are many
ways to do so, perhaps by creating new types of financial instruments
that have the potential to amplify risk across apparently distinct parts of
the system.
All of these factors frustrate BoB’s mission, but the ultimate problem
is that BoB isn’t smart enough to ensure financial stability and never will
be. His computational problem is harder than that of his adversaries. BoB
has to patrol the entire financial system, searching for the hidden corners
inhabited by the hostile agents where instability thrives. The hostiles have
to find only one vulnerability, one dark corner where they can feed. And
that is a much simpler problem. The hostiles always have a computational
advantage over BoB. Moore’s law or technological developments do not
help. The more we use AI in the financial system, the more the advantage
tilts toward the hostile agents.
The past informs the tools, but the threats come from the future.
government stays away from finance, or the socialists, who want tough
regulations to bend it to the will of society. And then there is the way for-
ward I will propose in chapter 14, putting the inherent stabilizing forces
of the financial system to good use.
jective and overcoming all the objections from the special interests? Start
with learning from history, the last financial crisis, 2008, with a detour
to Voltaire. In his book Candide, ou l’Optimisme (1759), Voltaire tells
the story of a young man, Candide, who lives a sheltered life in a sup-
posed paradise on earth. His mentor is Professor Pangloss, who likes to
proclaim “All is for the best in the best of all possible worlds.” We get
from him the word “Panglossian,” an excessively optimistic view of the
world.
The global crisis in 2008 should not have happened. The years before
2008 were the great moderation, evoking the 1920s’ “permanent era of
stability.” The financial engineers had tamed risk. Yes, there were losses
here and there, even large ones like Enron or WorldCom, and developing
countries did have their crises, like Korea in 1997. But even then it was
the accountants and lawyers and politicians who failed, not the financial
engineers. Crises were a thing of the past, at least in the developed world.
Even in the developing world crises happened only when the good advice
of the IMF was rejected. By 2007 we were so safe that the IMF itself was
on the verge of being drastically downsized because there were no crises
and nothing for it to do.
It was the best of all possible worlds. The rules and institutional struc-
ture, founded on the rigor of scientific risk management, protected us.
The riskometers told us we had never been as safe as in 2007. For good
reasons. We have used statistical analysis to manage risk ever since Blaise
Pascal in the seventeenth century. The three waves of Kaizen took financial
engineering to the pinnacle of its success. The financial industry collects
vast amounts of data, which it processes with sophisticated models on
superfast computers, all overseen by highly educated financial engineers,
graduates of the world’s best universities. How can that not make the
financial system do what we want? Elementary. While the financial en-
gineers focus on all the micro risks—the grains of sand being examined
above (Figure 38)—new and dangerous forms of risk emerge where no-
body is looking—the endogenous risk monster that had been hiding all
along. Why did we miss it? Because the financial system is infinitely com-
plex, the financial engineers can only patrol a tiny part of it, while the
monster hides where they are not looking. The Panglossian great mod-
eration, meanwhile, encouraged even more risk-taking, à la Minsky. And
we did not know because the riskometers measure only exogenous risk,
not endogenous risk.
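A riskometer of this kind is easy to caricature in code. The rolling-window volatility below is my own minimal stand-in, not any bank's or regulator's actual model, but it shares the defining flaw: it reads only recent history, so it is at its calmest just before the crash enters the data.

```python
import statistics

def rolling_volatility(returns, window=20):
    """A toy riskometer: the stdev of the most recent `window` returns."""
    return [statistics.pstdev(returns[i - window:i])
            for i in range(window, len(returns) + 1)]

# One hundred quiet days, then a crash on the final day.
quiet = [0.001] * 100
with_crash = quiet + [-0.20]

calm_reading = rolling_volatility(quiet)[-1]        # 0.0: "never been safer"
crash_reading = rolling_volatility(with_crash)[-1]  # spikes only after the loss

print(calm_reading, crash_reading > calm_reading)
```

The instrument works exactly as designed; it simply cannot see a risk that has not yet shown up in its window.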
Then—boom!—2008 happened. Northern Rock, Bear Stearns, AIG,
and finally Lehman. We had a full-blown global crisis on our hands. The
exogenous risk indicators shot up. The authorities reacted quickly, ini-
tially with improvised responses and eventually the doctrine of macropru.
Are we protected? I don’t think so. Regulations are backward looking,
aiming to prevent the mistakes of the past. The rules that would have pre-
vented a repeat of 2008 ossify, while the world continually changes. The
market participants become increasingly good at evading the rules—new
risks get taken where nobody is looking, and the cycle repeats (Figure 39).
When you start hearing the world described in terms of a permanent era
of stability or the great moderation or growth and safety being ensured
by financial engineers or your country’s prosperity being due to cultural
Cognitive Failures
The CIA had a lot of information on Al Qaeda in early 2001 and
much forewarning of the 9/11 attack. In his book Rebel Ideas: The Power
of Diverse Thinking, Matthew Syed clearly illustrates the cognitive failures
that led the CIA to dismiss the threat. The CIA staff was predominantly
upper-middle-class, white, Ivy-League-educated men, and when all the
intelligence on Al Qaeda was filtered through their cultural lens an attack
was inconceivable. Like the CIA in 2001, so many fund managers, bank-
ers, and regulators have enormous power and resources. What frustrates
their mission is four cognitive failures that blind them to the threats and
opportunities. They make facts fit their biases, and when they don’t, find
other facts.
The first cognitive failure is the fallacy of composition, whereby we infer
that what is true of the parts must be true of the whole: “Hydrogen (H) is
not wet. Oxygen (O) is not wet. Therefore, water (H2O)
is not wet.” The fallacy of composition in financial regulations is that if
all the individual micro risks are kept under control, the entire financial
system is safe, implying that the financial system is the simple sum of all
the individual activities within it, so all the scientist has to do is study the
grains of sand on the beach (see Figure 38).
Suppose each and every bank is prudent—they are all Volvos, as I called
them earlier. They all do what they are supposed to do, with none taking
crazy risks. Still, they have to make risky investments like mortgages and
loans to small- and medium-sized enterprises; otherwise, they would not
be a bank. Some of those risky investments have a market price, but the
value of the majority can be determined only by a model, while the risk in
all of them is measured by riskometers. The fallacy of composition means that even if
all the banks are prudent, the system is not safe.
Blame it on shock absorption, or rather the inability of prudent banks
to absorb shocks. Suppose some external shock arrives—Brexit or a tsu-
nami or Ukraine or Trump or an earthquake or China or Covid-19, or
just anything—so that the price of some assets held by the Volvo banks
falls. The consequence is that exogenous risk shoots up, inevitable by de-
sign since the riskometers are based on short-term historical fluctuations.
What is our Volvo supposed to do? Dispose of its riskiest assets and hoard
liquidity, of course, because that is prudent. But such selling will just
make the prices fall more, which makes exogenous risk increase further.
Cue more selling and a vicious feedback between falling prices, evapora-
tion of liquidity, and higher risk. If there is no buyer (because all possible
buyers are prudent), it can end only in tears (crisis). Nobody does any-
thing wrong. It is like passing a hot potato from one person to another:
they all pass it on in order to not get burned by it (Figure 40). Financial
stability is not achieved by making all the financial institutions prudent.
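The hot-potato dynamic can be caricatured in a few lines. This is a deliberately crude toy model of my own, with invented parameters: every prudent bank sells whenever measured risk breaches its limit, and the selling itself pushes the price down and the measured risk up.

```python
# Crude toy model of a fire-sale spiral (invented parameters).
price = 100.0
measured_risk = 0.10   # riskometer reading just after the initial shock
risk_limit = 0.08      # every prudent bank's risk limit
price_impact = 0.95    # each round of joint selling knocks 5% off prices
risk_feedback = 1.20   # falling prices push measured risk up 20%

rounds = 0
while measured_risk > risk_limit and rounds < 10:
    price *= price_impact           # all the Volvos sell at once...
    measured_risk *= risk_feedback  # ...so the riskometer reads even higher
    rounds += 1

print(rounds, round(price, 1))  # ten rounds later the price is ~59.9
```

Because every bank follows the same prudent rule, the selling never finds a buyer; the loop ends only when something outside the model, a bailout or a crisis, breaks it.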
The second cognitive failure is invariance, the view that the financial
system does not change when observed and controlled. After all, the
world of physics is invariant, so why not the financial system? There is,
though, a crucial difference: endogenous risk. Anybody who interacts
with the financial system changes it, whether as asset managers, salespeo-
ple, regulators, journalists, investment bankers, university professors, or
finance ministers. Some significantly, others not so much. But while their
impact can be tiny, it is never zero. That may not matter much, and most
of us can safely behave as if risk were exogenous. But for the guardians of
financial stability and those worried about tail risk, like pension funds, the
Some of the most powerful and destructive forces in the financial system
not only lie on the boundaries of the silos but also actively exploit them.
The most damaging consequence of the cognitive failures is short
termism—the dissonance of the short and long run. Almost every eco-
nomic outcome we care about is long term. Pensions, the environment,
house prices, education, you name it, all are about what happens years
and decades hence. The short run isn’t all that important. Day-to-day
fluctuations in stock prices or real estate values or loan portfolios or inter-
est rates don’t matter much to most of us. So, does the way we measure
and manage financial risk reflect the importance of the long run? By and
large, no. We proclaim we care about the long term but actually just end
up managing short-term risk.
The reason is simple. It is really hard to measure long-term risk because,
after all, extreme infrequent events are, by definition, very scarce. The
problem of measuring long-term risk is entwined with that of standard
risk management practices. Consider a sovereign wealth fund that cares
about very long-term risk, decades into the future, where such a time
perspective is written into its laws and mandates. However, the fund is
monitored quarterly, and if it performs poorly over a few quarters, ques-
tions are raised, bonuses may not be granted, the head of the fund could
be summoned to appear in front of a parliamentary committee, and some
people may be fired. That makes the managers of the sovereign wealth
fund care about quarterly, not decennial, performance, regardless of
what the legal mandate says.
And the implications are unfortunate. If the short-term risk dashboards
are reassuring, as they were in 2006, we may easily take on undesirable
levels of risk, oblivious to the dangers. The impact on financial markets
will be lower volatility and fatter tails—day-to-day fluctuations become
smaller, while the chance of catastrophic long-term outcomes increases.
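The combination of lower volatility and fatter tails is easy to show numerically. The two return streams below are a stylized example of my own: the "calmed" series wiggles half as much day to day but hides one catastrophic loss.

```python
import statistics

# Stylized example: calmer day-to-day markets, far worse tail.
normal_days = [0.01, -0.01] * 500    # 1,000 ordinary trading days
calmed_days = [0.005, -0.005] * 500  # half the daily wiggle...
calmed_days[-1] = -0.30              # ...plus one hidden disaster

typical_normal = statistics.median(abs(r) for r in normal_days)
typical_calmed = statistics.median(abs(r) for r in calmed_days)

print(typical_calmed < typical_normal)      # True: day to day looks safer
print(min(calmed_days) < min(normal_days))  # True: the worst day is far worse
```

A short-term dashboard watching the typical daily move would prefer the second series right up until the day it shouldn't.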
There are real world consequences to all of that. Economic growth
suffers. Growth has been slowing in the developed world over the past
few decades, a phenomenon called secular stagnation (Figure 42).1 “Hold
on, Jón,” I suspect many of you will think when reading this. “There are
many causes of secular stagnation, it is poorly understood, and to blame the
cognitive failures is a bit far-fetched.” Fair enough, they are not the only
cause, but they do make it worse. For the economy to grow, we need risk:
The Covid-19 crisis in 2020 and the global crisis in 2008 are a
lens for looking at how the financial authorities think the financial system
should be controlled. While there is no single term or phrase that cap-
tures how they see their mission, I would like to propose “the modern
philosophy.”
When I asked some friends working for the regulatory agencies what
they thought of my definition, they all agreed with the first two parts.
The third is more controversial and not how many people working for
the macroprudential authorities see it. (The microprudential regulators
would agree with my third part.) But my interlocutors agreed that those were
only words and personal opinions; actions were consistent with all three parts.
One of them added that the annual Risk Monitoring Exercise epitomized
the problem.
Covid-19 nicely demonstrates the modern philosophy in practice. The
initial Covid-19 shock was purely exogenous, so the virus was akin to
Archduke Franz Ferdinand’s assassination, which set in motion the systemic
crisis of 1914. What happened after the virus hit was the typical endoge-
nous response. The epidemiological, social, and economic outcomes were
all the result of the interaction of the human beings who make all the
decisions—endogenous risk.
Covid did not cause a financial crisis. At most, we suffered turbulence.
Why? There are two schools of thought. The authorities maintain it is
thanks to both the post-2008 regulations leaving the financial system
highly resilient and also the swift policy responses in March and April
2020 (system bailout). In the words of the head of the Basel commit-
tee, Pablo Hernández de Cos, in 2021, “The global banking system has
remained broadly resilient. . . . The initial Basel III reforms, alongside an
unprecedented range of public support measures, are the main explana-
tions for this outcome.” Since the initial shock was purely exogenous, I
suspect the financial system would have had little trouble absorbing the
virus shock even without the bailout and Basel III. The FSB, with input
from all the leading financial authorities, published a report in November
2020 on what went wrong, how they responded, and lessons learned. The
document is called the Holistic Review of the March Market Turmoil and
sheds light on how the policy authorities see their mission:
In other words:
Left unsaid, but implied, is that the template for regulating nonbank
financial institutions (NBFIs) is bank regulation. So did it all work as well as the FSB suggests in
its Holistic Review? My collaborators and I have been looking at that very
question. Two days after the worst day in the equity markets in March
2020, we published a paper, “The Coronavirus Crisis Is No 2008,” on
how the Covid-19 crisis differed from the 2008 crisis. We concluded that
as the Covid-19 turmoil was not the same as that in 2008, a new policy
response was called for. In particular, liquidity injections would not be
very effective this time around.
Our subsequent research has continued to support these conclusions in
a paper titled “The Calming of Short-Term Market Fears and Its Long-
Term Consequences: The Central Banks’ Dilemma.” We took advantage
of a unique data set on option markets that allowed us to identify how
the financial markets’ fear of large losses changed owing to all the central
bank interventions.3 As the data is rich in time and space, spanning a large
number of stocks and countries and maturities from one week all the way
up to thirty years into the future, we have a comprehensive picture of how
the markets reacted to the Covid interventions.
The primary objective of the policy interventions in March and April
2020 was the immediate calming of market fear, but only in the short run.
Long-term fear would ideally not fall since that signals moral hazard. But
that is precisely what happened. The impact on long-term market fear,
even ten years into the future, was larger than the impact in the short
term. The lesson the financial markets took from the interventions was that the central
banks stand ready to do what it takes. The market will be less vigilant and
take on more risk—moral hazard.
I can demonstrate that in more detail by taking one example out of our
paper: the Fed relaxed the bank capital requirements under which JP Morgan,
one of the world’s largest banks, operates. Fear in JPM’s stock price fell
significantly in response, in both the long and the short run. As the lowering of bank
capital requirements is explicitly designed to make banks take more risk,
one would have expected long-term fear to increase. However, the mar-
kets took it the other way, long-term fear fell—moral hazard increased.
That result, and others in our paper, lead me to conclude that the mod-
ern philosophy is destabilizing. Why? History provides guidance. The
economics profession went off the rails in the 1950s and 1960s, after John
Maynard Keynes died and before the pathbreaking work of Robert Lu-
cas. The prevailing view in those lost years was that one could control
the economy with a static framework, what I have termed Excelonomics.
Achieve policy objectives—high growth, low unemployment, low infla-
tion—by tweaking the parameters of a static economic model. The finan-
cial authorities and the governments loved it, as this way of doing eco-
nomics made them feel powerful, even omnipotent. Except it didn’t work.
The Lucas critique showed us why. Expectations matter, and economic
agents react to policies in a way that undermines the policy objective. All
we got was the word that defined the 1970s, “stagflation”: inflation combined with stagnation.
Expectations also matter when it comes to macroprudential policy.
Banks were strongly affected by the 2008 crisis, so the policy authorities
sharply increased the intensity of regulations and the required levels of
bank capital. When Covid came, it all seemed to work splendidly since
the banks were hardly affected by the virus. But the risk now had spilled
over to the shadow banking system. You see, the economic agents that
comprise the financial system do not take regulations lying down. They
react to them, changing the financial system in the process. Once the
regulations take effect, they apply to a system that no longer exists: the
Lucas critique.
The authorities now ask the right question: “What can we do to pre-
vent a repeat?” but come to the wrong answer: “More regulations and
more control.” Fair enough, that might be the political outcome you
desire, but there are consequences. The first is that diversity suffers since
the lesson learned is that the nonbank sector needs to be brought under
official control that is similar to that of the banking sector. Financial in-
stitutions will become more like each other and the financial system more
procyclical. Systemic risk will increase.
Furthermore, the financial authorities gain even more power. But that
power comes at a cost. To begin with, the more power a state agency
has, the more democratic oversight it needs. We can’t have unelected
bureaucrats making decisions of fundamental importance to society when
they have no direct democratic legitimacy. But even worse, it makes
the financial authorities even more responsible for stability. They have
all the information and the power, and hence get blamed when things
go pear-shaped. And then it is much harder to resist bailing out private
institutions.
Then we have the very high cost of the new regulations, and who pays
the cost? The banks’ clients—us. Not necessarily a big problem in the
United States, where only a third of financial intermediation comes via
the banks. But in Spain, the home country of the head of the Basel com-
mittee (Governor Pablo Hernández de Cos), 96 percent of company fi-
nancing is provided by the banks. Spain is not exactly doing well econom-
ically and might think that the worst policy would be to further increase
the cost and reduce the availability of company loans.
And finally the political consequences. Bailouts are dangerous things,
and nobody likes them except the recipients. They drive populism and
moral hazard and undermine the credibility of the state. If the financial
authorities can do no better than design a setup requiring a bailout every
decade, neither the authorities nor the governments that empower them
appear to be competent or honest. Fortunately for the authorities, all the
Covid-19 bailouts passed unseen, lost in all the virus hoopla, unlike what
occurred in 2008.
The ultimate consequence of the modern philosophy of financial regu-
lations is to make us question the private financial system. Why not bring
it under direct state control?
Cryptocurrency advocates draw a different conclusion: the state is incompetent and cannot be trusted. Repeated crises, with their bailouts and quantitative easing, mean it would be much better to replace the central banks
with algorithms—mathematics can be trusted. In steps bitcoin. It is a
complicated subject, meriting much more space than I have here, and
I have written extensively on it elsewhere.4 The specific form of money,
whether the fiat money we use today or cryptocurrencies, is not all that
important for the objectives of financial policy; what matters is the power
the financial authorities have and how they exercise it. So, cryptocurren-
cies are not the solution.
What, then, about their cousin, central-bank digital currencies, or
CBDCs? The idea is that the central banks create a new digital form of
the fiat money every country uses today. Perhaps as a token on a block-
chain. There was a lot of enthusiasm about CBDCs a few years ago. They
promised to solve so many of the problems we have with the financial
system, allowing targeted bailouts, the fine-tuning of the money supply,
and provision of ample information about what financial institutions are
up to, especially all the liquidity flows. The central bank governor can
then manage everything with their AI—BoB. Then the disadvantages be-
came clear. In their purest form, CBDCs mean the central bank controls
all the money in the economy because it controls the blockchain. Every
transaction is visible to it, so the central bank not only closely monitors
what citizens are up to but also makes all the loans. We don’t want that,
so today’s CBDC proposals aim merely to improve the payments system.
Here, the authorities are haunted by PayPal, which came out of nowhere two decades ago; by the time they woke up, it was too late to do anything about it. So, the primary motivation for CBDCs today is to
forestall alternative payment systems, especially those under the control
of foreign companies. Certainly a worthwhile goal, but one that does nothing to
solve the problems I am discussing here.
If technology is not the solution, then what about politics? On the
libertarian wing of the political spectrum, the root of the problem is the
state, especially its regulations, bailouts, and currency mismanagement.
By forswearing regulations and bailouts, we will see crises, but not nearly
as many as now, since the reason we need bailouts is that the govern-
ment promises them. That is true, but the laissez-faire position makes
sense only in theory, not in practice, because in a democratic society we
small price drop in 1987 and 2007, end of war in 1763, etc., etc. The trig-
gers are as varied as they are numerous. The key difference between the
triggers and the fundamentals is visibility—the triggers are simple and
for all to see, while the fundamentals are obscure. And that very visibility
leads us down the wrong path: a trigger that causes a crisis today may
whimper out into nothing tomorrow. It would be much better to ignore
the triggers and focus on the fundamental driver of crises and bad perfor-
mance—endogenous risk.
Figure 44. Monster under the bed. Credit: Copyright © Ricardo Galvão.
mitigate the worst. We were already in a crisis. And then, once something
bad happens, we like to learn the lessons. Figure out what went wrong so
it can never happen again—closing the barn door after the horse escapes.
That also brings false resilience. The forces of instability congregate in the
dark areas where no one is looking, so, almost by definition, the danger
will emerge somewhere else next time.
The main driver of false resilience is that the risk we measure, all the
micro risks, tends not to be the type of risk we most care about. The rea-
son is the riskometer, the magical device that pops out measurements of
financial risk when plunged deep into the bowels of the financial system.
Financial regulations, risk control, and portfolio management depend on
the riskometer, more and more every day. Why? Because it is seen as
scientific and objective, helping decision makers collapse a complicated
problem into a small set of precise numbers on a risk dashboard.
And that is where things go wrong. The riskometer is not nearly as
scientific and objective as its proponents think. They have been waylaid
by precise scientific instruments, like the thermometer, which allows us
to measure temperature as accurately as we want, in real time. There is
only a single unambiguous notion of what temperature is, and it is easy
to use the thermometer to control temperature with real-time feedback
techniques. When the temperature is too high, turn down the thermostat; that’s why keeping the risk manager’s office at a steady 72°F or 22°C is easy. We can’t implement such feedback mechanisms with most financial risk, even though plenty have tried. Why? To begin
with, there is no uniform view of what is important and hence what risk
to target. Is it day-to-day volatility? Tail risk, the hefty price-drops that
cause sudden big losses, bankruptcies, and crises? Or the slow drip-drip-
drip movements of prices downward, with no significant fluctuations and
no discernible tail risk, but prices that only go south? Is it the chance of a
pension not delivering on that comfortable retirement fifty years hence?
The likelihood our country will suffer a systemic crisis next year?
Each concern calls for a different concept of risk—what is risk depends
on what we care about. It is unfortunate that the easiest risk to measure,
and hence the one most widely used, is short-term, day-to-day events—
volatility or its close cousins Value-at-Risk and Expected Shortfall. These
have little or nothing to say about tail risk or crises or the solvency of
your pension fund. Astonishingly, the very financial regulations and risk
management practices that are meant to keep banks safe, protect our
pensions, and prevent crises are so often based on nothing more than
day-to-day price fluctuations.
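To make the distinction concrete, here is a minimal Python sketch with invented numbers: two return series with identical day-to-day volatility, one thin-tailed and one fat-tailed. Volatility cannot tell them apart, while tail measures such as historical-simulation Value-at-Risk and Expected Shortfall can.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical daily-return series with the same volatility:
# one Gaussian (thin tails), one Student-t with 3 degrees of
# freedom (fat tails, crash-prone), rescaled to 1% volatility.
normal_returns = rng.normal(0.0, 0.01, 100_000)
t_returns = rng.standard_t(3, size=100_000)
t_returns *= 0.01 / t_returns.std()

def var_es(returns, p=0.99):
    """Historical-simulation VaR and Expected Shortfall at level p,
    both expressed as positive loss numbers."""
    losses = np.sort(-returns)               # losses, largest last
    cutoff = int(np.ceil(p * len(losses)))
    var = losses[cutoff - 1]                 # the p-quantile loss
    es = losses[cutoff - 1:].mean()          # average loss beyond VaR
    return var, es

for name, r in [("thin tails", normal_returns), ("fat tails", t_returns)]:
    var, es = var_es(r)
    print(f"{name}: vol={r.std():.4f}  99% VaR={var:.4f}  99% ES={es:.4f}")
```

Both series report the same volatility, yet the fat-tailed one shows a markedly higher Expected Shortfall: the same reading on one risk dial, very different danger on another.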
Even after picking a concept of risk, we are left with the problem of
measurement. There are dozens of competing techniques out there that
deliver widely different measurements of the same risk, with no clear way
to discriminate among them. All purporting to be state-of-the-art, and
each with its own groupies. And even then we have measured only the
risk in a single asset, perhaps a stock, a loan, or a derivative. The next
step is even harder, the aggregation of risk across time and space. How
to go from all the micro risks to the portfolio, department, bank, and
the system, today and years and decades into the future. The more we
aggregate risk, the less accurate the result is. There is a subtle point at
work here. While it is obviously true that systemic risk is the aggregate of
all the individual micro risks, that conceptual notion does not mean we
know how to do the calculations. It is a common problem in science. You
can know everything about a human being’s physiology and biology and
know nothing about them as a person. We can’t easily aggregate risk be-
cause of the complex interactions between all the individual risks. In real
life, outside of the universe of the risk modeler, the strongest connections
between risk factors manifest themselves only in times of extreme stress.
They are simply not seen otherwise.
Why? Liquidity is the most obvious reason. Liquidity is ample most of
the time, even seemingly infinite. But it is by and large not measurable
and has the annoying tendency of evaporating when most needed, in
times of stress. It then becomes the common crisis factor that affects all assets and liabilities, exposing hidden connections we never knew existed, until it’s too late. If we measure risk in normal times, we underestimate
each asset’s risk and especially how it relates to other assets, because the
very factor that makes them strongly related—liquidity—is not visible.
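A stylized simulation illustrates the point. In the hypothetical two-regime world below, with all numbers invented, two assets are independent on calm days but hit by a common liquidity shock on stress days; risk estimated from calm days alone understates the portfolio's true risk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Invented two-regime world: on 95% of days the two assets are
# independent; on 5% (stress) days a common liquidity shock hits both.
stress = rng.random(n) < 0.05
common = rng.normal(0.0, 0.03, n)                 # shared liquidity shock
a = rng.normal(0.0, 0.01, n) + np.where(stress, common, 0.0)
b = rng.normal(0.0, 0.01, n) + np.where(stress, common, 0.0)
portfolio = 0.5 * a + 0.5 * b

calm_corr = np.corrcoef(a[~stress], b[~stress])[0, 1]   # near zero
stress_corr = np.corrcoef(a[stress], b[stress])[0, 1]   # near 0.9

# Portfolio volatility estimated from calm days only, versus realized
predicted = (0.5 * a[~stress] + 0.5 * b[~stress]).std()
realized = portfolio.std()
print(f"calm corr={calm_corr:.2f}  stress corr={stress_corr:.2f}")
print(f"calm-day estimate={predicted:.4f}  realized={realized:.4f}")
```

The correlation that matters appears only in the stress regime, so the calm-day estimate of portfolio risk comes in well below what is actually realized.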
So while it is easy to come up with a number for the aggregate risk of a
bank or a country or even the entire world, the calculations’ accuracy is
very low. I think many readers will disagree since it is standard practice in
finance to do precisely that. Yes, it is easy to come up with a number for
aggregate risk. It is not so straightforward to do it accurately.
The policy authorities should clearly express what their objectives are,
why they are regulating, and what they aim to accomplish. High-level
statements are welcome, like those made by Chairman Powell, emphasiz-
ing low, steady inflation and high employment, but such clarity is mostly
absent in the actual documents that outline policy actions.
And straying firmly into controversy, unlike so many other commenta-
tors, I think a key purpose of central banks and other financial authori-
ties is to help the financial system operate at higher risk levels than it
otherwise would do. As a general rule, more risk means more economic
growth, and by helping the system to operate at higher levels of risk than
it otherwise could safely do, we all benefit.
the privilege of sitting next to the governor over dinner. We got to discuss-
ing the topic of silos, and I asked him about how central bank policy since
the crisis in 2008 affects inequality. He told me that on a personal level he
was deeply concerned about inequality, but from his central bank’s point
of view it was irrelevant because inequality was not in its legal mandate.
Policy makers optimize locally, not globally. Researchers of financial
stability are usually no better, even though they should be unencumbered
by the silos. I have heard a lot of conference presenters claim that “the
financial system is dangerous, I have identified the most important risks,
and this is how you measure and control them. If you follow my sugges-
tion, we meet our objective.” Derisking the very part of the system they
have spent years studying.
The silos can lead to strange outcomes. The Bank of England decided
in June 2017 to tighten capital constraints on commercial banks because
it thought they were taking on too much risk. That month it also opted
to keep interest rates low to encourage banks to make more risky loans to
small- and medium-sized enterprises in order to stimulate the economy.
These policy decisions are obviously contradictory. While I have no idea
how the bank came to this point, I can think of only two reasons. Either it wanted to please two separate political interests and was cynical enough to recognize that nobody would spot the contradiction, or the decisions
were made by two distinct parts of the bank—the financial stability and
monetary policy divisions—and they didn’t coordinate with each other.
Overcoming the silo mentality requires that financial policy be done
in a more holistic way, and the only authority able to make that happen
is the government. It can mandate the necessary interagency and inter-
silo cooperation, force the various policy authorities to optimize globally
and not locally. I can hear the objections: “Government agencies should
focus on a single objective; if they have multiple mandates, one will lose
out.” “It will politicize the central banks.” “It will lead to muddled and
confusing policy-making.” “We will get inflation.” Every agency will find
a host of reasons for why this is both a horrible idea and practically im-
possible. Balderdash. It is certainly doable, and one country, Singapore,
leads the way.1 The Monetary Authority of Singapore collaborates closely
with both the Ministry of Finance on fiscal policies and the Ministry of
National Development (land supply policies). Singapore has been quite
The final principle for getting the best out of the financial sys-
tem is to embrace diversity, the most potent force of financial stability
and good investment performance. Suppose some shock hits the markets,
perhaps a virus like Covid-19 or the fight between the Reddit investors
and the short-selling hedge funds in January 2021. If I react to a shock
by buying and my buddy Ann by selling, our reactions cancel each other
out—together, we create countercyclical random noise. If, instead, we
both buy or sell, we procyclically amplify the price movements. We are
procyclical when we see and react to the world in the same way, and
countercyclical when we don’t. That is why Baron Rothschild was so prescient when he wrote two and a half centuries ago that the best time to buy property was in the middle of a civil war: buying in crises stabilizes. We need the Rothschilds, the Soroses, and the Buffetts, all the sovereign
wealth funds, someone who sees buying opportunities during turmoil.
For such individuals and entities to exist, they have to be unencumbered
and free to invest the way they see best.
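The cancellation argument can be sketched numerically. In the toy price-impact model below, purely illustrative, each day's price move is proportional to the net order flow; a thousand agents with independent views produce a small net flow, while a thousand agents acting identically move the price vastly more.

```python
import numpy as np

rng = np.random.default_rng(7)
n_agents, n_days = 1_000, 2_000

# Hypothetical linear price-impact model: each day's price move is
# proportional to the net order flow (+1 buy / -1 sell per agent).
diverse = rng.choice([-1, 1], size=(n_days, n_agents))         # independent views
uniform = np.tile(rng.choice([-1, 1], size=(n_days, 1)), (1, n_agents))

net_diverse = diverse.sum(axis=1)    # buys and sells mostly cancel
net_uniform = uniform.sum(axis=1)    # everyone on the same side

print(f"diverse: mean |net flow| = {np.abs(net_diverse).mean():.1f}")
print(f"uniform: mean |net flow| = {np.abs(net_uniform).mean():.1f}")
```

With independent views the net flow grows only like the square root of the number of agents; when everyone trades the same way, it grows one for one, which is the procyclical amplification described above.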
The enemy of financial stability and good, stable, long-term invest-
ment returns is uniformity. The more similar financial institutions are, the
higher systemic risk becomes, because they will amplify the same shocks
and inflate the same bubbles. The Millennium Bridge wobbled because
the pedestrians on the bridge acted like a troop of soldiers, not like civil-
ians. And that is what always happens in times of turmoil. Market par-
ticipants become much more uniform in outlook and action than they
usually would be.
Not many disagree. The relationship between diversity and systemic
risk is well understood. But that is theory. Practice is different, and the
incentives of the banker and the regulator push toward uniformity. There
are good and bad reasons for this. The most apparent is best practices.
Nothing wrong with that. Nobody likes to use the second best. But there
is a problem when it comes to risk. Because best practices mean using the
one state-of-the-art riskometer and risk management technique, all will
see and react to risk in the same way—uniformity, not diversity.
Increasing returns to scale also erode diversity. Banking favors the large
because the fixed costs of financial services are so huge. The bigger the
banks become, the cheaper it is to service all the complex needs of their
large clients. Competitive forces drive mergers. While sometimes bemoan-
ing the falling number of banks, the financial authorities encourage it in
practice, happy to use mergers when resolving crises and failing banks.
Financial regulations further favor uniformity. A bank complying with regulations pays two kinds of cost: variable costs, which scale with its size, and fixed costs, which do not: understanding how the regulatory apparatus works, knowing the legal environment, and the like. While the regulations are well meaning and generally useful, there is a dark side. Because the fixed cost is substantial, the bigger the bank, the cheaper it is, per unit of size, to comply—increasing returns to scale (Figure 45).
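The arithmetic behind these increasing returns to scale is simple enough to sketch; the figures below are invented for illustration. With a fixed compliance cost plus a variable cost proportional to size, the cost per unit of assets falls steadily as the bank grows.

```python
# Illustrative arithmetic with invented numbers: compliance cost has a
# fixed component, the same for every bank, plus a variable component
# proportional to balance-sheet size.
FIXED = 50.0       # e.g., $50m to build the compliance apparatus
VARIABLE = 0.001   # e.g., 0.1% of assets per year

def cost_per_unit(assets_m):
    """Total compliance cost per million dollars of assets."""
    return (FIXED + VARIABLE * assets_m) / assets_m

for assets_m in (1_000, 10_000, 100_000, 1_000_000):
    print(f"assets ${assets_m:>9,}m: cost per unit = {cost_per_unit(assets_m):.4f}")
```

The per-unit cost falls toward the variable rate as the bank grows, so the fixed cost weighs most heavily on the smallest institutions.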
The problem of antidiversity regulations is especially pernicious in Eu-
rope because the financial authorities there have to deal with transna-
tional regulations of national banks and the politics of managing unruly
countries on a European level. That means they need to be seen as providing a level playing field, which is why Basel III applies to all banks instead of only to the largest, which would be much more appropriate and what the rest of the world wanted.
Figure 45. The fixed and variable cost of regulations. Credit: Lukas Bischoff/IllustrationX.
The result is uniform rules that treat all banks,
large and small, in the same way. Since we need complex rules for the
largest, the cost of complying favors the large.
Bank-based financial systems also favor uniformity. In the United States
only about one-third of lending to companies is done by banks; the rest
happens via the bond markets and various other nonbank entities that
intermediate funds from savers to companies. In the rest of the world
about 90 percent of credit comes from banks, over 80 percent in the
United Kingdom, 92 percent in Germany, and 96 percent in Spain. And
even then the typical bank in the United States is much smaller relative to
the size of the economy than banks in other countries. There are about
4,400 banks in the United States and only 200 in Japan, even though the
Japanese economy is about half the size of the U.S. economy. The large
number of banks in the United States plus their small share of overall
credit make the U.S. way of funding companies much more resilient, and
much easier for the country to whip its banks into shape after a crisis, as
it did following 2008. Bank-based financial systems provide less financing
for innovative and risky companies, the cost of finance is high, and it is
very hard to regulate the banks without imposing significant economic
costs. The stillborn European Capital Markets Union was meant to help,
but the power of the incumbent interests was just too great.
The financial authorities face a thorny way forward. They have to juggle
a lot of issues and are subject to ferocious lobbying. But they should em-
brace diversity. Actively encourage new financial institutions, especially
those whose business models are different from everybody else’s. Regu-
late financial institutions for consumer protection, but don’t control all
the micro risks. Unfortunately, the financial authorities are not keen on
diversity. Their rules actively get in the way of start-ups and new business models. The start-ups have to comply with regulations suited for the largest banks and, before getting licenses to practice, must set up the myriad of functions that make up the modern financial institution: board, capital, management, IT systems, and compliance. A slow,
cumbersome, and very expensive process. And at the end of the day there
is no guarantee they will get a license. The reason the regulatory pro-
cess is so antidiversity and anti-start-up is that the regulators are worried
about making a mistake; they are like the Chinese air traffic controllers
I discussed earlier. Focused on risk to themselves, not on the benefits to
society. That risk aversion means the licensing process favors uniformity
and the incumbents.
What is lacking is risk culture. The financial authorities could do well
by learning from their counterparts in other fields, like aviation. The airline industry is regulated with a view to simultaneously maximizing the benefit to society and keeping risk under control, and we see the outcome.
The cost of flying is steadily falling while safety gets better every year. The
central banks and regulators need such a risk culture.
The financial authorities should be made to explain how the job they
are doing benefits the rest of us. They should tell us what their objective is, outline how they are achieving it, and explain how it all fits in with what the other agencies are doing.
Diversify the regulators. If we put a single regulator in charge of every-
thing—the super regulator so common today—we end up with a gov-
ernment agency that prefers uniformity, one that shares the goals of the
incumbent interests and loathes what is different. We need competition
between regulators, so we get agencies that both regulate and defend
their part of the industry, protecting heterogeneity along the way.
While there are no silver bullets when it comes to the financial system,
some ways of controlling it are better than others. The worst thing is to
fight risk, the preferred option of the financial authorities. The financial
system is like the Hydra, and though the authorities can cut off as many
heads as they want, they will always grow back. It is much better not to
try the impossible, instead just taking advantage of the inherent forces of
stability and the five principles for how to deal with the system.
The first is to recognize that the real danger is endogenous risk, not the
type of risk we usually measure. There are a few fundamental reasons why
the financial system does not do what we want it to do, and that is where
our attention should be. It is all too easy to focus on the triggers of crises
and bad investment performance, as they are there for all to see while the
fundamental causes are hidden.
The second is to be aware of riskometers and false resilience. It is easy
to convince oneself all is okay, that we are fully hedged against the worst
tail events and comfortably safe since the financial authorities have sys-
2. Systemic Risk
1. Schnabel and Shin, “Liquidity and Contagion: The Crisis of 1763,” 929–68, and
Quinn and Roberds, “Responding to a Shadow Banking Crisis: The Lessons of 1763,”
1149–76.
2. International Monetary Fund, Bank for International Settlements, and Financial
Stability Board, Guidance to Assess the Systemic Importance of Financial Institutions,
Markets and Instruments: Initial Considerations, 2.
3. From Hans Christian Andersen’s nineteenth-century tale “The Emperor’s New
Clothes.”
4. Carville, television interview.
5. Black-Scholes refers to the statistical techniques for pricing options developed by Fischer Black and Myron Scholes in 1973; Scholes received the Nobel Prize for them in 1997, Black having died in 1995.
6. Black, “Hedging, Speculation, and Systemic Risk,” 6–8.
7. As quoted in Hoyt, The Cyclopedia of Practical Quotations.
8. Holder, Senate Judiciary Committee testimony.
3. Groundhog Day
1. Capra, It’s a Wonderful Life.
2. The primary source of historical banking crises is Reinhart and Rogoff, This Time Is Different. I combine that with the IMF crisis database, Laeven and Valencia, “Systemic Banking Crises Revisited.”
3. Prince, interview in the Financial Times.
4. Martin, “Address before the New York Group of the Investment Bankers Asso-
ciation of America.”
5. As quoted in Day, A Wonderful Life: S&L HELL: The People and the Politics be-
hind the $1 Trillion Savings and Loan Scandal.
6. Ideas Matter
1. Keynes, The General Theory of Interest, Employment and Money.
2. Mises, The Ultimate Foundation of Economic Science.
3. Hayek, “The Use of Knowledge in Society.”
4. See Rosenzweig, “Robert S. McNamara and the Evolution of Modern
Management.”
5. Yankelovitch, Corporate Priorities: A Continuing Study of the New Demands on
Business.
6. Rumsfeld, U.S. Department of Defense (DoD) news briefing.
7. Josiah Stamp is recounting a story from Harold Cox, who quotes an anonymous
English judge.
8. Goodhart, “Risk, Uncertainty and Financial Stability”; Shackle, Keynesian
Kaleidics.
9. Buffett, “Berkshire Hathaway 2011 Letter to Shareholders.”
10. Minsky, “The Financial Instability Hypothesis: An Interpretation of Keynes and
an Alternative to ‘Standard’ Theory.”
11. Yellen, press conference.
12. Goodhart, “Public Lecture at the Reserve Bank of Australia.”
13. Lucas, “Econometric Policy Evaluation: A Critique,” in The Phillips Curve and
Labor Market.
7. Endogenous Risk
1. Keynes, The General Theory of Interest, Employment and Money.
2. Crockett, Marrying the Micro- and Macro-Prudential Dimensions of Financial
Stability.
3. The average return in excess of the risk-free rate divided by volatility.
4. For a lucid description of events, see Lowenstein, When Genius Failed: The Rise
and Fall of Long-Term Capital Management.
5. Ibid.
6. Ibid.
Adams, Douglas. The Hitchhiker’s Guide to the Galaxy. London: Pan Books, 1978.
Admati, Anat, and Martin Hellwig. The Bankers’ New Clothes: What’s Wrong with
Banking and What to Do about It. Princeton: Princeton University Press, 2014.
Akerlof, George A. “What They Were Thinking Then: The Consequences for Macro-
economics during the Past 60 Years.” Journal of Economic Perspectives 33, no. 4
(2019): 171–86.
Aliber, Robert Z., and Charles P. Kindleberger. Manias, Panics, and Crashes: A His-
tory of Financial Crises. New York: Palgrave Macmillan, 2015.
Bagehot, Walter. Lombard Street: A Description of the Money Market. London: H. S.
King, 1873.
Bank for International Settlements. Report on the Regulatory Consistency of Risk-
Weighted Assets for Market Risk. Basel: Bank for International Settlements, 2013.
Basel Committee on Banking Supervision. Amendment to the Capital Accord to Incor-
porate Market Risks. Basel: Basel Committee on Banking Supervision, 1996.
———. Fundamental Review of the Trading Book: A Revised Market Risk Framework.
Basel: Basel Committee on Banking Supervision, 2013.
BBC. “Did the Bank Wreck My Business?” Panorama, 2014.
Bernstein, Peter L. Against the Gods: The Remarkable Story of Risk. New York: John
Wiley, 1996.
Bevilacqua, Mattia, Lukas Brandl-Cheng, Jón Daníelsson, Lerby Ergun, Andreas
Uthemann, and Jean-Pierre Zigrand. “The Calming of Short-Term Market Fears
and Its Long-Term Consequences: The Central Banks’ Dilemma.” SSRN Elec-
tronic Journal, 2021.
Bitner, Richard. Confessions of a Subprime Lender: An Insider’s Tale of Greed, Fraud,
and Ignorance. New York: John Wiley, 2008.
Black, Fischer. “Hedging, Speculation, and Systemic Risk.” Journal of Derivatives 2
(1995): 6–8.
———, and Myron Scholes. “The Valuation of Option Contracts and a Test of Market Efficiency.” Journal of Finance 27 (1972): 399–418.
Blanchard, Olivier. “(Nearly) Nothing to Fear but Fear Itself.” The Economist, 2009.
Borio, Claudio. “The Macroprudential Approach to Regulation and Supervision.”
VoxEU.org, 2009.
Box, George. “Science and Statistics.” Journal of the American Statistical Association
(1976): 791–99.
Buffett, Warren. “Berkshire Hathaway 2011 Letter to Shareholders.” 2011.
———. “Why Stocks Beat Gold and Bonds.” Fortune, 2012.
Calomiris, Charles W., and Stephen H. Haber. Fragile by Design: The Political Origins
of Banking Crises and Scarce Credit. Princeton: Princeton University Press, 2014.
Capra, Frank, dir. It’s a Wonderful Life. 1946.
Carney, Mark. “Ten Years On: Fixing the Fault Lines of the Global Financial Crisis.”
Banque de France Financial Stability Review, no. 21 (2017).
Carville, James. Television interview. 1992.
Chwieroth, Jeffrey M., and Jón Daníelsson. “Political Challenges of the Macropru-
dential Agenda.” VoxEU.org, 2013.
Crockett, Andrew. Marrying the Micro- and Macro-Prudential Dimensions of Finan-
cial Stability. Basel: BIS, 2000.
Daníelsson, Jón. “The Emperor Has No Clothes: Limits to Risk Modelling.” Journal
of Banking and Finance 26 (2002): 1273–96.
———. “The Myth of the Riskometer.” VoxEU.org, 2009.
———. Financial Risk Forecasting. New York: John Wiley, 2011.
———. “Risk and Crises.” VoxEU.org, 2011.
———. Global Financial Systems: Stability and Risk. London: Pearson, 2013.
———. “The New Market-Risk Regulations.” VoxEU.org, 2013.
———.“Towards a More Procyclical Financial System.” VoxEU.org, 2013.
———. “What the Swiss FX Shock Says about Risk Models.” VoxEU.org, 2015.
Daníelsson, Jón, Paul Embrechts, Charles A. E. Goodhart, Con Keating, Felix Muen-
nich, Olivier Renault, and Hyun Song Shin. An Academic Response to Basel II.
London: LSE Financial Markets Group, 2001.
Daníelsson, Jón, Kevin James, Marcela Valenzuela, and Ilknur Zer. “Model Risk of
Risk Models.” Journal of Financial Stability 23 (2016).
———. “Can We Prove a Bank Guilty of Creating Systemic Risk? A Minority Re-
port.” Journal of Money Credit and Banking 48 (2017).
Daníelsson, Jón, Frank de Jong, Roger Laeven, Christian Laux, Enrico Perotti, and
Mario Wuthrich. “A Prudential Regulatory Issue at the Heart of Solvency II.”
VoxEU.org, 2011.
Daníelsson, Jón, and Con Keating. “Valuing Insurers’ Liabilities during Crises: What
EU Policymakers Should Not Do.” VoxEU.org, 2011.
Daníelsson, Jón, Roger Laeven, Enrico Perotti, Mario Wuthrich, Rym Ayadi, and
Antoon Pelsser. “Countercyclical Regulation in Solvency II: Merits and Flaws.”
VoxEU.org, 2012.
Daníelsson, Jón, and Robert Macrae. “The Appropriate Use of Risk Models: Part I.”
VoxEU.org, 2011.
———. “The Appropriate Use of Risk Models: Part II.” VoxEU.org, 2011.
———. “The Fatal Flaw in Macropru: It Ignores Political Risk.” VoxEU.org, 2016.
———. “The Dissonance of the Short and Long Term.” VoxEU.org, 2019.
Daníelsson, Jón, Robert Macrae, Dimitri Tsomocos, and Jean-Pierre Zigrand. “Why
Macropru Can End Up Being Procyclical.” VoxEU.org, 2016.
Daníelsson, Jón, Robert Macrae, and Andreas Uthemann. “Artificial Intelligence and
Systemic Risk.” Journal of Banking and Finance (2021).
Daníelsson, Jón, Robert Macrae, Dimitri Vayanos, and Jean-Pierre Zigrand. “The
Coronavirus Crisis Is No 2008.” VoxEU.org, 2020.
Daníelsson, Jón, and Hyun Song Shin. “Endogenous Risk.” In Modern Risk Manage-
ment: A History. London: Risk Books, 2003.
Daníelsson, Jón, Hyun Song Shin, and Jean-Pierre Zigrand. “Endogenous Extreme
Events and the Dual Role of Prices.” Annual Review of Financial Economics 4 (2012).
Daníelsson, Jón, Marcela Valenzuela, and Ilknur Zer. “Learning from History: Vola-
tility and Financial Crises.” Review of Financial Studies (2018).
Daníelsson, Jón, and Chen Zhou. “Why Risk Is So Hard to Measure.” Amsterdam:
De Nederlandsche Bank NV, 2016.
Daníelsson, Jón, and Jean-Pierre Zigrand. “Are Asset Managers Systemically Impor-
tant?” VoxEU.org, 2015.
Day, Kathleen. A Wonderful Life: S&L Hell: The People and the Politics behind the
$1 Trillion Savings and Loan Scandal. New York: W. W. Norton, 1993.
Diamond, Douglas W., and Philip H. Dybvig. “Bank Runs, Deposit Insurance, and
Liquidity.” Journal of Political Economy 91 (1983): 401–19.
Dunbar, Nicholas. “What JP Morgan’s Release of VaR Has in Common with Sex
and Computer Viruses.” 2012. http://www.nickdunbar.net/articles/what-jp
-morgans-release-of-var-has-in-common-with-sex-and-computer-viruses/.
———. “Value-at-Risk Inventor Longerstaey on the Perils of Oversimplification.”
Bloomberg Briefs, 2012.
Elliott, Geoffrey. Overend & Gurney, a Financial Scandal in Victorian London. Lon-
don: Methuen, 2006.
El-Naggar, Mona. “In Lieu of Money, Toyota Donates Efficiency to New York Char-
ity.” New York Times, 2013.
Engle, Robert. “Autoregressive Conditional Heteroscedasticity with Estimates of the
Variance of United Kingdom Inflation.” Econometrica 50 (1982): 987–1007.
Engels, Friedrich. Socialism: Utopian and Scientific. London: Swan Sonnenschein, 1880.
European Banking Authority. “EBA Interim Report on the Consistency of Risk-
Weighted Assets in the Banking Book.” 2013.
Fama, Eugene. “Mandelbrot and the Stable Paretian Hypothesis.” Journal of Business
36, no. 4 (1963): 420–29.
———. “Are Markets Efficient?” Posted at https://review.chicagobooth.edu/
economics/2016/video/are-markets-efficient, 2016.
Financial Stability Board. Holistic Review of the March Market Turmoil. Financial Sta-
bility Board, 2020.
Fitzpatrick, Dan. “J.P. Morgan to SEC: That Model Change Doesn’t Count as
‘Change.’” Wall Street Journal, 2013.
Flitter, Emily. “Emails Show JP Morgan Tried to Flout Basel Rules—U.S. Senate.”
Reuters, 2013.
Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United
States: 1867–1960. Princeton: Princeton University Press, 1963.
Gissurarson, Hannes. Twenty-Four Conservative-Liberal Thinkers, Part II. Brussels:
New Direction, 2021.
Goodhart, Charles A. E. “Public Lecture at the Reserve Bank of Australia.” 1974.
———. Risk, Uncertainty and Financial Stability. London: Financial Markets Group,
London School of Economics, 2008.
———. The Basel Committee on Banking Supervision: A History of the Early Years
1974–1997. Cambridge: Cambridge University Press, 2011.
Greenspan, Alan. Discussion at Symposium: Maintaining Financial Stability in a
Global Economy, at the Federal Reserve Bank of Kansas City (1997): 54.
Haldane, Andy. “Managing Global Finance as a System. Maxwell Fry Annual Global
Finance Lecture,” Birmingham University, 2014.
Hayek, Friedrich von. “The Use of Knowledge in Society.” American Economic Re-
view 35, no. 4 (1945): 519–30.
Henry, David, and Lauren Tara LaCapra. “JP Morgan and Other Banks Tinker with
Risk Models.” Reuters, 2013.
Hernández de Cos, Pablo. “Basel III Implementation in the European Union.”
BCBS Speech, 2021.
Holder, Eric. US Senate Judiciary Committee testimony. 2013.
Honohan, Patrick, and Daniela Klingebiel. “The Fiscal Cost Implications of an Accom-
modating Approach to Banking Crises.” Journal of Banking and Finance 26 (2003).
House of Commons Library. “Financial Services: Contribution to the U.K. Econ-
omy.” Briefing Paper Number 6193, 2017.
Hoyt, Jehiel Keeler. The Cyclopedia of Practical Quotations. London: Funk & Wag-
nalls, 1907.
International Monetary Fund, Bank for International Settlements, and Financial Sta-
bility Board. Report to G20 Finance Ministers and Governors. Guidance to Assess
the Systemic Importance of Financial Institutions, Markets and Instruments: Initial
Considerations (2009): 2.
Jansen, Dennis, and Casper G. de Vries. “On the Frequency of Large Stock Returns:
Putting Booms and Busts into Perspective.” Review of Economics and Statistics 73 (1991): 18–24.
Johnson, Rian, dir. Star Wars: Episode VIII—The Last Jedi. 2017.
Kahn, Jeremy. “To Get Ready for Robot Driving, Some Want to Reprogram Pedes-
trians.” Bloomberg, 2018.
Kaku, Michio. The Future of Quantum Computing. At https://www.youtube.com/
watch?v=YgFVzOksm4o (2011).
Smith, Adam. An Inquiry into the Nature and Causes of the Wealth of Nations. Lon-
don: W. Strahan, T. Cadell, 1776.
Soto, Hernando de. The Mystery of Capital. New York: Basic Books, 2000.
Syed, Matthew. Rebel Ideas: The Power of Diverse Thinking. New York: Flatiron Books,
2019.
Triana, Pablo. The Number That Killed Us. London: John Wiley, 2011.
Tukey, John W. “The Future of Data Analysis.” Annals of Mathematical Statistics 33
(1962): 1–67.
UBS. Shareholder Report on UBS’s Write-Downs. UBS, 2008.
Viniar, David. “Goldman Pays the Price of Being Big.” Financial Times interview,
2007.
Whitehouse, Kaja. “One ‘Quant’ Sees Shakeout for the Ages—‘10,000 Years.’ ” Wall
Street Journal, 2007.
Yankelovich, David. Corporate Priorities: A Continuing Study of the New Demands on
Business. D. Yankelovich Inc., 1972.
Yellen, Janet. Press conference. Federal Reserve Board, 2014.
clearing, 11
climate change, 85
Clinton, Bill, 15
Coca-Cola, 154
cognitive failure, 227–32
collateralized debt obligations (CDOs), 70, 133, 134, 139–42, 193, 212, 218
conduits, 70–71
Confessions of a Subprime Lender (Bitner), 142
confidence intervals, 79
confirmation bias, 2
convergence trading, 123
corporate finance, 237, 251
countercyclical capital buffer, 60
Covid-19 pandemic, 8–12, 113, 114, 179, 183, 193, 232; bailouts during, 47, 162, 234, 237, 241; banking regulation and, 29, 174–75; Chinese response to, 14; financial crisis of 2008 compared with, 21, 235; financial markets during, 153, 234; Icelandic economy buffeted by, 156; macroprudential policy during, 181, 187; modeling of, 91–92, 100; political response to, 15; safety vs. growth epitomized by, 21
Credit-Anstalt, 39
credit default swaps (CDSs), 141–42, 184–85
credit rating, 44, 67, 140
Credit Suisse, 144
crisis, crises: causes of, 31–34; corruption linked to, 33–34; costs of, 45–47; frequency of, 9, 32, 33, 50; IMF database of, 9, 46; textbook example of, 10; types of, 8
Crockett, Andrew, 121
cryptocurrency, 223, 237–38
Cuban Missile Crisis (1962), 2
Cyprus, 36, 42–43, 45, 66

data snooping, 88
de Cos, Pablo Hernández, 234, 236
deflation, 31
Deloitte & Touche, 63
Delta Works, 85
de Neufville, Leendert Pieter, 6–7, 10, 241
deposit insurance, 37, 38, 39, 41–42
Depository Trust and Clearing Corporation (DTCC), 115
Deutsche Bank, 176
devaluation, 106
developing countries, 17–18
de Vries, Casper, 85
Diamond, Douglas, 39
dinosaur extinction, 4, 113
diversity, in financial system: forces opposing, 190–91, 249–50, 251; measurement of, 191; stability linked to, 5, 188, 190, 193, 249, 253
divorce-and-margarine fallacy, 89–90
Dominican Republic, 34
Dondelinger, Albert, 64
dot-com bubble (1990s), 154
Dunford, Joseph, 181
Dybvig, Philip, 39
dynamic replication, 119

“Early Deposit Banking” (Kohn), 56
“Econometric Policy Evaluation” (Lucas), 107
econometrics, 96
elasticity of demand, 118
Engels, Friedrich, 203
Engle, Robert, 86–87, 195, 197
Enron, 225
Erdogan, Recep, 182
ergodicity, 95, 101, 145, 195
Ernst & Young, 63, 133
EURISKO (artificial intelligence system), 215
Eurodollars, 69
European Banking Authority, 199, 200, 202–3
European Capital Markets Union, 251