Commercial Banking Risk Management
Regulation in the Wake of the Financial Crisis
Edited by Weidong Tian
University of North Carolina at Charlotte, Charlotte, North Carolina, USA
One of the most important lessons from the financial crisis of 2007–2008
is that the regulatory supervision of financial institutions, in particular
commercial banks, needs a major overhaul. Many regulatory changes have
been implemented in financial markets all over the world. For instance,
the Dodd-Frank Act was signed into federal law in July 2010; the
Basel Committee has moved to strengthen bank regulation with Basel
III since 2009; the Financial Stability Board, created after the crisis, has
imposed frameworks for the identification of systemic risk in the financial
sector across the world; and the Volcker Rule has been formally adopted by
financial regulators to curb risk-taking by US commercial banks. Financial
institutions have to manage all kinds of risk under stringent regulatory
pressure and have entered a virtually new era of risk management.
This book is designed to provide comprehensive coverage of the impor-
tant modern commercial banking risk management topics under the new
regulatory requirements, including market risk, counterparty credit risk,
liquidity risk, operational risk, fair lending risk, model risk, stress testing,
and the Comprehensive Capital Analysis and Review (CCAR), all from a practical
perspective. It covers the major components of enterprise risk management
and a modern capital requirement framework. Each chapter is written by
an authority on the relevant subject. All contributors have extensive indus-
try experience and are actively engaged at the largest commercial banks,
major consulting firms, auditing firms, regulatory agencies, and universi-
ties; many of them also hold PhDs and have written monographs and
articles on related topics.
The book falls into eight parts. In Part 1, two chapters discuss regu-
latory capital and market risk. Specifically, chapter "Regulatory Capital
Requirement in Basel III" provides a comprehensive explanation of the
regulatory capital requirement in Basel III for commercial banks and global
systemically important banks; it also covers the current status of Basel III
and the motivation behind it. Chapter "Market Risk Modeling Framework Under
Basel" explains the market risk modeling framework under Basel 2.5 and
Basel III. The key ingredients are explained and advanced risk measures
for market risk management are introduced in this chapter. The latest
capital requirement for market risk is also briefly documented.
Part 2 focuses on credit risk management, in particular counter-
party credit risk management. Chapter "IMM Approach for Managing
Counterparty Credit Risk" first describes the methodologies that have
been recognized as standard approaches to tackling counterparty credit
risk, and then uses case studies to show how these methodologies are cur-
rently used for measuring and mitigating counterparty risk at major com-
mercial banks. In the wake of the 2007–2008 financial crisis, one recent
challenge in practice is to implement a series of valuation adjustments in
the credit market. For this purpose, chapter "XVA in the Wake of the
Financial Crisis" presents major insights on several versions of valuation
adjustments for credit risk (the XVAs), including credit valuation adjustment
(CVA), debt valuation adjustment (DVA), funding valuation adjust-
ment (FVA), capital valuation adjustment (KVA), and margin valua-
tion adjustment (MVA).
Part 3 contains three chapters, each discussing a highly significant
area of risk that is a crucial component of the modern regulatory risk
management framework. Chapter "Liquidity Risk" documents modern
liquidity risk management in detail; it introduces current approaches and
presents some forward-looking perspectives on liquidity risk. After the
2007–2008 financial crisis, the significant role of operational risk has been
recognized and operational risk management has emerged as an essential
factor in capital stress testing. A modern approach to operational risk
management is demonstrated in chapter "Operational Risk Management",
in which both the methodology and several examples of modern operational
risk management are discussed. Chapter "Fair Lending Monitoring Models"
addresses another key risk management area in commercial banking: fair
lending risk. This chapter underscores some of the quantitative challenges
in detecting and measuring fair lending risk and presents a modeling
approach to it.
Hong Yan. Special thanks go to Junya Jiang, Shuangshuang Ji, and Katerina
Ivanov for their excellent editorial support.
I owe a debt of gratitude to the staff at Palgrave Macmillan for edi-
torial support. Editor Sarah Lawrence and Editorial Assistant Allison
Neuburger deserve my sincerest thanks for their encouragement, sugges-
tions, patience, and other assistance, which have brought this project to
completion.
Most of all, I express the deepest gratitude to my wife, Maggie, and our
daughter, Michele, for their love and patience.
Contributors
His current responsibilities include VaR models for volatility skew, specific
risk models, and CVA models. His academic background consists of a BS in
mathematics from MIT and a PhD in mathematics from Princeton.
Douglas T. Gardner is the Head of Risk Independent Review and
Control, Americas, at BNP Paribas, and the Head of Model Risk
Management at BancWest. He leads the development and implementation
of the model risk management program at these institutions, which
includes overseeing the validation of a wide variety of models including
those used for enterprise-wide stress testing. He previously led the model
risk management function at Wells Fargo and was Director of Financial
Engineering at Algorithmics, where he led a team responsible for the
development of models used for market and counterparty risk manage-
ment. Douglas holds a PhD in Operations Research from the University
of Toronto, and was a post-doctoral fellow at the Schulich School of
Business, York University.
Jeffrey R. Gerlach is Assistant Vice President in the Quantitative
Supervision & Research (QSR) Group of the Federal Reserve Bank of
Richmond. Prior to joining the Richmond Fed as a Senior Financial
Economist in 2011, Jeff was a professor at SKK Graduate School of
Business in Seoul, South Korea, and the College of William & Mary, and
an International Faculty Fellow at MIT. He worked as a Foreign Service
Officer for the US Department of State before earning a PhD at Indiana
University in 2001.
Larry Li is an Executive Director at JP Morgan Chase covering model
risk globally across a wide range of business lines, including the corporate
and investment bank and asset management. He has around twenty years
of quantitative modeling and risk management experience, covering the
gamut of modeling activities from development to validation for both
valuation models and risk models. Larry is also an expert in market risk,
credit risk, and operational risk for the banking and asset management
industries. He has previously worked for a range of leading financial firms,
such as Ernst & Young, Ospraie, Deutsche Bank, and Constellation
Energy. Larry has a PhD in finance and a master’s degree in economics
from the University of Toronto. He has also held the GARP Financial Risk
Manager certification since 2000.
Kevin D. Oden is an executive vice president and head of Operational
Risk and Compliance within Corporate Risk. In his role, he manages
Regulatory Capital Requirement in Basel III
Weidong Tian
Introduction
The major change from Basel II (BCBS, "International Convergence
of Capital Measurement and Capital Standards: A Revised Framework –
Comprehensive Version", June 2006; "Enhancements to the Basel II
framework", July 2009; "Revisions to the Basel II market risk framework",
July 2009) to Basel III (BCBS, "Basel III: A global regulatory frame-
work for more resilient banks and banking systems", December 2010 (rev.
June 2011); BCBS, "Basel III: International framework for liquidity risk
measurement, standards and monitoring") is, from the perspective of risk
management, a shift from a risk-sensitive to a capital-intensive approach.1 By
risk sensitive we mean that each type of risk—market risk, credit risk, and
operational risk—is treated separately. These three types of risk are the
three components of Pillar 1 of Basel II on regulatory capital.2 By con-
trast, a capital-intensive perspective leads to a more fundamental issue,
that of capital, and enforces stringent capital requirements to withstand
severe economic and market conditions. This capital concept is extended to
total loss-absorbing capacity (TLAC) in the Financial Stability Board
(FSB) report of November 2014, "Adequacy of Loss-absorbing Capacity
of Global Systemically Important Banks in Resolution".
W. Tian (*)
University of North Carolina at Charlotte, Charlotte, NC, USA
e-mail: [email protected]
• What is Capital?
• Why is Capital important for a bank?
• Capital Requirement in Basel III.
• Capital Buffers and Capital Adequacy Framework in Basel III.
• Capital as Total Loss-Absorbing Capacity and Global Systemically
Important Banks (G-SIBs) Surcharge.
I start with the concept of capital for a bank and address the other ques-
tions in the remainder of this chapter.
Roughly speaking, capital is the portion of a bank's assets that is not
legally required to be repaid to anyone, or that has to be repaid only
very far in the future. By this broad definition, capital has the low-
est bankruptcy priority, the least obligation to be repaid, and the greatest
capacity to absorb losses. Common equity is obviously the best capital. Besides
common equity, retained earnings and some subordinated debts with
Among these key changes, this chapter focuses on the capital and capital
requirement (1) and briefly discusses (3). Chapters "Market Risk Modeling
Framework Under Basel" and "Operational Risk Management" cover the rel-
evant detailed materials in this area.3 The risk coverage in Basel III is dis-
cussed in detail in chapters "Market Risk Modeling Framework Under
Basel" and "XVA in the Wake of the Financial Crisis". Finally, chapter
"Liquidity Risk" discusses liquidity risk management.4
Tier 1 Capital
Specifically, Tier 1 capital is either common equity Tier 1 capital (CET1)
or additional Tier 1 capital.
main points here. (1) It is entitled to a claim on the residual assets that
is proportional to its share of issued capital; (2) it must be the most
subordinated claim in the liquidation of the bank; (3) its principal is perpetual
and never repaid outside of liquidation; and (4) the bank does nothing to cre-
ate an expectation at issuance that the instrument will be bought back,
redeemed, or cancelled. Lastly, distributions are paid out of distributable
items, are never obligated, and are paid only after all legal and
contractual obligations to all more senior capital instruments are met;
these instruments are loss-absorbing on a going-concern basis, and the
paid-in amount is recognized as capital (but not as a liability) for determin-
ing balance sheet insolvency.
However, banks must determine common equity Tier 1 capital after
regulatory deductions and adjustments, including:
Tier 2 Capital
Tier 2 capital is gone-concern capital, and its criteria have been revised
through several versions (see BCBS, "Proposal to ensure the loss absor-
bency of regulatory capital at the point of non-viability", August 2010).
Specifically, Tier 2 capital consists of instruments issued by the bank, or by
consolidated subsidiaries of the bank and held by third parties, that meet the
criteria for inclusion in Tier 2 capital, or the stock surplus (share premium)
resulting from the issue of instruments included in Tier 2 capital.
Since the objective of Tier 2 capital is to provide loss absorption on a
gone-concern basis, the following criteria, which an instrument must meet
or exceed, are stated precisely in BCBS, June 2011:
The instrument must be issued and paid in, and subordinated to depositors
and general creditors of the bank. Maturity is at least five years; recog-
nition in regulatory capital in the remaining five years before maturity is
amortized on a straight-line basis; and there are no step-ups or other
incentives to redeem. The instrument may be called after a minimum of five years
with supervisory approval, but the bank must not create an expectation that the
call will be exercised. Moreover, the bank must not exercise the call option
unless (1) it demonstrates that its capital position remains well above the mini-
mum capital requirement after the call option is exercised, or (2) it
replaces the called instrument with capital of the same or better quality and
the replacement is done at conditions that are sustainable for the income
capacity of the bank. The dividend/coupon payment must not be credit
sensitive. Furthermore, the investor has no option to accelerate
the repayment of future scheduled payments (either coupon/dividend or
principal) except in bankruptcy and liquidation.
Moreover, two rules govern the treatment of general provisions in a bank's
Tier 2 capital. First, when the bank uses the standardized approach for credit
risk (in Basel II; see chapter "IMM Approach for Managing Counterparty
Credit Risk" for a modern approach to credit risk), provisions or loan-loss
reserves held against future, presently unidentified losses qualify for inclusion
in Tier 2, while provisions ascribed to the identified deterioration of particular
assets or liabilities are excluded. However, the general provisions/general
loan-loss reserves eligible for inclusion in Tier 2 are limited to a maximum
of 1.25% of credit risk-weighted assets calculated under the standardized
approach. Second, for a bank under the internal ratings-based (IRB) approach,
the excess of total eligible provisions over the total expected loss amount may be
recognized in Tier 2 capital up to a maximum of 0.6% of credit risk-weighted
assets calculated under the IRB approach.
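These two caps reduce to a simple calculation. The following minimal Python sketch, with purely hypothetical inputs, illustrates the limits described above (1.25% of standardized-approach credit RWA for general provisions; 0.6% of IRB credit RWA for the provision excess over expected loss).

```python
def eligible_general_provisions(provisions, credit_rwa_sa):
    """General provisions eligible for Tier 2 under the standardized approach,
    capped at 1.25% of credit RWA (as described above)."""
    return min(provisions, 0.0125 * credit_rwa_sa)

def eligible_irb_excess(eligible_provisions, expected_loss, credit_rwa_irb):
    """Excess of eligible provisions over expected loss recognizable in Tier 2
    under the IRB approach, capped at 0.6% of credit RWA."""
    return min(max(eligible_provisions - expected_loss, 0.0), 0.006 * credit_rwa_irb)

# Hypothetical figures (e.g. in $bn)
print(eligible_general_provisions(provisions=15.0, credit_rwa_sa=1_000.0))                   # 12.5
print(eligible_irb_excess(eligible_provisions=9.0, expected_loss=6.0, credit_rwa_irb=800.0))  # 3.0
```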
Risk-Weighted Assets
By definition, risk-weighted assets are a bank's assets weighted according to
their risk. Currently, market risk, credit risk, and operational risk compose
Pillar 1 of regulatory capital. The idea of assigning different risk weights
to different kinds of assets was introduced in Basel I (BCBS, 1988, the Basel
Accord) and was intended to provide a straightforward and robust approach as a
global risk management standard. It also allows off-balance-sheet
exposures to be captured within the risk-weighted approach.
RWA is the sum of the following items:
(A) Credit RWAstandardized: the risk-weighted assets for credit risk
determined by the standardized approach in Basel II. Under this
approach, assessments from qualifying external rating agencies
are used to define the risk weights for exposures such as (1) claims on sovereigns
and central banks; (2) claims on non-central government public
sector entities; (3) claims on multilateral development banks; (4)
claims on banks and securities firms; and (5) claims on corporates.
It should be noted that on-balance-sheet exposures under the
standardized approach are normally measured by their book value.
(B) Credit RWAIRB: the risk-weighted assets for credit risk
determined by the internal ratings-based (IRB) approach in Basel
II. Under the IRB approach, the risk weights are a function of four
variables and the type of exposure (corporate, retail,
small- to medium-sized enterprise, etc.). The four variables are
the probability of default (PD), the loss given default (LGD), the
exposure at default (EAD), and the effective maturity (M).
The two IRB approaches differ in who estimates these variables. (1) In the
foundation internal ratings-based (FIRB) approach, PD is determined by the bank while
the other variables are provided by regulators. (2) In the advanced internal
ratings-based (AIRB) approach, banks determine all the variables.
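To give a concrete sense of how the four variables map into risk-weighted assets, the sketch below transcribes the corporate-exposure risk-weight function published in the Basel II framework (asset correlation, maturity adjustment, the 99.9% capital requirement K, and RWA = 12.5 × K × EAD). It is offered as an illustrative transcription only; firm-size and other asset-class adjustments are omitted.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def corporate_irb_rwa(pd, lgd, ead, maturity):
    """Basel II corporate IRB risk-weighted assets (illustrative; no SME adjustment)."""
    # Asset correlation decreases with PD
    w = (1 - exp(-50 * pd)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # Maturity adjustment
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    # Capital requirement K at the 99.9% confidence level (unexpected loss only)
    k = (lgd * norm.cdf(sqrt(1 / (1 - r)) * norm.ppf(pd)
                        + sqrt(r / (1 - r)) * norm.ppf(0.999))
         - pd * lgd)
    k *= (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    return 12.5 * k * ead

# PD = 1%, LGD = 45%, M = 2.5 years gives a risk weight of roughly 92%
print(f"RWA: {corporate_irb_rwa(pd=0.01, lgd=0.45, ead=1_000_000, maturity=2.5):,.0f}")
```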
Leverage Ratios
As a complement to the risk-based capital requirement, a leverage ratio is also
introduced to restrict the build-up of leverage in the banking sector and
avoid a destabilizing deleveraging process. It is a simple, non-risk-based
"backstop" measure that reinforces the risk-based requirement imposed by
the above capital ratios.
Briefly, the leverage ratio in Basel III is defined as the capital measure
divided by the exposure measure. The minimum leverage ratio is 3%
in Basel III:
$$\text{Leverage ratio} = \frac{\text{Tier 1 capital}}{\text{Total exposure}}.$$
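The capital ratios and the leverage ratio follow directly from a bank's capital components, risk-weighted assets, and total exposure. The minimal Python sketch below uses hypothetical figures; the 3% leverage minimum is the Basel III figure quoted above.

```python
# Illustrative sketch of Basel III capital and leverage ratios; all inputs hypothetical.

def capital_ratios(cet1, additional_tier1, tier2, rwa, total_exposure):
    """Return the CET1, Tier 1, total capital, and leverage ratios."""
    tier1 = cet1 + additional_tier1
    total_capital = tier1 + tier2
    return {
        "CET1 ratio": cet1 / rwa,
        "Tier 1 ratio": tier1 / rwa,
        "Total capital ratio": total_capital / rwa,
        "Leverage ratio": tier1 / total_exposure,   # minimum 3% under Basel III
    }

if __name__ == "__main__":
    ratios = capital_ratios(cet1=70, additional_tier1=15, tier2=25,
                            rwa=1_000, total_exposure=2_400)   # hypothetical, in $bn
    for name, value in ratios.items():
        print(f"{name}: {value:.2%}")
```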
• the higher of (1) the previous day's VaR number and (2) the average of the daily
VaR measures on each of the preceding 60 business days (VaRavg), mul-
tiplied by a multiplication factor; and
• the higher of (1) the latest available stressed VaR number and (2) the
average of the stressed VaR numbers over the preceding 60 business
days, multiplied by a multiplication factor (a minimal calculation sketch follows this list).
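Taken literally, the charge is the sum of a VaR term and a stressed VaR term, each the larger of the previous day's figure and a supervisory multiplier times the 60-day average. A minimal sketch follows; the multipliers and the daily series are hypothetical inputs, and in practice the multipliers are set by the supervisor (at least 3 plus a backtesting "plus factor").

```python
import numpy as np

def ima_capital_charge(var_series, svar_series, m_c=3.0, m_s=3.0):
    """Basel 2.5-style IMA charge:
       c = max(VaR_{t-1}, m_c * avg60(VaR)) + max(sVaR_{t-1}, m_s * avg60(sVaR)).
    var_series / svar_series: daily 10-day 99% VaR and stressed VaR, most recent last."""
    var_term  = max(var_series[-1],  m_c * np.mean(var_series[-60:]))
    svar_term = max(svar_series[-1], m_s * np.mean(svar_series[-60:]))
    return var_term + svar_term

# Hypothetical daily series (in $mm)
rng = np.random.default_rng(0)
var_hist  = rng.uniform(8, 12, size=250)
svar_hist = rng.uniform(20, 30, size=250)
print(f"IMA capital charge: {ima_capital_charge(var_hist, svar_hist):.1f}mm")
```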
capital 2% would not only meet minimum capital requirements, but also
would have a capital conservation buffer of 2.5%.
In precise terms, the capital conservation buffer is calculated as follows.
We first calculate the lowest of the following three quantities: (1) the com-
mon equity Tier 1 capital ratio minus 4.5%; (2) the Tier 1 capital ratio minus
6.0%; and (3) the total capital ratio minus 8.0%. If the resulting number
is greater than 2.5%, the capital conservation buffer is considered achieved.
The capital conservation buffer program is expected to be fully phased in
by 2018; in other words, the capital conservation buffer is 2.5% of
risk-weighted assets.
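The "lowest of the three excesses" test just described can be written down directly. The sketch below uses hypothetical ratios; the 4.5%/6.0%/8.0% minima and the 2.5% buffer are the Basel III figures quoted in this chapter.

```python
def conservation_buffer(cet1_ratio, tier1_ratio, total_ratio, buffer_req=0.025):
    """Return the available conservation buffer and whether the 2.5% target is met."""
    excesses = (cet1_ratio - 0.045,   # over the 4.5% CET1 minimum
                tier1_ratio - 0.060,  # over the 6.0% Tier 1 minimum
                total_ratio - 0.080)  # over the 8.0% total capital minimum
    available = min(excesses)
    return available, available >= buffer_req

buffer, ok = conservation_buffer(cet1_ratio=0.081, tier1_ratio=0.095, total_ratio=0.118)
print(f"Available buffer: {buffer:.2%}, fully met: {ok}")
```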
Third, the capital conservation buffer is time-varying. When the capital
buffers have been drawn down, the bank needs to rebuild them by
reducing discretionary distributions of earnings, and the more the buffers
have been depleted, the greater the rebuilding effort should be. Therefore,
a range of capital buffers is used to impose the capital distribution constraint:
the 2.5% capital conservation buffer constraint restricts discretionary
distributions of earnings when capital levels fall within this range, even
though the operation of the bank is otherwise normal.
I next explain how the capital conservation buffer affects earnings
distributions in Basel III on an annual basis. We notice that the Federal
Reserve imposes similar restrictions on eligible retained income quarterly.
We also notice
G-SIB Surcharge
The capital adequacy measures introduced in Basel III are required for all
internationally active banks to ensure that each bank maintains an appro-
priate level of capital with regard to its own exposures. Among these banks,
(A) Size
The total exposure measure defined above is used to represent
"size". The rationale is that the distress or failure of a large bank is
more likely to damage confidence in the financial system. Size is a
key measure of systemic importance and plays a key role in under-
standing the "too big to fail" issue.
(B) Interconnectedness
Financial distress at one big bank can materially increase the
likelihood of distress at other banks given the network of contrac-
tual obligations in which these firms operate. A bank’s systemic
impact is thus likely to be positively related to its interconnected-
ness with other banks. There are three identified indicators in this
category: (1) intra-financial system assets; (2) intra-financial sys-
tem liabilities; and (3) securities outstanding.
(C) Substitutability/financial institution infrastructure
Three indicators are used to measure substitutability/financial
institution infrastructure: (1) assets under custody; (2) payments
activity; and (3) underwritten transactions in debt and equity
markets. The motivation for this category is that the systemic
impact of a bank's distress or failure is expected to be negatively
related to its degree of substitutability as both a market participant
and a client service provider; in other words, it is expected to be
positively related to the extent to which the bank provides financial
institution infrastructure.
(D) Global cross-jurisdictional activity
Two indicators in this category measure the impor-
tance of the bank's activities outside its home (headquarters) juris-
diction relative to the overall activity of other banks in a sample of
banks (see below): (1) cross-jurisdictional claims; and (2) cross-
jurisdictional liabilities.
(E) Complexity
This last category is expected to be positively related to the
systemic impact of a bank's distress or failure: the more complex
a bank is, the greater the costs and time needed to resolve it.
Three indicators are employed in this category: (1) notional
amount of over-the-counter derivatives; (2) Level 3 assets; and
(3) trading and available-for-sale securities. (A scoring sketch
under assumed weights follows this list.)
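As a rough illustration of how the five categories combine, the sketch below computes an indicator-based score as an equally weighted (20% per category) average of the bank's shares of each indicator in the sample aggregate, expressed in basis points. The equal weighting and all figures are assumptions for illustration; the BCBS assessment methodology document listed in the references gives the definitive treatment.

```python
def gsib_score(bank_indicators, sample_totals, category_weight=0.20):
    """Indicator-based G-SIB score in basis points.
    bank_indicators / sample_totals: dict of category -> list of indicator values."""
    score = 0.0
    for category, values in bank_indicators.items():
        totals = sample_totals[category]
        shares = [v / t for v, t in zip(values, totals)]        # bank share of each indicator
        score += category_weight * (sum(shares) / len(shares))  # equal weight within category
    return score * 10_000  # basis points

# Hypothetical two-category example (size has one indicator, interconnectedness three)
bank   = {"size": [2.1e12], "interconnectedness": [3.0e11, 2.5e11, 4.0e11]}
sample = {"size": [7.0e13], "interconnectedness": [1.0e13, 0.9e13, 1.2e13]}
print(f"Partial score: {gsib_score(bank, sample):.0f} bps")
```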
Sample of Banks
The indicator-based measurement approach uses a large sample of banks as
a proxy for the global banking sector. The criteria for selecting the banks in
this sample are as follows.
• not be subject to set off or netting rights that would undermine their
loss-absorbing capacity in resolution;
• have a minimum remaining contractual maturity of at least one year
or be perpetual;
• not be redeemable by the holder prior to maturity in general; and
• not be funded directly or indirectly by the resolution entity or a
related party of the resolution entity.
To some extent, the TLAC instrument extends the capital concept in the
Basel framework; most capital that counts towards satisfying the minimum
regulatory capital requirement also counts towards satisfying the minimum
TLAC. For the relation between TLAC and the capital requirement, we refer
to the "Total Loss-absorbing Capacity (TLAC) Term Sheet", November
2015.
In summary, Figure 1 illustrates the capitals, capital ratios, and leverage
ratios proposed in Basel III. The capital measure in the leverage ratio is the
same Tier 1 capital. In addition to the capital ratios (CET1, Tier 1, and Tier 1
plus Tier 2), the conservation buffer, the countercyclical buffer, and the G-SIBs
buffer are also introduced.
[Fig. 1 Capitals (common equity Tier 1, Tier 1, Tier 2, total capital), capital buffers (conservation, countercyclical, G-SIBs), RWA components (credit, market, operational), and total exposure in Basel III]
of asset had been assigned a different weight—0, 0.2, 0.5, and 1—with
the safest assets receiving the lowest number and the riskiest assets receiv-
ing the highest number. As explained in this chapter, Basel II
and Basel III have significantly refined this treatment of risk since the
inception of Basel I.
Conclusions
Adequate capital management has become a crucial risk manage-
ment tool for commercial banks, and commercial banks are required
to implement the major components proposed by Basel III and other
regulatory or legal requirements (the Fed, OCC, FDIC, and Dodd-Frank
Act in the US, and the EBA in Europe). Nevertheless, we have good
reason to believe that some further revisions will be made before January
2023—the official deadline of Basel III—to reflect concerns raised by
practitioners and academics.
For example, two fundamental questions, among many others, are still
subject to debate. First, even though we all agree on the characteriza-
tion and significance of capital, how much capital is essentially necessary
and what is its optimal level? Some argue that common equity capital
ratios should be very high, say 30–50%, while others suggest that a high
capital ratio hurts the social optimum, so that the capital ratio should be
reasonably low. Second, should some total loss-absorbing capac-
ity instruments be treated as regulatory capital, and, in particular,
should contingent capital be treated as Tier 1 capital in comput-
ing capital ratios? It is also important to design loss-absorbing instru-
ments that are consistent with the adequate capital requirement.
Other chapters in this book will address some of the most important
practices in current regulatory risk management systems. But there are still
many unresolved issues and those discussions are beyond the scope of this
chapter and this book.
Notes
1. Between Basel II and Basel III, a so-called Basel 2.5 also modifies the
existing Basel II. For Basel 2.5, we refer to BCBS, "Enhancements to the
Basel II framework", July 2009; and BCBS, "Revisions to the Basel
II market risk framework", July 2009.
2. Economic capital and other risks such as liquidity risk and legal risk are
addressed differently. For the second Pillar and the third Pillar, see BCBS,
"International Convergence of Capital Measurement and Capital
Standards: A Revised Framework—Comprehensive Version", Part 3 and
Part 4, June 2006.
3. For the leverage ratio in Basel III, see BCBS, "Basel III: A global regula-
tory framework for more resilient banks and banking systems", June 2011;
BCBS, "Basel III leverage ratio framework and disclosure requirements",
January 2014.
4. For liquidity risk in the Basel III framework, see BCBS, "Basel III: The
Liquidity Coverage Ratio and Liquidity Risk Monitoring Tools", January
2013; BCBS, "Basel III: The Net Stable Funding Ratio", October 2014.
5. DVP stands for delivery-versus-payment. Non-DVP trading is defined as
securities trading where a client’s custodian will have to release payment or
deliver securities on behalf of the client before there is certainty that it will
receive the counter-value in cash or securities, thus incurring settlement
risk. By the same token, PVP stands for payment-versus-payment, and
non-PVP represents non-payment-versus-payment.
6. CPSS stands for the Committee on Payment and Settlement Systems and
IOSCO stands for the International Organization of Securities
Commissions.
7. See BCBS, “Guidance for national authorities operating the countercycli-
cal capital buffer”, December 2010.
8. US regulators strongly favor common equity Tier 1 only, but some
European jurisdictions allow high-trigger contingent capital to be used in
the capital requirement. For instance, Swiss regulators have already called
for the country's two largest lenders, Credit Suisse Group AG and UBS
AG, to issue contingent capital in addition to meeting a three-percentage-
point surcharge.
9. See FSB, “Reducing the moral hazard posed by systemically important
financial institutions”, October 2010; FSB, “Progress and Next Steps
Towards Ending ‘Too-Big-To-Fail’ (TBTF)”, September 2013; FSB,
“Adequacy of loss-absorbing capacity of global systemically important
banks in resolution", November 2014; and FSB, "Principles on Loss-
absorbing and Recapitalisation Capacity of G-SIBs in Resolution—Total
Loss-absorbing Capacity (TLAC) Term Sheet”, November 2015.
10. Its long title is "An Act to promote the financial stability of the United
States by improving accountability and transparency in the financial sys-
tem, to end 'too big to fail', to protect the American taxpayer by ending
References
1. BCBS, “International Convergence of Capital Measurement and Capital
Standards”, July 1988. Basel I (the Basel Capital Accord).
2. BCBS, “Basel II: International Convergence of Capital Measurement and
Capital Standards: a Revised Framework”, June 2004.
3. BCBS, “Basel II: International Convergence of Capital Measurement and
Capital Standards: a Revised Framework – Comprehensive Version”, June
2006.
4. BCBS, "Enhancements to the Basel II Framework", July 2009.
5. BCBS, "Revisions to the Basel II Market Risk Framework", July 2009.
6. BCBS, “Proposal to Ensure the Loss Absorbency of Regulatory Capital at the
Point of Non-Viability”, August 2010.
7. BCBS, “Guidance for National Authorities Operating the Countercyclical
Capital Buffer”, December 2010.
8. BCBS, "Basel III: A Global Regulatory Framework for More Resilient Banks
and Banking Systems", December 2010 (rev. June 2011).
9. BCBS, “Basel III: International Framework for Liquidity Risk Measurement,
Standards and Monitoring”, December 2010.
10. BCBS, “Basel Committee on Banking Supervision (BCBS) Charter”, 2013.
11. BCBS, “Basel III: The Liquidity Coverage Ratio and Liquidity Risk Monitoring
Tools”, January 2013.
12. BCBS, "Global Systemically Important Banks: Updated Assessment
Methodology and the Higher Loss Absorbency Requirement", July 2013.
13. BCBS, “Basel III Leverage Ratio Framework and Disclosure Requirement”,
January 2014.
14. BCBS, “Basel III: The Net Stable Funding Ratio”, October 2014.
15. BCBS, "A Brief History of the Basel Committee", October 2015.
16. FSB, “Reducing the Moral Hazard Posed by Systemically Important Financial
Institutions”, October 2010.
17. FSB, “Progress and Next Steps Towards Ending “Too-Big-To-Fail” (TBTF)”,
September 2013.
18. FSB, “Adequacy of Loss-Absorbing Capacity of Global Systemically Important
Banks in Resolution”, November 2014.
19. FSB, “Principles on Loss-Absorbing and Recapitalisation Capacity of G-SIBs
in Resolution – Total Loss-Absorbing Capacity (TLAC) Term Sheet”,
November 2015.
20. LeBor, Adam, "Tower of Basel: The Shadowy History of the Secret Bank that Runs the
World", PublicAffairs, New York, 2013.
Market Risk Modeling Framework
Under Basel
Han Zhang
INTRODUCTION
Market risk is the risk that the value of the bank's trading portfolio may
decrease due to moves in market factors such as equity prices, interest
rates, credit spreads, foreign-exchange rates, commodity prices, and other
indicators whose values are set in a public market. The risk of losses in
both on- and off-balance-sheet positions comes from movements of market
risk factors in financial instruments. From the regulatory perspective,
market risk should be managed through regulatory capital to reduce both the
market risk of each bank and bankers' risk-taking incentives, and thus
stabilize the banking sector as a whole. In addition, individual banks also
implement economic capital for their portfolios.
Since the financial crisis of 2007–2008, market risk management has
become more important than ever. Many advanced risk measures and
capital charges for market risk have been proposed in a comprehensive capital
framework.
The views expressed in this chapter represent only the personal opinions of the author and
not those of Wells Fargo & Co.
H. Zhang (*)
Wells Fargo & Co., San Francisco, CA, USA
Introduction to Basel
The Basel Committee on Banking Supervision (BCBS) is a committee
of banking supervisory authorities that was established by the central
bank governors of the Group of Ten countries in 1974. It is the pri-
mary global standard-setter for the prudential regulation of banks and
provides a forum for cooperation on banking supervisory matters. Its
mandate is to strengthen the regulation, supervision, and practices of
banks worldwide with the purpose of enhancing financial stability. The
committee is called Basel since the BCBS maintains its secretariat at the
Bank for International Settlements in Basel, Switzerland and the com-
mittee normally meets there.
BCBS issues Basel Accords (recommendations on banking regula-
tions). Three Basel Accords are in existence today – Basel I (1988), Basel
II (2007), and Basel III (2010–11).
The 1988 Accord was primarily focused on credit risk and appropriate
risk-weighting of assets. Even though market risk was introduced there,
the 1988 Accord did not require banks to set aside any capital to cover
potential losses from market risk. The BCBS was well aware of this issue
and saw the 1988 Accord as the first step to establishing a more compre-
hensive regulatory capital framework.
1. The accord suggested a general market risk and specific risk frame-
work. The general market risk refers to changes in market values due
to general market movements. Specific risk refers to changes in the
value of an individual asset due to factors related to the issuer of the
security, which is not reflected in general market movements.
2. Market risk can be calculated in two different ways: either with the
standardized Basel model or with internal value-at-risk (VaR) mod-
els of the banks. These internal models can only be used by the larg-
est banks that satisfy qualitative and quantitative standards imposed
by the Basel agreement.
1. Banks using proprietary models must compute VaR daily, using a 99th-
percentile, one-tailed confidence interval with a time horizon of ten
trading days and a historical observation period of at least one year.
2. The capital charge for a bank that uses a proprietary model is
the higher of the previous day's VaR and a multiplication factor (an
absolute minimum of 3, with a 'plus factor' from 0 to 1) times
the average of the daily VaR over the preceding 60 business days.
3. 'Backtesting' (ex-post comparison between model results
and actual performance) is used to arrive at the 'plus factor' that is added to
the multiplication factor of three.
With the issuance of this amendment, banks first won permission to use
internal models to meet the BCBS capital requirement. Since 1998, VaR
has become established as the industry and regulatory standard for measuring
market risk, although the internal model approach in general leads to a lower
capital charge compared with the prior method of applying fixed risk weights
to different asset classes. BCBS accepted this approach with the view that the
models can reflect the benefits of risk diversification strategies and provide
incentives for firms to develop and implement sound risk management tools.
1996 also marked the time at which regulators, banks' management,
and risk management practice began to be more influenced by quantita-
tive risk modeling, and a new profession of risk quant was in the making.
Basel II
Basel II was revised between June 2004 and June 2006 (see [3]–[5]),
with the following major updates:
a. credit risk
b. market risk
c. operational risk.
Basel II of 2004 carried over the capital charge for market risk from the
1996 amendment of Basel I essentially unchanged.
In the later-adopted US rules, banks were asked to implement Basel 2.5
by January 2013; for banks that had previously approved specific risk mod-
els, the approval was extended by one year and the specific risk models were
reviewed by US regulators in 2013.
Basel 2.5 is best seen as a tactical solution to boost regulators' confidence
in the capital framework. Among all the various changes, here are a couple
of noticeable changes and additions:
• Risk factors
– Factors deemed relevant in the pricing function should be included in
the VaR model (otherwise a justification must be provided to the supervisor)
– Non-linearities for options and other relevant products (e.g.
mortgage-backed securities, tranched exposures, or nth-to-default
credit derivatives)
– Correlation risk
– Basis risk (e.g. between credit default swaps and bonds)
– Proxies used should show a good track record for the actual posi-
tion held (e.g. an equity index as a proxy for a position in an individual stock).
Stressed VaR
Stressed VaR is a new measure added in the Basel 2.5 framework; it is
designed to capitalize the potential risk under stressed condi-
tions. Here are several features of stressed VaR:
• The current VaR methodology should be used for the stressed VaR cal-
culation. The intention is to replicate the VaR calculation that would
be generated on the bank's current portfolio if the relevant market
risk drivers were experiencing a period of stress:
– Applied to all positions covered under VaR, with a consistent meth-
odology of 99% and a ten-day horizon,
– The back history is calibrated to a continuous 12-month historical
period of significant financial stress,
– Since stressed VaR is product or line of business (LOB) specific, dif-
ferent roll-ups may require different stress periods and calibrations.
• The historical market data availability can differ between these two
periods, and the stress period will in general have less data avail-
able, owing to less liquidity in market conditions or to new prod-
ucts invented after the stress period. Even where data is available,
retrieving and preparing data going back to, for example, 2007 is a much
bigger task compared with dealing with data for just the last two or
four years,
• The historical market data shifts in the stressed period are in general
larger than the shifts from the regular retrospective period, which
can pose a challenge to the bank's risk P&L estimation
method, especially if the financial institution is still using a delta-
gamma approximation.
Overall, stressed VaR has started to pose more and more
challenges for data modeling, given that the stress period resulting from
2008 is aging every day.
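One common way to operationalize "a continuous 12-month period of significant financial stress" is to search rolling one-year windows of history for the window that maximizes the VaR of the current portfolio. The sketch below assumes a vector of daily P&L for the current portfolio, revalued under each historical date's market shifts, is already available; the function and data are illustrative and not a description of any particular bank's process.

```python
import numpy as np

def select_stress_window(pnl_by_date, window=250, quantile=0.99):
    """Return the start index and VaR of the rolling window whose historical-simulation
    VaR of the current portfolio is largest (a common stressed-period calibration)."""
    worst_start, worst_var = 0, -np.inf
    for start in range(len(pnl_by_date) - window + 1):
        losses = -pnl_by_date[start:start + window]
        var = np.quantile(losses, quantile)
        if var > worst_var:
            worst_start, worst_var = start, var
    return worst_start, worst_var

rng = np.random.default_rng(1)
pnl = rng.normal(0.0, 1.0, size=2500)   # hypothetical daily P&L of the current portfolio
pnl[500:750] *= 4.0                     # embed a stressed episode for illustration
start, svar = select_stress_window(pnl)
print(f"Stress window starts at day {start}, 99% VaR = {svar:.2f}")
```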
• The incremental risk charge (IRC) is used to capture credit default risk, which was
previously covered by specific risk (SR) models
• In the IRC model, the default risk is calculated over a one-year time
horizon, changed from the ten-day horizon in the SR model under the
old rules. The quantile is also pushed further into the tail, from 99%
to 99.9%, with a constant-level-of-risk assumption; these changes dra-
matically increased the capital charges for credit default risk.
• The newly developed comprehensive risk measure (CRM) model captures all
risks for correlation trading (synthetic CDOs, FTDs, and NTDs); the standardized
model sets a floor equivalent to 8% of the standard charge, and the CRM
itself has proved to be very difficult to model.
tool to manage these model limitations and flaws more efficiently, and
also to increase the transparency of these deficits for model stakeholders,
including regulators. In the context of VaR, although the entire gen-
eral VaR framework may be approved, the quality of risk modeling varies
among different products depending on the following:
• the front office pricing model on which the VaR is built has varying
quality;
• the historical market data availability and quality for different prod-
ucts, or even trades, are different;
• the risk pricing methodology.
As part of the Basel 2.5 market risk MRAs of most banks, the follow-
ing items are very often mentioned:
Value at Risk
Value-at-risk is a measure of the potential loss level for a given investment
portfolio, estimated with a certain confidence level over a certain period of
time; it basically answers the question of how bad things can get. If a
bank's ten-day 99% VaR is $3 million, there is a 1% chance that the
bank's losses over ten days could exceed $3 million.
For the development of VaR methodology, three approaches are worth noting:
• the variance–covariance approach,
• historical simulation, which is now the most widely used method, and
• Monte Carlo simulation (a minimal historical-simulation sketch follows this list).
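The sketch below estimates a one-day 99% VaR by historical simulation and also computes the expected shortfall discussed in the next subsection by averaging losses beyond the VaR cutoff. The synthetic P&L series and the √10 scaling to a ten-day horizon are simplifying assumptions, not a prescribed methodology.

```python
import numpy as np

def hist_sim_var_es(pnl, confidence=0.99, horizon_days=10):
    """One-day historical-simulation VaR and ES, scaled to the horizon by sqrt(time)
    (a common simplification; full revaluation over overlapping windows is the alternative)."""
    losses = -np.asarray(pnl)
    var_1d = np.quantile(losses, confidence)
    es_1d = losses[losses >= var_1d].mean()        # average loss beyond the VaR cutoff
    scale = np.sqrt(horizon_days)
    return var_1d * scale, es_1d * scale

rng = np.random.default_rng(7)
daily_pnl = rng.standard_t(df=4, size=500) * 0.4   # hypothetical fat-tailed daily P&L ($mm)
var10, es10 = hist_sim_var_es(daily_pnl)
print(f"10-day 99% VaR = {var10:.2f}mm, ES = {es10:.2f}mm")
```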
Expected Shortfall
Just two years after VaR was adopted by Basel for the capital calculation,
academic researchers in 1998 began to criticize VaR as having fundamen-
tal structural flaws and argued that it should be replaced by coherent risk mea-
sures. In 2001, expected shortfall (ES) began to be adopted worldwide for use
side-by-side with VaR. Over the next fifteen years, there were many academic
debates about whether VaR should be replaced by expected shortfall.
Expected shortfall is also called conditional VaR or expected tail loss; it is a coher-
ent risk measure. ES estimates the expected return on a portfolio in the
loss tail, and it is thus more sensitive to the shape of the loss distribution in
the tail. Since ES provides the average loss in a worst-case scenario, it can be
This effort will also promote the alignment between the risk views
and the trading views on risk, and give regulators more insight into the
trading business while they review the risk models; at the same time,
it will also promote higher standards at the front office in their pricing and
data modeling practices.
The next thing worth mentioning in the new IMA is the distinction between
modellable and non-modellable risk factors; it formalizes an approach for cases in which
a risk factor cannot be properly modelled in the IMA. As a general practice in the current
RNiV process, a stress scenario-based approach is widely used for RNiV
quantitative estimation; the new rule requires that "Each non-modellable risk
factor is to be capitalized using a stress scenario that is calibrated to be at
least as prudent as the expected shortfall calibration used for modelled
risks." This formally moves most items in the RNiV inventory into the
modeling framework with a workable guideline on the practice.
The new framework also reduces the benefit of diversification and
hedging effects in the IMA approach across asset classes, but the impact
still needs to be assessed in the future.
CONCLUSION
As can be seen from recent history, regulations and capital calculation
methodologies have evolved with the financial crisis, and, as a result, the indus-
try will also be reshaped by the new regulations. After the publication of
the new regulations in January 2016, the financial industry and regulators
still need time to implement the new Minimum Capital Requirements for
Market Risk; we believe the new rules in general provide better regu-
lation compared with Basel 2.5, and they have addressed several structural
issues observed in the last couple of years. As for the impact of the new
rules on the financial industry, there are theories, but the real impact will
only be seen in the coming years.
NOTES
1. BCBS, “International Convergence of Capital Measurement and Capital
Standards”, July 1988. Basel I (the Basel Capital Accord).
2. BCBS, “Amendment to the Capital Accord to Incorporate Market Risks”,
January 1996 (Amendment). See http://www.bis.org/publ/bcbs24.htm.
3. BCBS, “Basel II: International Convergence of Capital Measurement and
Capital Standards: a Revised Framework”, June 2004; BCBS, “Amendment
to the Capital Accord to Incorporate Market Risks”, Updated November
2005; BCBS, “Basel II: International Convergence of Capital Measurement
and Capital Standards: a Revised Framework – Comprehensive Version”,
June 2006.
4. BCBS, “Revisions to the Basel II market risk framework – final version”,
July 2009.
5. Markowitz, Harry, M., Portfolio Selection, Journal of Finance, 7 (1), 77–91,
1952.
6. BCBS, “Minimum Capital Requirements for Market Risk” (Standards),
January 2016.
7. T. Gneiting, Making and evaluating point forecasts, Journal of the American
Statistical Association, 106(494):746–762, 2011.
8. C. Acerbi and B. Szekely, Backtesting Expected Shortfall, December 2014.
REFERENCES
1. BCBS, “International Convergence of Capital Measurement and Capital
Standards”, July 1988. Basel I (the Basel Capital Accord).
2. BCBS, “Amendment to the Capital Accord to Incorporate Market Risks”,
January 1996 (Amendment).
3. BCBS, “Basel II: International Convergence of Capital Measurement and
Capital Standards: A Revised Framework”, June 2004.
4. BCBS, “Amendment to the Capital Accord to Incorporate Market Risks”,
Updated November 2005.
5. BCBS, “Basel II: International Convergence of Capital Measurement and
Capital Standards: A Revised Framework – Comprehensive Version”, June
2006.
6. BCBS, “Revisions to the Basel II market risk framework - final version”, July
2009.
7. Markowitz, Harry, M., Portfolio Selection, Journal of Finance, 7 (1), 77-91,
1952.
8. BCBS, “Minimum Capital Requirements for Market Risk” (Standards), January
2016.
52 H. ZHANG
IMM Approach for Managing Counterparty Credit Risk
Demin Zhuang
Introduction
The counterparty risk of a bank is the risk of economic loss, due to the
default of its counterparty in either an over-the-counter (OTC) derivative
trade or a securities financing transaction, before the final settlement of
the transaction's cash flows. A striking example occurred in 2008, when
several credit events happened within a one-month period: at Fannie
Mae, Freddie Mac, Lehman Brothers, Washington Mutual, Landsbanki,
Glitnir, and Kaupthing. It is apparent that counterparty risk was one of
the major drivers of the financial crisis we experienced in 2007–2008.
Counterparty credit risk (CCR), which causes economic loss due to coun-
terparty default and credit rating downgrades, is an essential part of finan-
cial risk management in most financial institutions. It is similar to other
forms of credit risk in many respects; however, a distinct feature of CCR,
the uncertainty of a bank's exposure to its counterparties, sets it apart
significantly from other forms of credit risk.
The views expressed in this chapter represent only the personal opinions of the author and
not those of his current and previous employers.
D. Zhuang (*)
Citigroup, 1 Court Square, 35th Floor, Long Island City, NY 11120, USA
e-mail: [email protected]
When a counterparty defaults, the bank must close out its positions
with the defaulting counterparty. To determine the loss arising from
the counterparty's default, it is often assumed that the bank enters into
a similar contract with another counterparty in order to maintain its market
position. Since the bank's market position is unchanged after replacing
the contract, the loss is determined by the contract's replace-
ment cost at the time of default. If the contract value is negative for the
bank at the time of default, the bank closes out the position by paying the
defaulting counterparty the market value of the contract, while entering
into a similar contract with another counterparty and receiving the market
value of the contract—thus it has a net loss of zero. On the other hand, if
the contract value is positive at the time of the counterparty default, then
the bank closes out the position with zero recovery, enters into a similar
contract with another counterparty, and pays the market value of the contract.
In this case, the net loss of the bank is non-zero and is in fact the market
value of the contract.
Notice that plausible replacement of the contract presumes adequate
liquidity for the contract in the market. Loans, however, rarely have a liquid
secondary market, and it is difficult to determine the replacement cost of
such a contract. For this reason, the counterparty risk of these types of
transactions is not addressed in this chapter.
In order to manage counterparty credit risk, Basel II and III set out
specific requirements for risk capital calculations related to OTC derivatives and secu-
rities financing transactions (SFTs) such as asset loans, repo, and reverse
repo agreements, with exposures implied by potential counterparty default
over a one-year horizon. The standardized approach (SA) and the internal
model method (IMM) are two different approaches for measuring counterparty
credit risk, subject to approval from regulators. This chapter focuses
on the IMM approach.
• Interest rates;
• Credit spreads;
• FX rates;
• Stock and index spot prices;
• Commodity future prices;
• Volatility surfaces;
• Correlation skews for CDO tranche trades.
$$\frac{dx_i}{x_i} = \mu_i \, dt + \sigma_i \, dW_i(t),$$
where the drift μi, the volatility σi, and the correlations ρij are assumed to be
constant. Volatilities and correlation parameters should be periodically updated
from historical data.
The simulation dynamics can differ for different market factors; in fact,
different financial institutions do use dynamics other than lognormal for
market factor simulations.1 The key point is that the IMM model developers
should be able to convince regulators of the conceptual soundness, robust-
ness, and stability of the model framework for the chosen simulation dynamics.
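The following sketch illustrates one possible implementation of the lognormal dynamics above: correlated Brownian increments are generated with a Cholesky factor and each factor is evolved with the exact GBM step. All parameters, the time grid, and the number of paths are hypothetical.

```python
import numpy as np

def simulate_factors(x0, mu, sigma, corr, times, n_paths, seed=0):
    """Simulate correlated lognormal (GBM) market factors on the given time grid.
    Returns an array of shape (n_paths, len(times), n_factors)."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(corr)
    x0, mu, sigma = map(np.asarray, (x0, mu, sigma))
    paths = np.empty((n_paths, len(times), len(x0)))
    x = np.tile(x0, (n_paths, 1))
    t_prev = 0.0
    for k, t in enumerate(times):
        dt = t - t_prev
        z = rng.standard_normal((n_paths, len(x0))) @ chol.T   # correlated shocks
        x = x * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        paths[:, k, :] = x
        t_prev = t
    return paths

# Hypothetical setup: two factors, quarterly grid out to five years
times = np.arange(0.25, 5.25, 0.25)
paths = simulate_factors(x0=[100.0, 0.02], mu=[0.0, 0.0], sigma=[0.2, 0.4],
                         corr=np.array([[1.0, 0.3], [0.3, 1.0]]),
                         times=times, n_paths=5000)
print(paths.shape)   # (5000, 20, 2)
```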
Implementation of the Components
The implementation of the three components described above is pre-
sented as follows.
Let P be the portfolio of trades against a particular counterparty, and
assume that P consists of N related trades, denoted by
A1, A2, …, AN. For simplicity, I do not take margining into consid-
eration for now. The valuations of these instruments on a discrete set of
future simulation dates are performed based on the simulated market factors
at those future time points. Let t0 be the valuation date (current date) and
t1, t2, …, tn be the set of future dates at which we simulate market
risk factors, where tn is the longest maturity of the trades. More specifi-
cally, let Mj be the time period (in units of years) between the valuation
date and the maturity of trade Aj for j = 1, 2, …, N.
Let M be the longest time period, defined as
$$M = \max\{M_j : j = 1, 2, \ldots, N\}.$$
If the simulation is carried out on a set of equal-length time grids, then
we can easily determine the number of steps required for the simulation
process as follows:
Let l be the length of the unit time interval of the simulation. Then the
required number of time steps for the simulation process, denoted by n, is
given by
$$n = \left\lceil \frac{M}{l} \right\rceil + 1.$$
The length of the time step need not be constant throughout the simu-
lation process. It makes sense to use more granular time steps at the begin-
ning of the simulation, reflecting our greater confidence in the near-term
simulated MTM values. For example, we can have a predefined time step granularity
as follows:
$$P^{(m)}(t_k) = \sum_{j=1}^{N} \tilde{A}_j^{(m)}(t_k),$$
where $\tilde{A}_j^{(m)}(t_k)$ is defined as
$$\tilde{A}_j^{(m)}(t_k) =
\begin{cases}
A_j^{(m)}(t_k), & \text{if nettable,} \\
\left(A_j^{(m)}(t_k)\right)^{+}, & \text{otherwise.}
\end{cases}$$
Step 1 Compute the portfolio's value at time t0. This value should
match the market price of the portfolio; this is checked by
"tolerance tests".
Step 2 Set m = 1.
Step 3 Call the market factor simulation process to generate values of
the relevant market factors over the time interval [t0, t1].
Step 4 Compute the value of the portfolio at time t1.
Step 5 Repeat Steps 3 and 4 for the subsequent time steps t2, …, tn to
compute P^(1)(t2), P^(1)(t3), P^(1)(t4), …, P^(1)(tn).
Step 6 Set m = m + 1 and repeat the scenario simulation process (Steps
3 to 5) S times in total to obtain the following sequences of positive exposures:
$$\left\{P^{(m)}(t_0)^{+}\right\}_{m=1}^{S},\quad
\left\{P^{(m)}(t_1)^{+}\right\}_{m=1}^{S},\quad
\left\{P^{(m)}(t_2)^{+}\right\}_{m=1}^{S},\quad
\ldots,\quad
\left\{P^{(m)}(t_n)^{+}\right\}_{m=1}^{S}.$$
The α-th percentiles of the above sequences, denoted by PFEα(tk), k = 0,
1, 2, …, n, form the PFE profile of the portfolio at the confidence level α.
The peak PFE at the confidence level α, denoted by PFEα, is given by the
following formula:
$$\text{PFE}_\alpha = \max_{0 \le k \le n} \text{PFE}_\alpha(t_k).$$
The expected exposure at time tk is the average of the positive exposures
across scenarios,
$$\text{EE}(t_k) = \frac{1}{S}\sum_{m=1}^{S} P^{(m)}(t_k)^{+},$$
and the effective EE is defined recursively as
$$\text{effective EE}_{t_k} = \max\left\{\text{effective EE}_{t_{k-1}},\ \text{EE}_{t_k}\right\}.$$
The effective EPE is defined as the average effective EE during the
first year of future exposure. If all contracts in the netted portfolio mature
in less than one year, then effective EPE is defined as the average of
effective EE until all contracts in the portfolio mature.
We compute effective EPE as a weighted average of effective EE
as follows:
$$\text{effective EPE} = \sum_{k=1}^{\min\{1\text{yr},\ \text{maturity}\}} \text{effective EE}_{t_k} \times \Delta_k,$$
where Δk = tk − tk−1. Note that the weights Δk allow for the case when
future exposure is calculated at dates that are not equally spaced over time.
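Given a matrix of netted positive exposures P^(m)(t_k)^+ across scenarios and dates, the profiles above reduce to simple array operations. The sketch below computes EE, the PFE profile at confidence level α, the running-maximum effective EE, and the Δk-weighted effective EPE over the first year; the exposure matrix here is synthetic and purely illustrative.

```python
import numpy as np

def exposure_profiles(exposures, times, alpha=0.95):
    """exposures: array (S scenarios, K dates) of netted positive exposures P^(m)(t_k)^+.
    times: array of K simulation dates in years."""
    ee  = exposures.mean(axis=0)                       # EE(t_k)
    pfe = np.quantile(exposures, alpha, axis=0)        # PFE_alpha(t_k)
    eff_ee = np.maximum.accumulate(ee)                 # effective EE is non-decreasing
    # Effective EPE: Delta_k-weighted average of effective EE over min(1 year, maturity)
    dt = np.diff(np.concatenate(([0.0], times)))
    mask = times <= min(1.0, times[-1])
    eff_epe = np.sum(eff_ee[mask] * dt[mask]) / np.sum(dt[mask])
    return ee, pfe, eff_ee, eff_epe, pfe.max()         # last item: peak PFE

rng = np.random.default_rng(3)
times = np.arange(0.25, 5.25, 0.25)
exposures = np.maximum(rng.normal(0.5, 1.0, size=(5000, len(times))), 0.0)  # synthetic
ee, pfe, eff_ee, eff_epe, peak_pfe = exposure_profiles(exposures, times)
print(f"effective EPE = {eff_epe:.3f}, peak PFE = {peak_pfe:.3f}")
```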
The Basel II framework allows the use of a shortcut method, with a
simple and conservative approximation to the effective EPE instead of
a lifetime simulation method, for netting sets with an accom-
panying margin agreement. The effective EPE for a margined counter-
party can be calculated as the lesser of (1) the threshold plus an add-on that
reflects the potential increase in exposure over the margin period of risk,
and (2) the effective EPE without a margin. Under the internal
model method, the Basel II requirements state that a measure more
conservative than effective EPE may be used in place of alpha times effec-
tive EPE. There is a great deal more that can be said specifically about
margined counterparties; the reader is referred to other chapters of
this book for details.
Backtesting Methodology
To use the IMM for CCR, a financial institution is required to provide evi-
dence to supervisory authorities that the IMM framework is conceptually
sound and is implemented with integrity. Regulators specify a number
of qualitative criteria that the financial institution must meet; one of the
qualitative criteria for the CCR exposure model framework is backtesting.
The Basel regulatory capital framework requires that IMM banks back-
test their expected positive exposure (EPE) models, where backtesting
is defined as the quantitative comparison of the IMM model's forecasts
A Case Study
In this section, a simple interest rate swap is used to illustrate the IMM meth-
odologies discussed above. We consider a simple 5-year interest rate swap in a
single currency, so the only market factor to be simulated is the interest
rate. We simulate interest rates using common simulation models of
the term structure of interest rates. Fig. 5 illustrates the simulated interest
rates at future times.
With the simulated future market factor values (in this case, interest rates
only), we can evaluate the values of the swap at the future time grids, as shown
in Fig. 6. These are the mark-to-market values of the swap using the simulated
market factor values.
[Fig. 5 Simulated market factor values (interest rates) versus time in years]
[Fig. 6 Simulated swap NPV versus time in years]
[Figure: simulated exposure profile versus time in years]
Discussion
To successfully obtain regulatory approval for an IMM framework for
counterparty credit risk management, a financial institution must dem-
onstrate the conceptual soundness of its modeling framework, the accu-
racy of the model calculations, and the stability and robustness of
the model performance, especially under stress scenarios. Abundant
[Figure: expected exposure (EE) profile versus time in years]
[Figure: exposure profiles versus time in years]
Conclusions
The desire to implement a Basel III-compliant IMM framework
is strong among financial institutions of sufficiently large size. In
this chapter, we described an IMM approach for managing counter-
party credit risk. The methodologies outlined here illustrate a plausi-
ble approach and have actually been implemented successfully at some
major US commercial banks. It is our hope that the presentation of the
framework and the illustrated example provide a bird's-eye view of the
approach and serve readers as a reference in their quest to imple-
ment IMM models.
Note
1. See, for example, Jon Gregory, Counterparty Credit Risk and Credit Value
Adjustment, Wiley Finance, October, 2012.
References
1. Basel Committee, “The non-internal model method for capitalising counter-
party credit risk exposures”, Consultative Document, Basel Committee on
Banking Supervision, September 27, 2013.
2. Basel Committee, “Sound practices for backtesting counterparty credit risk
models”, Basel Committee on Banking Supervision, December, 2010.
3. Basel Committee, “Basel III: A global regulatory framework for more resilient
banks and banking systems”, Basel Committee on Banking Supervision, June,
2011.
4. Basel Committee, “The standardised approach for measuring counterparty
credit risk exposures”, Basel Committee on Banking Supervision, April, 2014.
5. Basel Committee and IOSCO, “Margin requirements for non-centrally cleared
derivatives”, Basel Committee on Banking Supervision and Board of the
International Organization of Securities Commissions, September, 2013.
XVA in the Wake of the Financial Crisis
John Carpenter
Introduction
Since the 2008 financial crisis, it has become clear that a number of
different constraints in practice increase the cost of over-the-counter
(OTC) derivative market making in ways that are not captured in tradi-
tional pricing models. Some of these economics are due to new market
forces (e.g. the cost of long-term debt issued by banks is now materially
higher than LIBOR), some due to regulatory changes (e.g. Basel capital
requirements, mandatory clearing, bilateral initial margin), and some
from accounting changes (e.g. fair value option on selected liabilities).
Attempts to price and risk manage these additional costs has led to a
series of valuation adjustments for counterparty credit (CVA), one’s own
credit (DVA), funding costs (FVA), regulatory capital (KVA), and initial
margin (MVA).
This chapter will introduce each of these adjustments in turn from
a practitioner’s standpoint with a focus on structural origins of the
adjustment as well as practical considerations in implementation and risk
The opinions expressed in this article are the author’s own and do not necessarily
reflect those of Bank of America.
J. Carpenter (*)
Bank of America, 100 North Tryon Street, Charlotte, NC 28255, USA
The rationale for this charge was the experience through the crisis where
many financial institutions reported large quarterly earnings losses due to
increases in the fair value of their CVA (primarily from widening of their
counterparties' implied default rates and increased implied volatilities), not from actual counterparty defaults. Now that the standard is established, it is
now necessary to think about pricing and hedging two distinct CVAs: the
cost of hedging the expected actual default risk and the cost of holding
capital against the VaR of the CVA charge itself.
It might seem natural to simply make SCSA ubiquitous and eliminate CVA
altogether, but a CSA in which a bank’s client has to post is not a natural
thing for an end user of a derivative to want. For example, if a treasurer
from a non-financial corporation takes a loan at a floating spread to finance
a project with an expected fixed rate of return, they might likely swap the
loan to fixed. They now have a desired fixed rate liability financing a fixed
rate of return project with the initial proceeds from the loan deployed into
capex. If the MTM of the derivative changes in the interim, it creates a
cash management complication and operational burden for the end user as
they have to fund their collateral. They may, however, want to limit their
exposure to bank credit and have a unilateral CSA where they are solely
a secured party and never a pledgor. Ultimately, a balance must be struck
that suits the needs of both the end user and the market maker’s credit
risk appetite. CSAs are renegotiated infrequently due to the legal burden
from both counterparties in executing one and once a trade is done, the
terms are bound by the CSA at the time of execution and not affected by
any subsequent changes to the CSA terms (unless explicitly migrated). As
a result, even if there is a change in philosophy on what terms constitute a
desirable CSA, there is “legacy issue” both from longstanding CSAs under
which new trades will be booked, and from longer dated derivatives still
on the books which were initiated under older (or without) CSAs.
The terms of CSAs facing customers vary broadly in practice and may
contain any combination of the following items, all of which will have an
economic value which could impact various XVA adjustments.
rights, it is “dead money” which may offset credit exposure but has
no funding benefit.
• Initial Amount (IA) —aka "initial margin": excess collateral above the MTM to create an additional buffer.
• Downgrade Effects (Material Change) —Stricter terms may be
enforced (e.g. requiring IA) if a ratings downgrade past a certain
level occurs (e.g. investment grade).
• Mutual Termination Break Clauses—Provision whereby either
counterparty can unwind the trade at current MTM at a set period
in the future.
• Acceptable Collateral and Cheapest to Deliver (CTD) —CSAs may vary substantially in the types of collateral that are acceptable, ranging from USD-only cash to currency substitution to government securities to non-investment grade securities. High quality
liquid asset (HQLA) status is an important consideration.
• Cross Default/Acceleration—Does a default on a derivative pay-
ment trigger an event of default on the hedge instrument (especially
the reference obligation on the credit default swap (CDS))?
• Third Party Custodian—Must the secured party hold the collateral
with a third party?
CVA as Value
If the terms of a CSA allow an open exposure to exist, then a CVA is the
present value of the expected loss on the MTM of a derivative prior to matu-
rity due to counterparty default. If hedge instruments (i.e. CDS on the
counterparty) exist and can be traded, it is the same as the present value of
the expected negative carry associated with hedging the default risk over
the life of the trade. Unlike a pure asset such as a customer loan which
could be valued or discounted off of a “shifted” curve, a derivative has the
potential to be either an asset or liability over the course of its life.
Broadly speaking, there are three approaches to pricing CVA:
(ii) Unilateral EPE methods in which the risky counterparty has an option
to default on a positive MTM conditional on an event of default.
(iii) Bilateral EPE approaches that contemplate a default of either
counterparty in the same framework. These are much more
complicated theoretically because they have dependence on the
order of default and have more complicated hedge implications as
it will have deltas to one’s own default probability (and differ from
regulatory CVA approaches).
CVA = (1 − Recovery) ∫_{t=0}^{T} EPE(t) · PD(t, t + dt)
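For intuition, here is a minimal sketch of how this integral is typically discretized on a simulation time grid; the grid, EPE profile, hazard rate, and discount curve below are hypothetical, and discounting is shown explicitly even though some setups embed it in the EPE profile.

```python
import numpy as np

# Hypothetical inputs: half-yearly grid, EPE profile from an exposure engine,
# a flat hazard rate implied from a CDS spread, and a flat 1% discount curve.
recovery = 0.40
times = np.arange(0.0, 5.5, 0.5)                  # 0, 0.5, ..., 5 years
epe = np.array([0.0, 0.8, 1.1, 1.3, 1.4, 1.5,
                1.4, 1.3, 1.1, 0.8, 0.5])         # EPE(t) at each grid point
hazard = 0.02
discount = np.exp(-0.01 * times)

# Survival probabilities and incremental default probabilities PD(t, t + dt).
survival = np.exp(-hazard * times)
pd_incr = survival[:-1] - survival[1:]

# Discretized CVA: (1 - R) times the sum over intervals of discounted EPE
# (taken at interval midpoints) weighted by the default probability.
epe_mid = 0.5 * (epe[:-1] + epe[1:])
df_mid = 0.5 * (discount[:-1] + discount[1:])
cva = (1.0 - recovery) * np.sum(df_mid * epe_mid * pd_incr)
print(f"CVA = {cva:.4f}")
```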
[Fig. 1 EPE profile for EUR-USD cross-currency swap (receive EUR, LHS axis) and USD interest rate swap, tenors 1y–10y]
[Table: calibrated default probabilities from CDS (40% recovery); columns: Tenor, CDS (bps), Survival prob, PD [t, t+1], and Forward EPE / EPE for the cross-currency (rec EUR) and interest rate (rec fixed) swaps]
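The survival probabilities and forward default probabilities in such a table can be approximated from CDS spreads with the common flat-hazard shortcut, hazard rate ≈ spread/(1 − recovery); the sketch below uses hypothetical spreads and is an approximation rather than the full bootstrap a production system would use.

```python
import numpy as np

recovery = 0.40
tenors = np.arange(1, 11)                                   # 1y ... 10y
cds_bps = np.array([60, 70, 80, 90, 100,
                    105, 110, 115, 120, 125])               # hypothetical spreads

# Flat-hazard approximation: hazard ~ spread / (1 - recovery).
hazard = (cds_bps / 1e4) / (1.0 - recovery)
survival = np.exp(-hazard * tenors)

# Incremental (forward) default probability over each one-year bucket.
pd_forward = np.empty_like(survival)
pd_forward[0] = 1.0 - survival[0]
pd_forward[1:] = survival[:-1] - survival[1:]

for t, s, q, p in zip(tenors, cds_bps, survival, pd_forward):
    print(f"{t:2d}y  {s:4d}bps  survival={q:.4f}  PD={p:.4f}")
```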
One common approach is for the CVA desk to interact with the other
market making desks by putting a contingent “make-whole” guarantee on
the MTM of the derivative to the underlying desk. The CVA desk charges
a fee to the market making desk at inception/unwind (which would be
passed through to the customer price) and assumes the counterparty risk.
The market making desk manages to standard CSA, and in the event of
default, the CPM desk would make them whole for the forfeiture of the
MTM.
One issue is how to allocate the individual CVA charges to unrelated desks, as diversification impacts come into play and thus the CVA desk's hedging cost will be less than the sum of the individual costs to the
LOBs. A natural corollary to this question is whether to price to the end
user incrementally or standalone. While the value to the firm may be closer
to the incremental one, it would be “off-market” in the sense that another
market participant would price as standalone.
spread volatility, not even a 100% correlation between the two would be suf-
ficient in a diffusion model to capture the real risk. In an event of default of
the sovereign, there is certain to be a large gap devaluation of the currency.
The basics of the Basel III advanced method correspond to two ten-day
99% VaRs over two different one-year periods, one of which is the current
one-year, and the second over a stress period. The VaR can be reduced by
hedging on credit (e.g. CDS) but not market risk hedges. Thus a com-
bined portfolio VaR of exposure + credit hedges is calculated. The charge
is then determined to be 3 × (VaR1 + VaR2), where the two VaRs correspond to those calibrated over the two periods (current and stress), respectively.
The volatility in the VaR charges is driven by volatility of spreads, not the
underlying. The rationale for this methodology has been criticized due
to its deviation from value based methods. There are some subtleties in
interpretation between US and European regulators on exemptions for
certain counterparties (sovereigns) from the charges.
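As a simple numerical illustration of the charge just described (the VaR figures are hypothetical):

```python
# Hypothetical ten-day 99% CVA VaR figures, in USD millions, for the combined
# portfolio of CVA exposure plus eligible credit hedges.
var_current = 12.0    # calibrated to the current one-year period
var_stressed = 30.0   # calibrated to the stressed one-year period

multiplier = 3.0
cva_capital_charge = multiplier * (var_current + var_stressed)
print(f"CVA capital charge = {cva_capital_charge:.0f}mm")   # 126mm
```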
There are two remaining issues. (1) What is the optimal set of hedges to
reduce this capital charge, and (2) what is the present value of the cost of
capital associated with taking on an incremental CVA VaR? The first issue is prob-
lematic because the optimal hedge from a capital charge perspective will not
align with an optimal hedge from a valuation perspective. This leaves a trade-
off between having open market risk (i.e. hedging the capital charge) or
reducing the market risk but drawing more capital requirements. Of course,
any open market risk which creates actual losses will flow to retained earn-
ings and thus capital. It is also unclear whether the Volcker rule might pre-
clude hedging of the capital charge. The second point brings up difficulties
such as the cost of capital and determining forward capital which is related
to some issues in the KVA (capital valuation adjustment) section below.
After the FVO election was made, it garnered much interest as banks
posted extremely large DVA gains in 2008–2009 (and again in 2011–2012)
due to their widening credit spreads which were then reversed when the
credit market settled.
DVA and Capital
While DVA is reported in earnings, it is excluded from regulatory capi-
tal. The rationale is that a firm’s capital should not be bolstered due to
the mark-down of its own liabilities due to deterioration in the market
perception of the firm’s credit worthiness. In January 2016, FASB issued
an accounting standards update which allowed for DVA from debt under
FVO to be treated as "other comprehensive income" instead of net income until realized. Derivative DVA will remain in net income. Both
types of DVA will continue to be excluded from capital until realized.
Hedging DVA
One of the theoretical objections to DVA is, “If it has value, then how do
you monetize that value?” Many researchers have suggested solutions to
this which include the relative value to debt versus equity holders of the
consolidated firm. Others suggest approaches involving dynamically trad-
ing in one’s own debt.
There have reportedly been instances where firms have traded in CDS
on a basket of similar names as a proxy for hedging their own DVA. This
brings up a number of other risks (e.g. jump to default) and given the
regulatory angst about exposure from one G-SIFI to another is unlikely to
be a viable strategy going forward. Monetizing or hedging DVA is indeed extremely difficult. Several research papers suggest issuing or buying back debt dynamically as a hedge to DVA, but a number of constraints make this difficult in practice.
[Diagram: cash equal to MTM (earning OIS) funded at OIS ± Y (true cost of funds)]
Some academics (e.g. Hull and White 2012) think that FVA does not
exist and should not be taken into account for pricing or valuation. While
accounting standards are not prescriptive, almost all major banks now
report FVA and have taken relatively large one-time write-downs when
the adjustment was first introduced. It continues to be hotly debated in
academia and the industry.
In the above example for instance, an academic argument is that the
funding cost reflected in the FTP rate is not a cost to the consolidated firm
because it reflects the value of the option that it has to default on its debt.
Paradox 1 The law of one price does not hold. Two different banks will
bid differently on the same uncollateralized derivative depending on their
cost of funds if they apply FVA. This violates the corporate finance prin-
ciple that funding is separate from valuation. Valuation should reflect the
price at which it would clear in the market. Under a theoretical FVA framework, then, there is no single true price.
Paradox 3 How could a bank ever make a loan to a client with better
credit than itself? If FVA is applied, then the value proposition would be
negative.
On the contrary:
Paradox 5 On the other hand, how should a bank's vanilla debt be carried? If no FVA is applied, then a straight DVA term would incorrectly account for the bond–CDS basis (i.e. the liability would be held on the books at a higher price than where it was issued or trades in the market).
• Since there is not one central hub (as with a CCP), it is natural that long/short exposures with different counterparties accumulate even though they net to zero. The requirement also applies to financial participants with material swap exposure and thus brings some customer trades into scope.
• It is quantitatively conservative (ten-day 99th percentile).
• Both counterparties must post for the same trade without
rehypothecation.
• Portfolio netting is partially restricted (FX rates, commodities, and
equities have separate netting sets for IA purposes even if they are in
the same master ISDA netting set for set-off purposes in bankruptcy).
Physically settled FX trades are likely exempt.
The set of eligible collateral for initial margin, for both CCP and bilateral margin, is generally broader than for variation margin and includes liquid securities, whereas typical variation margin is cash only. While it may be tempting to conclude
this is essentially costless if such pledgable securities already exist on a bank
balance sheet, it is a detriment to funding because the securities may no
longer be counted as liquidity in the form of unencumbered HQLA, nor
could they be used to generate actual liquidity via repo or outright sale. As
such, any use of securities for IA would represent an incremental cost due
to the replacement value of that liquidity.
LCH–CME Basis
While most MVA analysis remains speculative until the rules come into force, one
interesting phenomenon in the CCP space that demonstrates the market
impact of MVA is the basis between swaps cleared on the LCH versus the
CME. A structural imbalance has led to a large gap (up to 3–5bps) for one
CCP versus another.
• The cash flows on the two trades are identical (except amount posted
in IA to CCP)
• The total IA posted is risk based and increases as net open risk to
exchange grows
• Relative value clients (such as payers of swaps versus cleared treasury
futures) generally prefer the CME to the LCH due to the ability for
the client to net the future versus the cleared swap.
• As a result, when there is large paying interest from clients who want to clear through CME, dealers are reluctant to receive fixed versus CME, knowing they must hedge with another dealer on the LCH and end up posting IA to both exchanges.
[Fig. 4 Dealer 1's received-fixed trade flows after (mandatory) clearing: the client faces Dealer 1 at 2.23% and Dealer 1 faces Dealer 2 at 2.20%; after the give-up to CCP, Dealer 1 posts more IA to CME and LCH]
Example:
Time 0: Bank A faces an exempt customer. Bank A hedges with bank B in
interdealer market. Interbank hedge generates IM of $Y.
Time 1: Customer unwinds with bank A. If bank A exits the market risk versus bank B, then total IM reduces from $Y to zero. If bank A instead offsets the risk with bank C, then bank A now has total IM of $2Y because it is posting IM to both banks B and C (despite having no market risk).
A potentially even better alternative would be to offset the original
customer facing trade with another exempt customer thereby avoiding an
interdealer IM generating hedge.
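A minimal bookkeeping sketch of this example; the $Y amount and the simple dictionary bookkeeping are illustrative assumptions.

```python
# Bilateral IM is posted per counterparty relationship, and offsetting
# interdealer trades each require their own IM.  $Y is hypothetical.
Y = 100.0  # IM generated by one interdealer hedge, in USD millions

# Time 0: exempt customer trade (no IM), hedged with bank B.
im_time0 = {"bank B": Y}

# Time 1, option 1: bank A unwinds the hedge with bank B.
im_after_unwind = {}

# Time 1, option 2: bank A offsets the risk with a new trade facing bank C,
# posting IM to both bank B and bank C despite having no market risk.
im_after_offset = {"bank B": Y, "bank C": Y}

for label, book in [("time 0", im_time0),
                    ("unwind with bank B", im_after_unwind),
                    ("offset via bank C", im_after_offset)]:
    print(f"{label}: total IM = {sum(book.values()):.0f}mm")
```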
Conclusion
Most classical financial theory assumes frictionless market participants who could enter into leveraged self-financing arbitrage strategies to enforce pricing. Resources such as funding, capital, balance sheet, and default risk have now introduced material frictions which pose challenges not only for pre-trade pricing and risk management for derivative market makers but also for supply/demand factors of the underlying markets themselves. This space will con-
tinue to evolve as the rewards for understanding and optimizing these
scarce resources will be great.
References
1. Basel Committee on Banking Supervision, "Margin requirements for non-centrally cleared derivatives", September 2013.
2. Basel Committee on Banking Supervision, "Basel III: A global regulatory framework for more resilient banks and banking systems", December 2010 (rev June 2011).
3. Basel Committee on Banking Supervision, "Basel III counterparty credit risk and exposure to central counterparties – Frequently asked questions", December 2012.
4. Basel Committee on Banking Supervision, "Review of the credit valuation adjustment risk framework", July 2015.
5. Hull, J., and A. White. "Is FVA a cost for derivatives desks?", Risk, (2012).
6. Financial Accounting Standards Board. Statement of Financial Accounting Standards No. 157. Fair Value Measurements.
Liquidity Risk, Operational Risk and
Fair Lending Risk
Liquidity Risk
Larry Li
INTRODUCTION
Ever since the 2008 financial crisis, liquidity risk has been one of the great-
est concerns in the financial industry, both from individual firms’ points of
view and from within the evolving regulatory landscapes. Financial insti-
tutions have been evaluated not only to pass "Not Fail" standards but also to prove "Steadfast", in other words, to have the financial strength, liquidity, viable earnings, and so forth, to weather any storm that may come across
the horizon. Any systemic risk needs to be controlled and managed by not
only individual firms but by the financial industry as a whole. Therefore,
liquidity risk has to be understood from both the macro and micro level.
On a macro level, the Basel Committee on Banking Supervision
(BCBS), in January 2013,1 describes the liquidity issue as follows:
The views expressed in this document are the author’s and do not
necessarily reflect his current and previous employers’ opinions or
recommendations.
L. Li ()
JP Morgan, 245 Park Avenue, New York 10167, NY, USA
e-mail: [email protected]
The crisis drove home the importance of liquidity to the proper functioning
of financial markets and the banking sector. Prior to the crisis, asset mar-
kets were buoyant and funding was readily available at low cost. The rapid
reversal of market conditions illustrated how quickly liquidity can evaporate
and that illiquidity can last for an extended period of time. The banking
system came under severe stress, which necessitated central bank action to
support both the functioning of money markets and, in some cases, indi-
vidual institutions.
MOTIVATION
A financial institution’s approach to risk management covers a broad
spectrum of risk areas, such as credit, market, liquidity, model, structural
interest rate, principal, country, operational, fiduciary, and reputation risk.
After the 2008 financial crisis, liquidity risk has firmly established itself as
an important risk category, largely because many crises were attributable to liquidity risk that was not managed well under stress. As a result, there
is a liquidity risk component in many of the regulatory requirements since
2008.2 Consequently, the financial industry has been reshaped and is still
adapting to better address the liquidity risk concerns in both normal and
stressed scenarios.
The liquidity risk concerns can be seen from the following aspects.
(a) Improved risk management: benefit from risk-offsetting flow coming to different desks at different times.
(b) Improved execution cost: making all the available liquidity accessible to internal/external clients minimizes and thus improves their execution costs.
(c) Increased trading efficiency: a systematic approach to inventory management will result in more focus on client needs and reduce the loss ratio.
(d) Improved execution quality: trading can be centralized in the hands of fewer, better informed traders.
(e) Improved risk prices offered to clients.
At the current stage, across all these aspects, the potential challenges for liquidity risk management are listed below:
FRAMEWORK AND METHODOLOGY
This section is intended to shed light on the overall framework and
methodology on liquidity risk management as well as to provide further
discussions on various relevant areas such as stress framework, liquidity
coverage ratio (LCR), wholesale deposit, operational excess methodol-
ogy, and liquidity management for commercial banks. It will become obvi-
ous that since there are so many areas where liquidity risk/consideration
fits in, there is really no one universal solution to all liquidity questions.
Therefore, liquidity risk management is as much an overarching frame-
work under which many solutions are needed as a set of processes embedded in the various relevant areas, interconnected for the most part but
distinct nevertheless.
The following subsections cover the relevant six areas.
Fig. 1 Overlap in Firmwide Coverage
4. The ratio of LAB to net liquidity outflows is compared against the current limit level for the 90-day stress horizon, which a financial institution sets at a certain level (such as 100%).
(ii) AUC-based:
[Diagram: managing receivables, capital, strategic advice, managing liquidity]
DISCUSSIONS
One important point to illustrate in this section is that liquidity risk manage-
ment is closely connected with other aspects of risk management. In what
follows I make use of some examples to demonstrate this point by connect-
ing liquidity risk management to model risk management, in particular.
First of all, there are a number of models in the official model inven-
tory of a financial institution that may be used for modeling liquidity and
assessing liquidity risk. As a result, the model risk management policies,
procedures, processes, and governance would be directly relevant for
liquidity risk management through these models and vice versa.
Secondly, because of all the focus on liquidity and stress testing and the subsequent regulatory requirements, model risk management standards have been raised in the past few years, so that more and more institutions conduct stress testing as part of model development testing and model validation testing, at least for models used for valuation and risk management purposes. Those stress tests are designed to make sure
on a micro level that individual models work well under stress scenarios
and in liquidity crises. A case in point is the copula model originally proposed by David Li, which many blame for contributing to the 2008 financial crisis: the model's correlation assumptions broke down during the crisis, leading to valuation failures for many credit derivatives positions. Current model risk management practice would include thorough stress testing of the correlation assumptions, of how the model behaves under market and liquidity stress scenarios, and of how the risk can be mitigated via limit controls, valuation reserves, and so forth.
The above connection between liquidity risk management and model
risk management can also be said to be between liquidity risk management
and other aspects of risk management, such as market risk management
and credit risk management.
CURRENT DEVELOPMENT
Liquidity risk management has become more and more important as regulations have tightened around liquidity management as a result of the 2008 financial crisis. This is also evident as more people are hired and deployed6 to build out the liquidity risk management infrastructure across various financial institutions in the industry. Nowadays, liquidity risk management has commonly become a
distinct section, together with enterprise-wide risk management, credit
risk management, market risk management, country risk management,
model risk management, principal risk management, operational risk man-
agement, legal risk management and compliance risk management, fidu-
ciary risk management, reputation risk management, capital management,
and so on, under the management’s discussion and analysis in some major
financial institutions’ annual reports.
CONCLUSIONS
To conclude, we come back to those challenges for liquidity risk manage-
ment listed in the early sections and offer some brief final thoughts:
How to adapt to ever-changing regulatory environments with all the
interconnections and overlapping requirements?
A financial institution should set up a firm-wide resource/expertise
center on all relevant regulations to provide regulation support to various
operations of the institution (with an SME (subject matter expert) on each
relevant regulation).
How to come up with a firm-wide consistent approach to address so many liquidity components and challenges effectively?
A financial institution should set up a firm-wide liquidity risk oversight/
committee/framework to manage this firm-wide consistent approach/
effort.
How to manage resources efficiently so as not only to address issues related to liquidity risk as they arise, but also to proactively address their root causes and avoid further emerging issues?
A financial institution should set up a closely working three-lines of
defense system to identify issues, resolve issues, validate issues, and track
issues in the firm-wide system, with business being the first line of defense,
independent risk management being the second line of defense, and inter-
nal auditing being the third line of defense.
How to ensure the sustainability of the issue resolutions and the related
BAU processes?
A financial institution should not only identify, resolve, validate, and
track issues related to liquidity risk, but address the sustainability of the
issue resolutions throughout the process.
NOTES
1. Basel III: The Liquidity Coverage Ratio and liquidity risk monitoring tools,
Bank For International Settlements, January 2013. (Public Website Link:
http://www.bis.org/bcbs/publ/bcbs238.pdf).
2. See, for instance, Basel III: International framework for liquidity risk mea-
surement, standards and monitoring, December 2010. (Public Website
Link: http://www.bis.org/bcbs/publ/bcbs188.pdf); Basel III: The
Liquidity Coverage Ratio and liquidity risk monitoring tools, Bank For
International Settlements, January 2013. (Public Website Link: http://
www.bis.org/bcbs/publ/bcbs238.pdf); Implementation of Basel Standards:
A report to G20 Leaders on implementation of the Basel III regulatory
reforms, Bank For International Settlements, November 2014;
Implementation of Basel Standards: A report to G20 Leaders on implemen-
tation of the Basel III regulatory reforms, Bank For International
Settlements, Revision – November 2015. (Public Website Link: http://
www.bis.org/bcbs/publ/d345.pdf).
3. CCAR stands for the comprehensive capital analysis and review, which is the
Federal Reserve's primary supervisory mechanism for assessing the capital
adequacy of large, complex BHCs (bank holding companies).
4. VaR (value-at-risk) is a widely used risk measure of the risk of loss on a spe-
cific portfolio of financial exposures. SVaR (stressed value-at-risk) is a widely
used risk measure of the risk of loss on a specific portfolio of financial expo-
sures under stress.
5. See Basel III: International framework for liquidity risk measurement, stan-
dards and monitoring, December 2010. (Public Website Link: http://
www.bis.org/bcbs/publ/bcbs188.pdf).
6. It is observed that major consulting firms, such as the Big 4 firms, have extensive practice and services in the area of liquidity risk management, in
conjunction with other areas such as market risk management, credit risk
management, and model risk management.
REFERENCES
1. Liquidity Black Holes—Understanding, Quantifying and Managing Financial
Liquidity Risk, Edited by Avinash D. Persaud.
2. Implementation of Basel Standards: A Report to G20 Leaders on
Implementation of the Basel III Regulatory Reforms, Bank for International
Settlements, Revision—November 2015. (Public Website Link: http://www.
bis.org/bcbs/publ/d345.pdf).
3. Implementation of Basel Standards: A Report to G20 Leaders on
Implementation of the Basel III Regulatory Reforms, Bank for International
Settlements, November 2014. (Public Website Link: http://www.bis.org/
bcbs/publ/d299.pdf).
4. Basel III: The Liquidity Coverage Ratio and Liquidity Risk Monitoring Tools,
Bank for International Settlements, January 2013. (Public Website Link:
http://www.bis.org/bcbs/publ/bcbs238.pdf).
5. Basel III: International Framework for Liquidity Risk Measurement, Standards
and Monitoring, December 2010. (Public Website Link: http://www.bis.
org/bcbs/publ/bcbs188.pdf).
6. JPMC-2014-AnnualReport, 2015.
7. JPMC-2013-AnnualReport, 2014.
8. JPMC-2012-AnnualReport, 2013.
9. JPMC-2011-AnnualReport, 2012.
10. Panel Discussion: The Risk of Liquidity Black Holes, Axioma Quant Forum,
2015.
11. 2010 Flash Crash, Wikipedia.
Operational Risk Management
Todd Pleune
INTRODUCTION
Operational risk is simply defined as the risk of loss resulting from inad-
equate or failed processes, people, and systems, or from external events.
This simple definition belies the complexity of OpRisk, which includes most risks that are not caused by credit risk (the risk of default on a debt) or market risk (the risk that the value of an investment will fall due to market factors). OpRisk includes legal risk but excludes strategic and
reputational risk. However, most reputational events are actually caused
by operational risk losses.
Examples of operational risk include the following:
The views expressed in this document are the author’s and do not reflect his
current or previous employers’ opinion or recommendations.
T. Pleune ()
Protiviti, Inc., 101 North Wacker Drive, Suite 1400, Chicago, IL 60606, USA
• internal fraud;
• external fraud;
• employment practices and workplace safety;
• clients, products, and business practices;
• damage to physical assets;
• business disruption and system failures;
• execution, delivery, and process management.
MOTIVATION
The first major framework for operational risk measurement was part of
“Basel II: International Convergence of Capital Measurement and Capital
Standards: a Revised Framework”, issued in June 2004. Basel II [4] pro-
vided three methods for estimating capital to be held for operational risk,
the basic indicator approach (BIA), the standardized approach (SA), and
advanced measurement approaches (AMA). The capital estimation section
below describes these approaches including a detailed review of AMAs
currently used for capital estimation. In 2014, the BCBS [1] proposed an
updated approach to replace the BIA and SA, namely, the revised stan-
dardized approach (RSA).1 On March 4, 2016, the Basel Committee on
Banking Supervision (BCBS) proposed a new Standardized Measurement
Approach (SMA) for operational risk. This method is proposed to replace
the three previous approaches for operational risk capital, the BIA, SA,
and AMA. Due to the timing of this book, the current approaches are
described as well as this new SMA approach.2
Since the 2008 financial crisis there have been significant updates to
the practice of operational risk management. In addition to the signifi-
cant changes coming from the BCBS, the heightened standards for large banks (OCC, September 2, 2014) [10] have raised the expectations for operational risk management at large banks. Furthermore, the emergence of stress
FRAMEWORK AND METHODOLOGY
An operational risk framework must cover not only operational risk mea-
surement, but also operational risk management. The following goals
must be considered in an operational risk management framework:
The four data sources required for operational risk management and
measurement are internal loss data (ILD), external loss data (ELD), sce-
nario analysis (SA),3 and business environment and internal control factors
(BEICFs) [4].
Scenario Analysis
Scenario analysis of expert opinions is used to plan for events that have
not occurred and to apply conservatism to operational risk estimation for
capital and stress testing. Data collected via the scenario analysis process
can be used both quantitatively and qualitatively to enhance risk measure-
ment and management. Scenario analysis is typically conducted in work-
shops where risk managers and business unit leaders come together to
evaluate past events at their bank and others to derive reasonable assess-
ments of plausible severe losses, developing estimates for both the likeli-
hood and impact. The process should be tailored to each business unit
and consider the bank’s risk appetite, culture, and risk management frame-
work. Scenario analysis may consider potential losses arising from multiple
simultaneous operational risk loss events.
Scenario analysis is often used to adjust models based on ILD and
ELD. It can also be used directly to estimate potential OpRisk losses when
actual data is unavailable. These approaches are susceptible to bias and
should be conservatively developed and carefully benchmarked.
Benefits
While the AMA is slated for replacement, the models developed for Basel
II compliance will likely continue to be used for operational risk manage-
ment. This is because methods used to track and measure operational risk
allow banks to report on their operational risk relative to their risk appe-
tite. The LDA approaches combine internal loss event data with external
loss event data, scenario analysis, and business environment and internal
controls information to understand and explain how the overall opera-
tional risk at the bank changes over time. The SMA is expected to be a
more direct measure that will be more useful for capital estimation but
will be supported by other information for operational risk management.
Challenges
While the AMA is founded in highly developed techniques for estimating value at risk (VaR), and the LDA commonly used is a mature technique for estimating loss that has been successfully deployed in insurance for many years [11], this approach does not always lend itself to robust estimation of operational risk losses (see Embrechts et al. [6]). One significant
limitation to the robustness is the expectation that the capital estimate
uses the 99.9% VaR. It is difficult for any model to be stable at this high
confidence level, but given the data scarcity and uncertainty in operational
risk, the 99.9% level is especially variable. Also, while frequency estimation
has coalesced around the use of the Poisson distribution and the similar
[Figure: aggregate loss distribution showing the expected loss, the unexpected loss, the 99.9% VaR quantile, and the tail loss]
expected loss tends to fall beyond the most likely loss. The unexpected loss
is the difference between the expected loss and some high VaR quantile—
in this case 99.9%. Beyond the 99.9% VaR quantile is the tail loss.
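To make the expected loss, 99.9% VaR quantile, and unexpected loss concrete, the following is a minimal Monte Carlo sketch of an aggregate loss distribution built from a Poisson frequency and a lognormal severity; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000

# Hypothetical annual frequency and severity parameters.
lam = 25.0              # Poisson frequency of loss events per year
mu, sigma = 10.0, 2.0   # lognormal severity parameters

annual_losses = np.empty(n_sims)
for i in range(n_sims):
    n_events = rng.poisson(lam)
    annual_losses[i] = rng.lognormal(mu, sigma, size=n_events).sum()

expected_loss = annual_losses.mean()
var_999 = np.quantile(annual_losses, 0.999)
unexpected_loss = var_999 - expected_loss
print(f"EL = {expected_loss:,.0f}   99.9% VaR = {var_999:,.0f}   UL = {unexpected_loss:,.0f}")
```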
case. A 95% or 98% loss can represent a severe adverse loss scenario. These
confidence levels do not have the same degree of variation as the 99.9
percentile used for capital estimation; however, there are other challenges
with this approach. A key one is that the percentiles must be selected
using expert judgment, which can limit the robustness of this modeling
approach. Another challenge is that the LDA model may be optimized to
estimate tail losses, especially the 99.9% confidence level. Lower quantiles
may be less accurate due to truncated loss data collection and fitting tech-
niques that focus on the tail.
A hybrid technique uses both the LDA approach and regression
approaches. Here the frequency is regressed against macroeconomic vari-
ables, so that the mean frequency varies with the macroeconomic environ-
ment, but instead of using a simple average for the severity of the losses,
the frequency and severity distributions are combined using macroeco-
nomic analysis. Typically, the frequency does not change enough for a
robust stress test with this type of model. In this case, increasing percen-
tiles may still be used similar to the LDA-based approach.
A key difference between the two main approaches outlined for OpRisk
stress testing is that in the regression approach the macroeconomic variables
are explicitly linked via regression analysis. In the LDA-based approach,
the impact of changes in the economy must be reflected in the selected
percentiles. While the variation in the Monte Carlo analysis includes most potential future states of the economy, the selection of which state applies is subjective in the LDA-based approach. A challenge for both approaches is that they are difficult to implement without significant loss data. The regression approach requires a long time series of losses, preferably across different states of the economy. The LDA-based approach requires sig-
nificant data to estimate severity and frequency separately.
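As an illustration of the regression-based linkage, the sketch below regresses hypothetical loss-event counts on a single macroeconomic driver with a Poisson GLM (using statsmodels) and recombines the stressed frequency with a lognormal severity; the data, driver, and parameters are all assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical history: quarterly unemployment rate and loss-event counts.
unemployment = rng.uniform(4.0, 10.0, size=40)
counts = rng.poisson(np.exp(0.5 + 0.2 * unemployment))

# Frequency model: Poisson regression of counts on the macro driver.
X = sm.add_constant(unemployment)
freq_model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# Stressed frequency under a hypothetical adverse scenario (12% unemployment),
# recombined with a lognormal severity via Monte Carlo.
stressed_lambda = float(freq_model.predict(np.array([[1.0, 12.0]]))[0])
sim_losses = [rng.lognormal(10.0, 1.5, rng.poisson(stressed_lambda)).sum()
              for _ in range(20_000)]
print(f"stressed quarterly loss, 95th percentile = {np.quantile(sim_losses, 0.95):,.0f}")
```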
CONCLUSIONS
This chapter has described the latest methods for operational risk manage-
ment and measurement. As an operational risk practitioner or a senior risk
manager with operational risk in your sphere of influence, what activities
should you already be undertaking for robust operational risk manage-
ment? Here are two crucial components in operational risk management.
1. Loss tracking: it is critical that operational risk losses be tracked.
While the largest banks have been tracking operational risk loss
events for many years to support operational risk modeling for the
AMA for Basel II, banks of all sizes should endeavor to distinguish
operational risk losses from other debits in the general ledger [8]. A
loss event database will enable enhanced operational risk manage-
ment, by allowing the impact of risk control processes to be moni-
tored over time and also allowing for enhanced modeling for stress
testing and other risk measurement activities.
2. Scenario Analysis: scenario analysis is one of the best ways to convert
business leaders’ understanding of operational risk into useful met-
rics that can enhance stress testing and capital management activities
and help find areas where enhancing risk management activities for
operational risk will be most valuable [5].
NOTES
1. See BCBS, “Operational risk—Revisions to the simpler approaches—con-
sultative document”, October 2014.
2. See BCBS, “Standardised Measurement Approach for operational risk”,
March 2016.
3. Because scenario analysis and the standardized approach share the same abbreviation (SA), we avoid using the abbreviation except immediately following its definition.
REFERENCES
1. BCBS, “Operational Risk—Revisions to the Simpler Approaches—Consultative
Document”, October 2014.
2. BCBS, “Standardised Measurement Approach for operational risk”, March
2016.
3. Basel Committee on Banking Supervision, “Principles for the Sound
Management of Operational Risk”, June 2011.
4. The Basel Accord, “International Convergence of Capital Measurement and
Capital Standards: A Revised Framework”, Updated November 2005.
5. AMA Group, Industry Position Paper: "Scenario Analysis—Perspectives & Principles", The Risk Management Association, 1 December 2011.
6. Embrechts, P., Furrer, H., Kaufmann, R. Quantifying Regulatory Capital for
Operational Risk, Derivatives Use, Trading & Regulation 9(3), 217–233.
7. Michael Power, The Invention of Operational Risk, Discussion Paper 16, June
2003.
8. Department of the Treasury Office of the Comptroller of the Currency 12 CFR
Parts 30 and 170 [Docket ID OCC-2014-001] RIN 1557-AD78 OCC
Guidelines Establishing Heightened Standards for Certain Large Insured
National Banks, Insured Federal Savings Associations, and Insured Federal
Branches; Integration of Regulations, September 2, 2014.
9. Philip E. Heckman and Glenn G. Meyers, "The Calculation of Aggregate Loss Distributions from Claim Severity and Claim Count Distributions", Proceedings of the Casualty Actuarial Society LXX, 1983; the exhibits associated with the paper appear in the subsequent volume of the Proceedings (PCAS LXXI, 1984).
Fair Lending Monitoring Models
Maia Berkane
Introduction
Consumer lending groups at commercial banks or other financial
institutions, as their name indicates, are in charge of providing credit to
consumers for the purpose of acquiring residential mortgages, credit cards,
auto loans, student loans, and small business loans, among many other
products. The underwriting and structuring of these loans are heavily
dependent on the consumer credit worthiness, the type and value of the
collateral, and the competitiveness of the market for the products. Loan
officers, in general, have guides and policies they need to follow when
reviewing applications for credit. Based on the applicant credit attributes
and the collateral characteristics, the underwriting policies will dictate if the
loan should be approved and the pricing policies will dictate how the loan
should be priced. Whether the process is completely automatic or partly
discretionary, there may be opportunities for fair lending risk to occur and
the institutions need to have controls in place to monitor compliance with
the fair lending rules and take corrective actions in case of breach.
The views expressed in this document are the author’s and do not reflect Wells
Fargo’s opinion or recommendations.
M. Berkane (*)
Wells Fargo & Co., South College Street, Charlotte, NC 28202, USA
e-mail: [email protected]
Motivation
Fair lending risk manifests itself in ways described as:
• Disparate Impact: the Office of the Comptroller of the Currency
(OCC)’s Fair Lending Handbook states,
Disparate impact has been referred to more commonly by the OCC as “dis-
proportionate adverse impact”. It is also referred to as the “effects test”.
• Disparate Treatment: the OCC’s Fair Lending Handbook states,
Fair lending risk monitoring involves gathering all the information and
transactions data after the fact and performing detective, not predictive
modeling. Each period involves a different set of data, different set of human
behaviors, market behaviors and decisions. Monitoring the compliance of
a given group with the ECOA and FHA rules can be a real challenge from
a quantitative perspective, especially when discretion is allowed in decision
and pricing. The Consumer Financial Protection Bureau (CFPB) conducts
targeted Equal Credit Opportunity Act (ECOA) reviews at institutions in
order to identify and evaluate areas of heightened fair lending risk. These
reviews generally focus on a specific line of business, such as mortgage,
credit cards, or automobile finance. They typically include a statistical anal-
ysis and, in some cases, a loan file review that assesses an institution’s com-
pliance with ECOA and its implementing regulation, Regulation B, within
the selected business line. CFPB does not disclose the statistical model used
and only gives some guidelines on how to conduct the fair lending analy-
sis. Institutions are left guessing and trying to replicate the CFPB model.
The most common “guess” models are the multivariate linear regression
for pricing and the multivariate logistic regression for underwriting. We
discuss the danger of using these models in the fair lending monitoring
context and provide a more appropriate alternative.
Framework and Methodology
Fair lending risk arises when a prohibited basis is harmed by policies or
treatment of the lending institution. The definition of a prohibited basis
under ECOA is any of
1. Race or color;
2. Religion;
3. National origin;
4. Sex;
5. Marital status;
6. Age;
7. The applicant’s receipt of income derived from any public assistance;
8. The applicant’s exercise, in good faith, of any right under the
Consumer Credit Protection Act.
1. Race or color
2. National origin
3. Religion
4. Sex
5. Familial status (defined as children under the age of eighteen living
with a parent or legal custodian, pregnant women, and people secur-
ing custody of children under eighteen)
6. Handicap
When it comes to race and gender, there are two major situations to
consider:
1. Race and gender of the applicant are recorded, such as for Home
Mortgage Disclosure Act (HMDA) data. In that case, race and
gender classification variables can be included in the model. The
common practice is to use a factor for the race and another for
gender and specify the reference as white for race and male for
gender. Specification of race and gender for joint applications can
be confusing and the reference class has to be defined accord-
ingly. For instance, mortgage applications are joint applications
in most cases and for homogeneous race couples, it is simple to
define the race of the joint application, but for mixed race couples,
Traditional Model
Suppose Y is the outcome of interest, X is the set of credit and collateral
attributes of the loan. Let T be an indicator variable equal to 1 if the appli-
cant is from a protected class and 0 if from a base class. This dichotomous
treatment forms two populations P1 and P2. In P1 and P2, the joint dis-
tribution of X and Y may differ from P1 to P2. The conditional expecta-
tion of Y given a value of X is called the response surface of Y at X = x,
which we denote by Ri(x), i = 1, 2. The difference in response surfaces
at X = x is
τ(x) = R1(x) − R2(x).   (1)
ATT is the average treatment effect for the treated. The main prob-
lem is that for each unit i, we only observe either Yi(1) or Yi(0) but not
both. The idea is, given Yi(1), to derive and estimate Yi(0), say Ŷi(0),
and then obtain an estimate of ATT. This is associated with the causal
analysis (Rubin 2005) as it aims at answering the question: “What kind of
outcome would we have observed had the treated been from the control
group?” In the fair lending framework, despite the fact that protected
class (treated) is not something we can manipulate, for each loan decision
involving a protected class applicant, we can imagine the counter-factual
loan decision involving a base class (control) applicant with exactly the
same characteristics.
Let
ATT = E(τ(X)).   (4)
This is the average over τ(x). Because X can be very granular, the above
calculation of ATT becomes impossible in practice. Rosenbaum (1983)
proposed the propensity score, which is the probability of assignment to
treatment, given the covariates:
p(x) = P(T = 1 | X = x).   (5)
ATT = (1/N) Σ_{i=1}^{N} [TiYi/p(xi) − (1 − Ti)Yi/(1 − p(xi))].   (6)
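A minimal sketch of this estimator with an estimated propensity score; the simulated data and the use of scikit-learn's logistic regression for p(x) are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical covariates, confounded treatment assignment, and an outcome
# with a true treatment effect of 2.
X = rng.normal(size=(n, 2))
p_true = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))
T = rng.binomial(1, p_true)
Y = 2.0 * T + X[:, 0] + X[:, 1] + rng.normal(size=n)

# Estimate the propensity score p(x) = P(T = 1 | X = x).
p_hat = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

# Inverse-probability-weighted estimator as in Eq. (6).
att = np.mean(T * Y / p_hat - (1 - T) * Y / (1 - p_hat))
print(f"estimated treatment effect = {att:.3f}")   # close to 2
```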
Model Theory
In the fair lending framework, the treatment group is the protected class
and the control group is the base class—we will be using these terms inter-
changeably. Propensity score methods consist of matching observations
that are the same for all the covariates except for the treatment indicator.
The propensity score is a weight w(x) of the distribution f(x) of the con-
trol group so that it becomes identical to the distribution of the treatment
group, in other words,
w(x) = K f(protected | x)/(1 − f(protected | x)),   (8)
where K is a constant, and
ATT = Σ_{i=1}^{N} TiYi/Σ_{i=1}^{N} Ti − Σ_{i=1}^{N} (1 − Ti)w(xi)Yi/Σ_{i=1}^{N} (1 − Ti)w(xi).   (9)
We can estimate Yi(0) from the units of the control group with the
closest propensity score to Yi(1); however, because these two scores are not
identical and because an estimate of the propensity score, rather than the
true one, is used, there may be remaining imbalance in the confounders.
Rosenbaum (1984) showed that stratifying on the propensity score
removes most of the remaining bias in the confounders. Practitioners have
been using 5–10 strata, based on the distribution of the propensity score
and the size of the data. Observations within strata are then weighted by
the number treated within strata over the total number treated in the data.
Assumptions
Two strong assumptions are the basis for causal inference in observational
studies. Given the outcome Y, the vector of covariates X and the treat-
ment T, we need:
Eb(X)[E(Y1 | b(X), T = 1) − E(Y0 | b(X), T = 0)]
= Eb(X)[E(Y1 | b(X)) − E(Y0 | b(X))] = E(Y1 − Y0).   (10)
Model Testing
In order to judge the performance of the model, we designed a Monte
Carlo simulation where we set the treatment effect and then estimate it
according to the methods described in this document.
Y = 2T + 2X1 + 2X2 + exp(X3) + ε
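A sketch of one replication of such a simulation under this data-generating process; the treatment-assignment mechanism below is an assumption (it is not fully specified here), and the weighting step follows the odds-weighting idea of Eqs. (8) and (9).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2_000

# Covariates and a confounded (non-random) treatment assignment -- assumed form.
X1, X2, X3 = rng.normal(size=(3, n))
p_treat = 1.0 / (1.0 + np.exp(-(0.8 * X1 - 0.5 * X2 + 0.3 * X3)))
T = rng.binomial(1, p_treat)

# Outcome with a known treatment effect of 2, as in the design above.
Y = 2.0 * T + 2.0 * X1 + 2.0 * X2 + np.exp(X3) + rng.normal(size=n)

# Naive difference in means is biased; odds-weighting the controls recovers ~2.
naive = Y[T == 1].mean() - Y[T == 0].mean()
X = np.column_stack([X1, X2, X3])
p_hat = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]
w = p_hat / (1.0 - p_hat)                       # weights for the control group
att = Y[T == 1].mean() - np.average(Y[T == 0], weights=w[T == 0])
print(f"naive = {naive:.2f}, weighted ATT = {att:.2f}")
```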
Sensitivity Analysis
In randomized experiments, bias due to missing covariate is very small
since randomization minimizes this effect by assuring that data points
are exchangeable with respect to the treatment assignment, because the
probability of treatment is the same for treatment and control groups.
In observational studies, this bias may be substantial and can change the
result of the analysis. Sensitivity analysis measures how robust the findings
are to hidden bias due to unobserved confounders. A sensitivity analysis
asks: how would inferences about treatment effects be altered by hidden
biases of various magnitudes? Rosenbaum (2002) developed sensitivity
[Figure: histograms of the estimated treatment effect (count vs. TreatEffect), three panels]
Table 3 Sensitivity analysis

Gamma   Lower bound   Upper bound
1       0             0
1.5     0             0
2       0             0
2.5     0             0
3       0             0
3.5     0             0
4       0             0
4.5     0             0
5       0             0.0003
5.5     0             0.002
6       0             0.0083
6.5     0             0.0253
analysis tests for matched data that rely on some parameter Γ measuring
the degree of departure from random assignment of treatment. In a ran-
domized experiment, randomization of the treatment ensures that Γ = 1.
Assume that the odds of allocation to treatment for unit i are given by exp(βxi + γui), where ui is an unobserved covariate. For two units i and j matched on the observed covariates (xi = xj),
Pi(1 − Pj)/[Pj(1 − Pi)] = exp(βxi + γui)/exp(βxj + γuj) = exp{γ(ui − uj)}.   (13)
The degree of departure from randomization, Γ = exp(γ), then bounds this odds ratio:
1/exp(γ) ≤ Pi(1 − Pj)/[Pj(1 − Pi)] ≤ exp(γ).   (14)
Under this model, the bounding distribution of the Wilcoxon signed-rank statistic V+ over S matched pairs has
E(V+) = p+S(S + 1)/2   (15)
and
var(V+) = p+(1 − p+)S(S + 1)(2S + 1)/6,   (16)
where p+ = Γ/(1 + Γ) may be interpreted as the probability of being in the treatment group. When it is equal to 0.5, we have random allocation; when it is larger than 0.5, there is a higher allocation to treatment than to control. For p− = 1/(Γ + 1) we obtain the same form for the expectation and variance, with p+ replaced by p−. Note that the Wilcoxon statistic V+ is the sum of the ranks of the pairs in which the treatment is larger (in absolute value) than the control. Under randomization, we would expect this sum to be (1/2)(S(S + 1)/2), half the sum of the ranks from 1 to S (S(S + 1)/2 being the standard formula for the sum of the first S numbers).
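A minimal sketch of computing these p-value bounds with the normal approximation; the observed statistic and the number of matched pairs are hypothetical.

```python
from math import sqrt
from scipy.stats import norm

def rosenbaum_pvalue_bounds(V, S, gamma):
    """One-sided p-value bounds for the Wilcoxon signed-rank statistic V
    over S matched pairs, at sensitivity parameter gamma."""
    def p_value(p):
        mean = p * S * (S + 1) / 2.0
        var = p * (1.0 - p) * S * (S + 1) * (2 * S + 1) / 6.0
        return norm.sf((V - mean) / sqrt(var))
    upper = p_value(gamma / (1.0 + gamma))   # allocation probability p+
    lower = p_value(1.0 / (1.0 + gamma))     # allocation probability p-
    return lower, upper

# Hypothetical example: 200 matched pairs, observed statistic V = 13,000.
for g in (1.0, 2.0, 4.0, 6.0):
    lo, hi = rosenbaum_pvalue_bounds(13_000, 200, g)
    print(f"gamma = {g}: p-value in [{lo:.4f}, {hi:.4f}]")
```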
We now use this technique to test the sensitivity of our previously stated results to an unobserved covariate.
Conclusion
I present an overview of the complications in modeling fair lending data,
due to biases arising from observational studies. I present efficient meth-
ods to remove some of the biases and test for the effect of bias due to miss-
ing important covariates. A simple Monte Carlo simulation is performed
to show the performance of three matching methods. These methods have
been used extensively in epidemiology, criminology, political science, and
many other areas where observational study data is analyzed. As shown
in this chapter, these methods are also useful in standard fair lending
modeling.
References
1. Fair Lending Comptroller’s Handbook, January 2010, p. 8: http://www.occ.
gov/publications/publications-by-type/comptrollers-handbook/Fair%20
Lending%20Handbook.pdf
2. M.N. Eliott, A. Fremont, P.A. Morrison, P. Pantoja, and N. Lurie, A new method for estimating race/ethnicity and associated disparities where administrative records lack self-reported race/ethnicity. Health Services Research, 43:1722–1736, 2008, Wiley Online Library.
3. Paul R. Rosenbaum and Donald B. Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70:41–55, 1983.
4. Paul R. Rosenbaum and Donald B. Rubin. Reducing bias in observational studies using sub-classification on the propensity score. Journal of the American Statistical Association, 79(387):516–524, 1984.
5. Donald B. Rubin. Estimating causal effects from large data sets using propensity scores. Annals of Internal Medicine, 127(8 Part 2):757–763, 1997.
6. Donald B. Rubin. Causal inference using potential outcomes. Journal of the American Statistical Association, 100(469), 2005.
Model Risk Management
Caveat Numerus: How Business Leaders
Can Make Quantitative Models More Useful
Jeffrey R. Gerlach and James B. Oldroyd
The Problem
Complex quantitative models are increasingly used to help businesses
answer a wide array of questions. However, technically sophisticated,
mathematically driven models all too often overwhelm leaders’ abilities
to understand and manage models. A critical problem is the bifurcation
of business experience and quantitative modeling skills: Leaders with vast
industry experience and wisdom often do not understand the models
while quantitative modelers often have too little industry experience to
know where the models might fail. As a result of the separation of wis-
dom and analytics, poor quantitative models are frequently adopted and
disastrous consequences follow.
The views expressed in this presentation are solely those of the writers. They do
not necessarily reflect the views of the Federal Reserve Bank of Richmond or the
Federal Reserve System.
The most direct solution is to hire people with both skills. Unfortunately,
there are relatively few individuals who have the industry experience and
soft skills required to be a successful manager, and the mathematical skills
necessary to be a quant. In consequence, organizations need to develop
the capability to link experience with quantitative models, but often they
are ill-equipped to deal with this pressing need. The typical leader does
not understand the mathematics, assumptions, and jargon used by quants,
and fails to critically analyze how models work. And the developers of
complex, quantitative models have incentives NOT to make their work
transparent as their pay, status, and jobs are dependent on their ability to
develop and control the models they create.
For example, one of the most influential and controversial models in
recent years is David X. Li’s model of credit default correlation,1 which
quants used routinely to price securities that were at the heart of the finan-
cial crisis.2 The model makes several crucial assumptions that differ consid-
erably from the real world of financial markets. However, it is sufficiently
complex that most of Wall Street could not understand it well enough to
assess its strengths and weaknesses. In the model, one of the key equations
for calculating credit default probabilities is:
Pr[TA < 1, TB < 1] = Φ₂(Φ⁻¹(FA(1)), Φ⁻¹(FB(1)), γ).
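For readers who want to see the formula in action, here is a minimal sketch that evaluates this joint default probability with a bivariate normal (Gaussian copula) in scipy; the marginal one-year default probabilities and the correlation γ are hypothetical.

```python
from scipy.stats import norm, multivariate_normal

# Hypothetical one-year marginal default probabilities and copula correlation.
F_A, F_B, gamma = 0.02, 0.03, 0.3

# Map the marginals to the normal scale and evaluate the bivariate normal CDF.
a, b = norm.ppf(F_A), norm.ppf(F_B)
cov = [[1.0, gamma], [gamma, 1.0]]
joint_pd = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([a, b])
print(f"joint one-year default probability = {joint_pd:.5f}")
```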
The Solution
A key task of business leaders is to bridge the gap between the quants
and those with the hands-on experience necessary to assess the useful-
ness of quantitative models. An all-too-frequent strategy of ignoring
quantitative models will not bode well for current and future business
leaders. Quantitative models are here to stay and are often becoming a
requirement for business. For instance, the Federal Reserve’s stress testing
program for large banks requires the banks to use quantitative models to
analyze their financial condition under several macroeconomic scenarios.3
Our solution to organizations’ pressing need to both create and
understand quantitative models is not to make all decision-makers expert
modelers. Instead, we recommend that leaders learn a technique that will
allow them to determine how models work, assess their strengths and
weaknesses, and put themselves in a position to determine if the models
are useful. To help leaders in developing these skills, we provide a frame-
work of five simple questions that allow leaders not only to understand but also to question and improve the quantitative analytics within
their firms. In short, our goal is to help “normal” leaders who do not have
PhDs in technical fields to relate their experience to the empirical wizardry
that often surrounds them. The five questions are not designed to help
leaders develop an understanding of the technical details of any specific
model, but rather understand and critically evaluate the output from their
firm’s models. Although models are sometimes extraordinarily complex,
the core way in which they function is most often relatively simple.
The model we analyze in the first case study predicted minimal risk to the
US government in guaranteeing the debt of two government-sponsored
enterprises (GSEs), Fannie Mae and Freddie Mac.4 When those two insti-
tutions collapsed in 2008, the US government bailed them out, with
taxpayers eventually on the hook for an estimated $380 billion. The five
questions illustrate why the model underestimated the risk to the US gov-
ernment of providing guarantees to the GSEs. More generally, they show how an experienced manager, with little technical expertise but armed with the right questions, could have known why the model would fail.
In 2002, Fannie Mae released a research paper analyzing the chances it
and fellow GSE Freddie Mac would go bankrupt.5 The lead author of the
report was Columbia University economics professor Joseph Stiglitz, the
previous year’s recipient of the Nobel Prize in Economic Sciences, a former
chief economist of the World Bank, and former chair of President Clinton’s
Council of Economic Advisors. His two coauthors were up-and-coming
economists, Peter Orszag, who later became the director of the Office of
Management and Budget during the Obama Administration and is now
vice chair of Global Banking at Citigroup, and his brother Jonathan Orszag,
a senior fellow in economic policy at the Center for American Progress.
Their model indicated the chances of Fannie and Freddie defaulting
were “extremely small”, substantially less than one in 500,000. Given that
low probability of default, the authors estimated the expected cost to the
government of the GSEs going bankrupt at $2 million. They concluded,
“on the basis of historical experience, the risk to the government from a
potential default on GSE debt is effectively zero.”6
Six and a half years after Fannie Mae released the report, the Federal
Housing Finance Agency (FHFA) placed Fannie Mae and Freddie Mac
into conservatorship. Through the end of 2010, the US government
had paid $154 billion to cover losses at the two GSEs. According to the
Congressional Budget Office, losses through 2019 are estimated to total
$380 billion.7 The model calculated the expected cost of bankruptcy of
the GSEs at $2 million when the actual cost is likely to be $380 billion.
How is such a huge error possible? Our five principles demonstrate prob-
lems with the model that any industry leader could have seen and provides
clear direction on how they could have improved the model.8
Question 1:What are the key variables in the model?
The GSE model seeks to determine the expected cost of bankruptcy. In
the model there are four variables: interest rates, historical credit loss rates,
the probability of GSE default, and the cost of bankruptcy. After discuss-
ing the key variables with quantitative experts, a leader with experience in
the industry would have noticed that the model omits a key change in the
industry in the 2000s: the increasing risk of mortgage loan portfolios. The
historical credit loss rates were based on loans made when the industry had
158 J.R. GERLACH AND J.B. OLDROYD
shock that lasted for ten years. As long the GSEs held the required amount
of capital, “the probability that the GSEs become insolvent must be less
than the probability that a shock occurs as severe (or more severe) as the
one embodied in the stress test”.9 The stress tests in their model were
designed to replicate conditions whose chance of occurring was less than
one in 500,000. Thus, if the government guaranteed $1 trillion worth
of GSE debt, and the GSEs had sufficient capital to survive any condi-
tions except an economic catastrophe so rare its probability of occurring
was 0.000002, the expected cost of bankruptcy was $2 million. In other
words, the model assumes the GSEs had enough capital and that the prob-
ability of their bankruptcy was almost zero.
The GSE Model makes three main assumptions to estimate the cost of
GSE bankruptcy:
of failure. In the case of the GSEs, the decision to take on more risk by
lowering lending standards simply cannot be accounted for in the three
basic assumptions.
Question 4: What is the main conclusion of the model?
The model’s conclusion was that the GSEs would require government
assistance only in a severe housing market downturn, which would likely
occur only in the presence of a substantial economic shock.11 They mod-
eled this as a one-in-500,000 chance of happening.12 For an experienced
manager assessing the validity of this model, that should be an obvious
red flag. Is it possible that any business, no matter how successful or well-
run, could survive 3,000,000 possible future scenarios? From 3,000,000
simulations of future economic conditions, there is not one that would
cause even an Apple or Samsung or Coca-Cola to go bankrupt? In 2001,
the year before the report was released, the GSEs had $1.44 trillion in
assets, $1.34 trillion in debt, and $45 billion in core capital, giving them
a debt-to-equity ratio of about 30. Does it really make sense that a model
produces 3,000,000 simulations of future economic conditions in which a
highly leveraged financial company never goes bankrupt?
Even if one accepted the assertion that a financial institution with over
$1 trillion in assets and a 30-1 leverage ratio had virtually no interest rate
or credit risk, there are other ways to lose money. Most notably, one would
expect substantial operational risk on a portfolio so large. An industry expert
would surely have known that the GSEs faced operational risk, and that in a
bankruptcy scenario those losses could be large. After identifying this omis-
sion, a leader could ask the quants to include this source of risk in the model.
Question 5: How long should the model work?
To estimate the probability of an economic shock severe enough to
cause the GSEs to default on their debt, the GSE model used data on
interest rates and credit loss rates.13 The authors took historical inter-
est rate and credit loss rate data and used them to generate hypothetical
future conditions. The interest rate data start in 1958 and the GSE model
used a “bootstrap” method to simulate future scenarios. Although the
exact method is complicated, involving a vector autoregressive moving
average model, the basic approach is to simulate the future conditions
by randomly selecting actual interest rates from the historical data. That
means the simulated future interest rates reflect the actual interest rates
over the four decades before the research paper was published.
The annual credit loss data were only available starting in 1983 so the
authors use a model of credit loss rates based on interest rate changes.
CAVEAT NUMERUS: HOW BUSINESS LEADERS CAN MAKE... 161
Specifically, credit loss rates depend on the previous credit loss rate and
the previous interest rate level—and this gets to the crux of why the model
failed so spectacularly.
The model failed to predict the likelihood or cost of the bankruptcy of
the GSEs, and the main cause of that model failure was that the model
developers did not recognize profound changes in the industry that would
drastically increase credit losses within a few years of the publication of
their paper. An industry insider, on the other hand, would have known
the GSEs, like many other financial institutions, were taking on substan-
tially more risk in their loan portfolios. This means that the historical
period upon which the model is based is not representative of the period
in which the model was used. As the industry changed the model needed
to account for the change.
In fact, by 2002 the process of lowering the GSEs’ lending standards was
well under way. From 1990 to 2000, the GSEs’ total assets increased from
$174 billion to $1.1 trillion, which included sharp increases in purchases of
private-label mortgage securities, among them subprime and Alt-A securities.
The end result of the deterioration of lending standards is that by 2007
credit losses were increasing dramatically. At Fannie Mae, the delinquency
rate for single-family loans14 rose from 0.65% in 2006 to 0.98% in 2007, and
continued rising to 2.42% in 2008, 5.38% in 2009, and 4.48% in 2010.
The graph below shows the national delinquency rate for all banks, not just
the GSEs. The rate began rising from under 2% in 2006, eventually peaking
at 11%. The graph shows the rate on ten-year US government bonds during
the same period. Note that interest rates generally fall while the delinquency
rate rises after 2006, something the GSE model does not allow because it
assumes the only cause of higher credit losses is higher interest rates.
Credit losses account for most of the losses suffered by the GSEs. For
Fannie Mae, credit-related expenses rose from $94 million in 2000 to
$5 billion in 2005 and to $73.5 billion in 2009. The only way the GSE
model could account for rising credit losses is through rising interest
rates so there is no way that model could accurately forecast the actual
increases in credit losses.
The key point is that a leader with experience in the industry, asking
key questions to a quant who understood the modeling techniques, could
have detected the flaws in the model. For the non-quant leader, the tech-
niques used in the paper such as “vector autoregressive moving average”
might be intimidating. The five questions, however, can guide the leader
to understand the model and its limitations better (Fig. 1).
162 J.R. GERLACH AND J.B. OLDROYD
Fig. 1 This figure shows the delinquency rate on single-family mortgages and
the 10-year Treasury constant maturity rate. Note that the delinquency rate
increased sharply during the financial crisis of 2008 even as the Treasury rate con-
tinued to decrease, a pattern not consistent with the assumptions of the GRE
model
first quarter of 2012, the notional size of the SCP increased from $51
billion to $157 billion. The portfolio generated losses of $100 million in
January 2012, $69 million in February, and $550 million in March. On
March 23, 2012, the head of the CIO instructed the traders to stop trad-
ing. By the end of 2012, losses totaled at least $6.2 billion.
There are many lessons from the London Whale and they do not reflect
well on the bank’s leadership or its regulators. This section focuses on one of
the conclusions from the March 15, 2013 US Senate report on the London
Whale trades: “Risk evaluation models were manipulated to downplay risk.”15
To estimate the value of its derivatives positions, the CIO originally
used a straightforward method of marking-to-market using the midpoint
of the bid-ask spread. In the first quarter of 2012, the CIO started to
assign more favorable prices within the bid-ask spread. A junior trader in
CIO prepared a spreadsheet showing that reported losses of $161 million
through March 16, 2012 would be $593 million if the securities were
valued using the midpoint method. JPM’s Investment Bank, which had
a portfolio with some of the same derivatives held by the SCP, continued
to use the midpoint method. That meant different units of JPM assigned
different values to the exact same positions.
On May 10, 2012, the bank’s controller issued a memo assessing the
derivatives valuation. The controller concluded the CIO properly reported
a loss of $719 million, instead of the $1.2 billion implied by the midpoint
method. Thus, the internal review accepted a valuation method almost
certainly designed to hide losses. In July 2012, the bank restated its earn-
ings to report additional SCP losses of $660 million.
The CIO had several models for evaluating risk in its positions, includ-
ing a value-at-risk (VaR) model and a comprehensive-risk-measure (CRM)
model. The 95% VaR model estimated expected tail loss over a one-day
horizon at the 95% confidence level. In other words, the bank would
expect to exceed the losses predicted by the VaR on five days out of 100.
The 10-Q VaR model estimated potential daily losses by looking at the
previous 264 trading days and taking the average loss of the worst 33 days.
On January 16, 2012, the CIO exceeded its VaR limit and the exceed-
ance continued for four days. As noted above, the most straightforward
method to reduce risk in a portfolio is to close some risky positions. Rather
than reducing risk in the portfolio, the CIO met its VaR limit by changing
the VaR model so that the model generated lower estimates of risk. On
January 31, the CIO implemented a new model that reduced its VaR risk
from $132 million to $66 million.
164 J.R. GERLACH AND J.B. OLDROYD
The CRM model estimated losses over a one-year period during mar-
ket stress. It aimed to show how much a portfolio can lose in a worst-
case scenario over one year at the 99% confidence level. In January 2012,
the CIO’s CRM loss estimate increased from $1.97 billion on January 4
to $2.34 billion on January 11 and to $3.15 billion on January 18. By
March, the CRM loss estimate was over $6 billion.
One CIO manager dismissed the CRM numbers, calling them “garbage”,
which reflected the view that the numbers were far too high. Note, though
that the $6 billion loss estimate generated by the CRM was very close to the
actual losses on the portfolio during 2012. The bank responded to the CRM
loss estimates by ignoring them until realized losses were already high.
Both the VaR and CRM models gave clear indications that the SCP was
a highly risky portfolio, and both models provided early warnings of the
losses that were to come in 2012. In the case of the VaR, the bank chose to
change the model to generate lower loss estimates so that it would remain
below JPM’s risk limits. And the bank chose to ignore the CRM model,
which turned out to be remarkably accurate in estimating actual losses.
This is a simple example of an important point: Sometimes models
work well. The models produced output that was potentially very valu-
able, but the CIO did not take advantage of the models. One of the main
lessons is to look carefully at changes to models, especially ones that result
in substantially smaller losses. When a model is changed, those changes
should be based on a priori theoretical reasons designed to improve its
quality, and not simply to change the output to a number that manage-
ment finds more palatable. The primary driver of the VaR model changes
was apparently to decrease substantially the estimated losses produced by
the model. Another key lesson is to pay attention to model results, espe-
cially when they produce high loss estimates that are not welcome to the
line of business. The bank ignored the CRM model output, which pre-
dicted losses similar to the realized losses on the SCP portfolio in 2012.
the paper was published. The change in lending standards was well-known
and someone with industry experience would have surely been aware of it.
In the London Whale example, the managers apparently pre-determined
the number they wanted from the VaR model and the quants produced that
number. Rather than improve the model by adding their business expertise,
management seems to have focused on getting the number they sought.
2. The future might not be the same as the past: The GSE model used
a historical relationship between interest rates and credit loss rates to simu-
late future credit loss rates. However, most of the data came from a historical
period in which lending standards were higher. When the GSEs and other
financial institutions began to take on riskier loans, the relationship between
interest rates and credit loss rates changed. Credit loss rates began to increase
in 2005—when interest rates and unemployment were relatively low, and
the economy was growing—and reached levels that were unprecedented in
the US in modern times. The GSE model assumed the past relationship
of interest rates and credit losses would continue into the future. With the
change in the lending standards, that assumption was not valid.
In the London Whale case study, the SCP had produced substantial
gains in the past. Even when their risk management models began to show
increasing levels of risk, the CIO ignored that information. They appar-
ently expected conditions to change so that the portfolio would increase
in value as it had in the past.
3. Balance your own judgment and the model’s conclusion: Veterans
of the banking industry have personally experienced numerous banking crises
over the last few decades.16 When a model’s conclusion is counter to their
intuition—for example, when a model indicates that a financial institution
with a trillion-dollar balance sheet and a debt-to-equity ratio of 30 has virtually
no chance of going bankrupt, even during a severe recession—leaders should
be skeptical. Knowledge of the industry leads to the obvious conclusion that
there are some conditions under which any business, much less highly lever-
aged financial institutions, would go bankrupt. Although leaders' judgment
may not always be right, it should lead to higher scrutiny of the model.
In the London Whale example, the models produced loss estimates that
were potentially very useful, but management did not act on them. In this
case, the models should have informed the leadership’s judgment so that
they fully appreciated the risk of the SCP portfolio.
4. Caveat Numerus—Beware of the Number. Models often pro-
duce a single number meant to summarize the conclusion. We strongly
recommend that leaders not focus on a single number. In the first case
166 J.R. GERLACH AND J.B. OLDROYD
study, the expected cost to the government of the GSEs defaulting on their
debt was $2 million according to the model. Unfortunately, models should
virtually never produce such a simple conclusion because the number is only
as good as the model’s assumptions, data, and technique. The $2 million
figure, for example, is valid only if lending standards do not worsen substan-
tially, and the relationship between interest rates and credit rates remains
unchanged, and future interest rates are similar to historical rates, and so on.
In the second case study, management wanted the number produced
by the model to be below a pre-determined risk limit. The bank changed
the model so that it would produce a number that would not require it to
reduce the risk in its portfolio by selling risky assets.
Rather than focusing on a single number produced by a model, a more
productive approach is for leaders to understand how a model works and
what its limitations are. We provided five questions that leaders can use to
guide their discussions with quantitative experts. These five questions can
help leaders better understand, question, and use models for decision mak-
ing. Model development and validation should not be viewed as a job best
left to quants as business expertise is necessary to develop and use models
most effectively. Our five questions will encourage quants to explain their
models in plain language that non-quants can understand, while helping
the line-of-business experts understand the models well enough to ensure
that they produce useful estimates.
Notes
1. David X. Li, “On Default Correlation: A Copula Function Approach”,
Journal of Fixed Income, March 2002.
2. For a critical assessment of Li’s model, see Salmon, Felix, “Recipe for Disaster:
The Formula that Killed Wall Street”, Wired, February 2009.
3. See, for example, “Comprehensive Capital Analysis and Review 2015
Summary Instructions and Guidance”, Board of Governors of the Federal
Reserve System, October 2014.
4. Congress established the Federal National Mortgage Association (Fannie
Mae) in 1938 to provide federal money to finance home mortgages and raise
the rate of home ownership. In 1968, Fannie Mae was converted to a private
company and divided into Fannie Mae and the Government National
Mortgage Association (Ginnie Mae). In 1970, Congress established the
Federal Home Loan Mortgage Corporation (Freddie Mac) to compete with
Fannie Mae. Although Fannie Mae and Freddie Mac were technically private
organizations, they were known as government-sponsored enterprises and
CAVEAT NUMERUS: HOW BUSINESS LEADERS CAN MAKE... 167
Dong (Tony) Yang
The views expressed in this document are the author’s and do not reflect his
employer’s opinion or recommendations.
What Is Model?
In the “Supervisory Guidance on Model Risk Management” issued by
the board of governors of the Federal Reserve System (Federal Reserve)
and the Office of the Comptroller of the Currency (OCC) in April,
2011—Federal Reserve Supervisory Bulletin 2011-7 and OCC Bulletin
2011–12 (“SR 11-7/OCC 2011-12”), respectively, a “model” is defined
as “a quantitative method, system, or approach that applies statistical, eco-
nomic, financial, or mathematical theories, techniques, and assumptions
to process input data into quantitative estimates. A model consists of three
components: an information input component, which delivers assump-
tions and data to the model; a processing component, which transforms
inputs into estimates; and a reporting component, which translates the
estimates into useful business information”.1
This helps, to a great extent, to clarify the definition of a “model”.
However, there are still questions wide open. For example, one might
wonder whether an Excel spreadsheet should be treated as a model or not.
In practice, different entities have applied varied answers to this question,
based on different perceptions regarding the concept of models, as well as
idiosyncratic conditions such as model risk appetite, model risk manage-
ment framework, policies and procedures, and available resources.
In addition to calculation tools such as spreadsheets, it can also be chal-
lenging to distinguish models from “processes”. One popular example is
the integration of capital stress testing2—some banks consider this as a
separate model and record it as such on the model inventory, while others
view it as a calculation process instead.
A key factor to understand model risk and implement effective MRM
practice is the recognition of the very diverse natures and characteristics
of different types of models. Some models are built on profound quanti-
tative theories and complex calculation processes (such as term structure
models, complex financial product valuation models, advanced statistical/
econometrical models, etc.). On the other hand, there are also a large
number of models that do not involve intensive quantitative approaches
and processes, but are highly dependent on qualitative inputs even though
the model outputs can still be considered as quantitative in nature. Those
qualitative inputs in the second class of models could be business assump-
tions (e.g. liquidity management and liquidity stress testing models, cer-
tain product pricing models), regulatory guidance, rules and requirements
(e.g. allowance for loan and lease losses(ALLL) models, regulatory capital
MODEL RISK MANAGEMENT UNDER THE CURRENT ENVIRONMENT 171
• coding errors;
• unsuitable implementation software or coding language;
• lack of proper integration and interfacing with other enterprise
governance, risk, and compliance (GRC) systems and platforms.
change management;
Challenges of MRM
As formalization and streamlining of the MRM framework have been an
evolving process in recent years, inevitably, there have been quite some
challenges that market practitioners have had to face.
Here are some of such challenges:
180 D.(T.) YANG
MRM Governance
Some of the key MRM governance components have been discussed in
Sect. “Typical MRM Organizational Structure” and “The Second LOD”
(Fig. 2), including organizational structure, policies and procedures, and
roles and responsibilities. Besides these components, the following are also
crucial in MRM governance.
Model Validation
Within the whole MRM framework, the model risk management group/
department is considered as the heavy lifter, in terms of performing their
full-time responsibilities to ensure that the MRM framework is well estab-
lished, properly executed, and regularly monitored. One of the MRM
group’s most important (and often the most onerous) duties is model
validation, which is the focus of this section.
Per SR 11-7 / OCC 2011-12, an effective validation framework should
include “three core elements:
For proprietary models (i.e. “in-house” models), this step is the bridge
between theoretical design and framework and the execution of model
development. The focus should be on model development process and
practice, especially when the model validation is the initial (usually “full-
scope”) validation before productionization of the model. For exam-
ple, the data component will be primarily referring to the model fitting
MODEL RISK MANAGEMENT UNDER THE CURRENT ENVIRONMENT 189
data, and the tests on data usually focus on the suitability (e.g. statistical
properties, applicability to model purpose and use) of the data used to
develop the model, to answer such questions as:
Model Implementation
Model implementation refers to the process to actually “build” the model.
This process can include the following items: model configuration; cod-
ing/programming and debugging; deployment and production of the
model; installation, customization, and configuration of vendor models;
and various testing activities on the functionality of the model.
Clearly the implementation of the model directly affects and determines
its functionality and suitability, and the defects in the model implementa-
tion process may lead to improper use of the model, even if the model was
designed well. Therefore, it is important to review and ensure that model
MODEL RISK MANAGEMENT UNDER THE CURRENT ENVIRONMENT 191
interest rate term structures, while it was used to price American swaptions
which require stochastic term structures, then this should be identified
and reported as a breach of the model limitations.
From a model validation perspective, it is also important to ensure that
the ongoing M&T is properly designed and executed, by verifying such
components as:
• Last but not least, MRM professionals should be fully aware of the
fact that ongoing M&T varies significantly across different types of
models, and one uniform set of procedures and templates is usually
not implementable for all models. For example, the model owner of
an ALLL model may be completely confused by the ongoing M&T
requirements for the model owner of a capital stress testing model
developed based on time series regression. The model validators
should be adequately flexible to design and execute different testing
procedures on ongoing M&T for various kinds of models.
MODEL RISK MANAGEMENT UNDER THE CURRENT ENVIRONMENT 193
Risk Rating
One of the key outcomes of model validation is the assessment of whether
the model is appropriately built for its intended purpose and use, which, in
other words, means that the model validation must provide a clear assess-
ment of the level and nature of the risks of using the model in production.
Such risk ratings need to include the rating for both the specific findings
and issues identified during the course of model validation, as well as the
assessment of the overall risk associated with the model as a whole.
The MRM policies and procedures should have a clear definition and
guidance of model risk assessment and rating, which most often are driven
by the nature, cause, and impact of the findings on model performance.
For the risk rating of findings/issues, the considerations may include
whether the issue pertains to the fundamental model theory, design, and
other key modeling components such as assumptions, inputs, and process-
ing; whether the issue is the result of significant lack of model controls;
what is the impact of the issue on the overall model performance; and
so on. The risk rating, once determined, should also be the basis of the
resolution plan, including the necessity and time limit for model owner’s
response and remediation.
The overall model risk assessment is the result of aggregating the issue
risks and evaluating the risk of the model as a whole. This provides the
model owner and the management with a comprehensive opinion and
conclusion regarding the risks to use the model for its designated pur-
poses. Such assessment may result in different levels of overall model risk
ratings, such as the following:
Effective Challenge
Another key output and core purpose of model validations is to provide
effective challenges. “Effective challenge” is a broad concept. As defined in
SR 11-7 / OCC 2011-12, it is the “critical analysis by objective, informed
parties who can identify model limitations and assumptions and produce
appropriate changes”.11
Effective challenges are “challenges”, meaning that the activities and
results should be aimed to identify deficiencies and risks in the model, and
to question the appropriateness to use the model for its designated pur-
poses, rather than supporting the current model (or “window-dressing”).
And these challenges need to be “effective”, meaning that any deficiencies
and issues identified during the challenging process should aim to reflect,
in a meaningful way, the true risks associated with the model develop-
ment, model implementation, and model use; it also means that the par-
ties who provided the effective challenges need to have adequate influence
and authority to enforce serious consideration of, and necessary response
to, the challenges raised.
Effective challenges should be the principle in MRM activities con-
ducted by all the three LOD within the MRM frame work—for exam-
ple, the business user should provide adequate effective challenges to the
model developer, as the effective challenges within the first LOD. For
model validators, this certainly is also the key principle to keep in mind.
Effective challenges may be raised on all the key model components,
such as the following:
CONCLUSION
As discussed in this chapter, revolutionarily improved model risk man-
agement practice has been a direct outcome from the lessons learned by
the market practitioners in the recent financial crisis. This mission is com-
plex and long-term in nature, with an onerous process involved to accom-
plish. As mentioned, the effort to build, enhance, and refine the MRM
practice is still underway, with new challenges arising almost on a daily
basis in this ever-evolving economic, business and market environment.
At the end of the day, a sound MRM is expected to play a key role in the
overall risk management, to help prevent similar crunches from occurring
again in the financial services industry.
NOTES
1. SR 11-7/OCC 2011-12, Page 3, Para. 1.
2. The common practice of banks’ capital stress test involves numerous com-
ponent models to cover different stress testing factors (such as credit risk
of different products/portfolios, operation risk, market risk, as well as the
pre-provision net revenue (PPNR) forecasts), and the results from these
component models are then “integrated” to calculate the final stress test
results under different stressed scenarios. Banks often utilize their asset
liability management (ALM) systems to perform such integration, but the
approaches, methodologies and procedures largely vary.
3. SR 11-7/OCC 2011-12, Page 3, Para. 3.
4. SR 11-7/OCC 2011-12, Page 18, Para. 5.
5. CCAR banks refers to the bank holding companies that are subject to the
comprehensive capital analysis and review (CCAR), which is an annual
MODEL RISK MANAGEMENT UNDER THE CURRENT ENVIRONMENT 197
review conducted by the Federal Reserve, and are usually the banks with
greater than $50 billion in consolidated assets; the DFAST banks refers to
the bank holding companies, excluding the CCAR banks, that are subject
to the Dodd-Frank Act stress testing (DFAST), which usually have $10~50
billion in consolidated assets.
6. SR 11-7/OCC 2011-12, Page 19, Para. 2
7. Sarbanes–Oxley Act of 2002, aka the “Public Company Accounting
Reform and Investor Protection Act”.
8. In some entities, the MRM group may also be required to develop, main-
tain, and monitor the inventory of other (“non-model”) tools, such as
user-defined tools (“UDT”) or end-user computing (EUC) tools. In such
cases, there usually should be a separate risk management framework and
approaches on these non-model tools.
9. SR 11-7/OCC 2011-12, Page 11, Para. 1.
10. Theoretically backtesting and out-of-sample testing have different defini-
tions, although they serve similar purposes (testing the model fitness based
on observed data). However, in the industry, out-of-sample tests were
often referred to as “backtests” due to the different understanding and
interpretation of these terms.
11. SR 11-7/OCC 2011-12, Page 4, Para. 4.
CCAR and Stress Testing
Region and Sector Effects in Stress Testing
of Commercial Loan Portfolio
Steven H. Zhu
Introduction
The estimation of future loan losses is not only important for financial
institutions to effectively control the credit risk of a commercial loan port-
folio, but also an essential component in the capital plan submitted for
regulatory approval in the annual Comprehensive Capital Analysis Review
(CCAR) and Dodd-Frank Act Stress Test (DFAST) stress testing.1 Under
the regulatory guidelines, banks must demonstrate in their stress testing
methodology that the risk characteristics of loan portfolio are properly
captured at a granular risk-sensitive level to adequately reflect the region
and sector effects2 when incorporating the impact of macroeconomic sce-
narios. This chapter describes a methodology of estimating the point-in-
time (PIT) default probability that can vary according to macroeconomic
scenarios and at the same time capture the credit risk at the region and
industry sector levels using external rating agency data. The key to this
modeling approach is the maximum likelihood estimation of credit index
and correlation parameters calibrated to the historical default and rating
The view expressed in this paper represents the personal opinion of author and
not those of his current and previous employers.
migration data. The credit index3 in the model represents the “hidden”
risk factor underlying the default while the correlation parameter can be
attributed to the default clustering, and the estimation of credit index and
correlation provides a compelling explanation for the original design of
risk-weight function used in the calculation of credit risk capital charge
in the Pillar 1 of Basel II capital rule. The methodology can be used as a
benchmark model to validate the bank’s internal model, because it can be
built in compliance with the bank’s internal risk rating system and it can
also be implemented with external data from the rating agency such as
S&P and Moody’s, thus making it practical for many institutions with only
limited internal data on default and migration history.
How the probability of default and rating migration responds to
changes in the macroeconomic environment are important for banks to
assess the adequacy of credit risk reserve and capital adequacy under nor-
mal and stressed market conditions. Credit reserve and capital are pri-
mary tools for banks to manage and control the credit risk in their loan
portfolios,
–– credit reserves are designed to cover the expected losses which are
predicted to be experienced in the bank’s loan portfolio over the
normal economic cycle;
–– credit capital is designed to cover the unexpected loss which
only occurs under a downturn economy or in extreme market
conditions.
Banks are required under Basel II rules to develop through-the-cycle
(TTC) probability of default (PD) model for estimating credit reserve and
capital requirement. Banks have used a wide range of methods to estimate
credit losses, depending on the type and size of portfolios and data avail-
ability. These methods can be based on either accounting loss approach (i.e.
charge-off and recovery) or economic loss approach (i.e. expected losses).
Under the expected loss approach, the losses are estimated as a function of
three components: probability of default (PD), loss given default (LGD),
and exposure of default (EAD). In general, banks can apply econometric
models to estimate the losses under a given scenario, where the estimated
PDs are independent variables regressed against the macroeconomic fac-
tors and portfolio or loan characteristics. However, econometric models
are often based on data availability which can be problematic in practice
to PD estimation on a low-default and investment-grade portfolio such as
large commercial and industrial (C&I) loans.4 Hence, banks sought out
REGION AND SECTOR EFFECTS IN STRESS TESTING OF COMMERCIAL LOAN... 203
The calibration of the credit index model is based on the S&P historical
default and transition data covering the 30-year period from 1981 to 2011.
After the construction of the credit index, we perform the statistical regres-
sion analysis to establish the relationship and model the credit index in rela-
tion to the 26 macroeconomic variables provided in 2013 CCAR so that
we can project the future values of credit indices in each of three regions
based on their respective macroeconomic drivers (such as GDP, CPI, and/
or Unemployment), and the projected values of credit index are properly
matched in time step to produce the stressed PDs and transition matrices in
each of two years in the planning horizon under the CCAR macroeconomic
scenarios. Finally, we provide the results of backtest analysis to compare the
modeled PDs with the default experiences over same 30-year historical period.
The credit index model is an adoption of the Credit Metric approach
based on conditional probability. Compared to the unconditional
approach, the conditional approach captures the credit cycle of economy
204 S.H. ZHU
modeled as the systematic risk factor (i.e. credit index) and the correla-
tion of individual obligor’s default with the systematic factor. The higher
correlation implies higher levels of probability of default and more down-
grade transition from the high ratings to the low ratings. The importance
of a conditional approach in modeling the default and transition matrix is
highlighted in the 1999 BCBS publication [1] and recent FRB guideline
[4] with an emphasis on its ability to improve the accuracy of the credit
risk models. Hence, the use of the Credit Metrics approach to model the
credit index as described in this chapter is consistent in a general effort to
better aligning the firm’s CCAR stress testing approach with the firm’s
overall credit stress test framework.
Credit Index
We apply the well-established Credit Metrics approach5 that the rating tran-
sition can be modeled using a continuous latent factor X6 and a set of the
thresholds representing the states of credit quality. For each initial rating
G at the beginning of period, X is partitioned into a set of thresholds or
disjoint bins so that the probability of X falling within the bin [ xgG , xgG+1 ]
equals to the corresponding historical average G-to-g transition probability:
( ) ( )
p ( G,g ) = Φ xgG+1 − Φ xgG .
(1)
Each of initial rating has seven transition probabilities (i.e. the columns
in the transition matrix), and the threshold value is calculated as
xgG = Φ −1 ∑ p ( G, r ) (2)
r<g
Xt = ρ ⋅ Z t + 1 − ρ ⋅ ξt (3)
Φ −1 ( PD1YR ) − ρ Z
PD ( g|Z ) = Φ (4)
1− ρ
206 S.H. ZHU
Lower Higher
xG − ρ Zt xG − ρ Zt
p ( G,|g,|Z t ) = Φ g +1 − Φ g . (5)
1− ρ 1− ρ
This is the model value at the quarter t of the g-rated PD and G-to-g
transition in one year’s time from the quarter t.
Next, we apply two alternative MLE-based methods to calibrate the
quarterly value of credit index from historical default and migration prob-
ability during 1981–2011.
where PDAvg = the historical average probability of default per rating. The
correlation parameter ρ plays an important role in the Basel’s credit risk-
weight function model under Basel II and Basel III [2] as it controls the
proportion of the systematic risk factor Zt affecting the set of loans in the
economy.8
To obtain the industry and region-specific credit index, we repeat the
iteration steps described above and calibrate the index in each historical
quarter based on the cohort pool of defaults (and migration) observed
only within the region or industry to obtain the credit index for a major
industry sector such as financial institutions or a region such as the USA,
Europe, and Asia (developing nations), respectively.
208
#Qtr TIME HORIZON 1 -YEAR TRANSITION MATRIX (BY QUARTER) CREDIT INDEX
S.H. ZHU
Fig. 2 Quarterly iteration of estimating credit index Z from default and transition matrix
REGION AND SECTOR EFFECTS IN STRESS TESTING OF COMMERCIAL LOAN... 209
Under the Credit Metrics approach, the credit index is a measure that
represents how much the transition matrix deviates (in terms of upgrades
and downgrades) from the long-term transition matrix. It can be shown
that the credit index, when calibrated only to the default column of transi-
tion matrix, will correlate better with the historical default rates (i.e. 73%
with BB-rated and 92% with B-rated) and it also coincides with the three
economic downturns in the 30-year history from 1981 to 2012 (Fig. 3).
The credit index admits a very intuitive interpretation as the value of Zt
measures the “credit cycle” in the following sense:
–– The negative values of Zt (<0) indicate the bad year ahead with a
higher than average default rate and a lower than average ratio of
upgrades to downgrades.
–– The positive values of Zt(>0) indicate the good year ahead with a
lower than average default rate and migration to the lower ratings.
Given the value of credit index, we can then apply the formula (5) to
construct a transition matrix. As an example, let us look at two transi-
tion matrices below corresponding to two particular values of the credit
index Z = +1.5 and −1.5, with a fixed Rho-factor (ρ = 10%). We observe
the large changes in the values of cells above diagonals representing the
8 20
7 Credit Index BB-rated B-rated 18
6 16
5 14
4
12
3
10
2
8
1
0 6
-1 4
-2 2
-3 0
1-Jul-97
1-Jul-06
1-Jul-82
1-Jul-88
1-Jul-91
1-Jul-85
1-Jul-94
1-Jul-00
1-Jul-09
1-Jul-03
1-Jan-99
1-Jan-81
1-Jan-84
1-Jan-87
1-Jan-90
1-Jan-93
1-Jan-96
1-Jan-02
1-Jan-05
1-Jan-08
1-Jan-11
Default Correlation
The correlation (ρ) is used to capture how obligor-specific risk changes
in relation to the systematic risk of the credit index as described in previ-
ous section. The credit index was estimated in the previous section by
assuming the correlation is known a priori. In this section, we describe
an approach designed to estimate ρ independently from the credit index,
by constructing a MLE function as the binomial distribution of default
occurrences.
Specifically, we model the default as the Bernoulli event.9 For a given
historical one-year period (indexed quarterly in time t), there are dt,s =
#defaults out of Nt,s = #obligors in a cohort (c). Conditional on Z = Zt, the
probability of default is given by
Φ −1 ( Pg ) − ρ Z t
pc ( g| Z t ;ρ ) = Φ
1− ρ
REGION AND SECTOR EFFECTS IN STRESS TESTING OF COMMERCIAL LOAN... 211
N t ,g Nt ,g − dt ,g
Bc ( Z t ;ρ ) =
d c ( t )
p g|Z t ,g 1 − pc ( g|Z t )
d
(9)
t ,g
Fig. 4 Rho (ρ) and MLE curve as function of (ρ) for selected industry sector
2.5 6
2
4
1.5
1
2
GDP Growth
Credit Index
0.5
0 0
-0.5
-2
-1
-1.5 Credit Index
-4
-2 US GDP(YoY)
-2.5 -6
Q1 1989
Q1 1990
Q1 1992
Q1 1993
Q1 1994
Q1 1995
Q1 1997
Q1 1998
Q1 2000
Q1 2002
Q1 2003
Q1 2005
Q1 2007
Q1 2008
Q1 2010
Q1 1988
Q1 1991
Q1 1996
Q1 1999
Q1 2001
Q1 2004
Q1 2006
Q1 2009
Q1 2011
Fig. 5 Lead-Lag Relationship between credit index and GDP growth
-5
-4
-3
-2
-1
1998Q2 2001Q1
1999Q1 2001Q4
1999Q4 2002Q3
2000Q3 2003Q2
2001Q2 2004Q1
Base
2004Q4
Averse
2002Q1
2005Q3
2002Q4
2006Q2
2003Q3
2007Q1
2004Q2 Severely Adverse
Cindex
2006Q3 2010Q1
2007Q2 2010Q4
Cindex.Fied
2008Q1 2011Q3
US GDP YoY growth (2013 CCAR)
2008Q4 2012Q2
2009Q3 2013Q1
2010Q2 2013Q4
Fig. 6 2013 CCAR scenarios for USA and Europe GDP growth
2011Q1 2014Q3
2011Q4 2015Q2
0
1
2
-2
-1
0.5
1.5
2.5
0
2
4
6
8
-2.5
-1.5
-0.5
-8
-6
-4
-2
-10
216 S.H. ZHU
where EUGDPgrowth is the EU GDP YoY growth rate and the graph
below shows the quarterly projection of credit indices with respect to
CCAR supervisory scenarios based on the regression equation for each
credit index. Due to the data limitation and lack of variable selection, the
regression performance12 of region-specific credit index can achieve an
R-square at 80% in Europe and only about 65% for Asia and developing
nations (Fig. 7).
While the historical pattern of the credit index depicts the past credit
conditions in the history of the economic cycle, the levels of stress in
CCAR scenarios are adequately captured and reflected in the projection of
the credit index as translated by the statistical regression model.
0
1
3
0
1
2
3
-4
-3
-2
-5
-1
-5
-3
-2
-1
-4
1998Q1 1998Q1
1998Q4 1998Q4
1999Q3 1999Q3
2000Q2 2000Q2
2001Q1 2001Q1
2001Q4 2001Q4
2002Q3 2002Q3
2003Q2 2003Q2
2004Q1 2004Q1
2004Q4 2004Q4
2005Q3 2005Q3
2006Q2 2006Q2
2007Q1 2007Q1
2007Q4 2007Q4
EU-Adv
NA-Adv
2008Q3 2008Q3
EU-Base
EU-SAdv
NA-Base
NA-SAdv
2009Q2 2009Q2
2010Q1 2010Q1
Credit Index -Europe
2010Q4 2010Q4
2011Q3 2011Q3
2012Q2 2012Q2
2013Q1 2013Q1
Credit Index -North America
2013Q4 2013Q4
REGION AND SECTOR EFFECTS IN STRESS TESTING OF COMMERCIAL LOAN...
2014Q3 2014Q3
2015Q2 2015Q2
217
218 S.H. ZHU
−1 _ _
Φ PD r − ρr Z r
pD ( g|Z r ) = Φ . (13)
1 − ρr
−1 _ _
Φ PD s − ρ s Z s
pD ( g|Z s ) = Φ (14)
1 − ρs
where Zr = the credit index for region, Zs = the credit index for sector and
ρr = correlation estimated for each region as described in Sect. 5 (Fig. 8).
The yearly transition rate represents the probability of rating migration
from initial G-rating at the beginning of the year to g-rating at end of the
same year. Since the time step of credit index is shifted forward by two
quarters (in order to match the time step of GDP and CPI time series), we
need to calculate the stress one-year transition matrix based on the stress
value of the credit index at the third quarter of each planning year in first
year (2013) and second year (2014).
The following Table 3 exhibits the yearly stress transition matrix under
2013 CCAR as a severely adverse scenario for 2 regions (USA and Europe)
as well as for selected industry sectors (such as energy, financial institu-
tions, and healthcare).
For the planning horizon over two consecutive years under CCAR stress testing,
banks can calculate the two-year transition matrix in order to validate the losses
projected over the planning horizon and the two-year transition matrix can be
computed as the product13 of two one-period transition matrices:
Energy
CAR 2013: Severely Adverse
80%
Financial
Inst itut ion
60% Healthcare
40% Insurance
20% Materials
Real Estate
0%
BBB BB B CCC
Fig. 8 Stress PDs by region and sector across the rating grades
220 S.H. ZHU
Table 3 Stress transition matrix by region projected for Year 1 and Year 2
Rating AAA AA A (%) BBB (%) BB (%) B (%) CCC D (%)
(%) (%) (%)
5
1990-1991
CCAR 1Y Stress PDs IG Non-Fin Downturn
4 2000-2001
Downturn
3 2008-2009
Credit Crisis
CCAR Stress
2
North America
CCAR Stress
1 Europe
CCAR Stress
Asia Pacif ic
0
75
1990-1991
CCAR 1Y Stress PDs NIG Non-Fin Downturn
60
2000-2001
Downturn
45 2008-2009
Credit Crisis
CCAR Stress
30
North America
CCAR Stress
15 Europe
CCAR Stress
0 Asia Pacif ic
Fig. 9 Historical downturn PDs compare with CCAR one year stress PDs
REGION AND SECTOR EFFECTS IN STRESS TESTING OF COMMERCIAL LOAN... 223
7
CCAR 2Y Stress PDs IG Non-Fin 1990-1991
6 Downturn
2000-2001
5 Downturn
2008-2009
4 Credit Crisis
3 CCAR Stress
North America
2 CCAR Stress
Europe
1
CCAR Stress
Asia Pacific
0
90
CCAR 2Y Stress PDs NIG Non-Fin 1990-1991
Downturn
75
2000-2001
Downturn
60
2008-2009
Credit Crisis
45
CCAR Stress
30 North America
CCAR Stress
15 Europe
CCAR Stress
0 Asia Pacific
Fig. 10 Historical downturn PDs compare with CCAR two year stress PDs
224 S.H. ZHU
where PDr(g, s) = rated PD mapped to the region (g) and industry sec-
tor (s). For a given stress scenario, the loan loss is calculated in both the
first year and second year by applying the stress PDr(g, s) to aggregated
exposure per rating of all loans in the portfolio segmented according to
the region and industry sector.
For illustration purposes, we consider the following example of loan
loss calculation over a two-year planning horizon for a portfolio currently
measured at $25 billion (Fig. 11):
The loan loss calculation during the first year in above matrix shows the
exposure (EAD) has changed as a result of rating migration at the end of
the first year, which resulted in a default of 2,309 mm and then at end of
the second year equal to cumulative default of 907 mm. Assuming a con-
stant LGD = 50%, we obtain a loss rate = 6.4% calculated by 50% × (2,309
+ 907)/25,000 in this example, which is on par to 2012 CCAR median
of loan loss rates between Fed estimates and the bank’s own estimates, as
reported below (Fig. 12):
The bank’s own estimates (red) showed a greater range of variation rela-
tive to the Fed estimates (blue). BHC’s estimates (red) were uniformly
lower than the Fed estimates (blue). In particular, we noted that the Fed’s
projected loss rate of 49.8% for GS was being cut off and not fully displayed
REGION AND SECTOR EFFECTS IN STRESS TESTING OF COMMERCIAL LOAN... 225
Fig. 11 Loan loss calculations for first year and second year
226 S.H. ZHU
BHC Estimate
10%
8% Median=6.5%
6%
4%
2%
0%
Ally
Amek
BAC
BONY
BBT
CapOne
Citi
FTB
GS
JPM
KeyCorp
MS
PNC
Regions
SST
SunTrust
USB
WFC
Fig. 12 2012 CCAR loan loss across 18 banks
Concluding Remarks
The implementation of the Basel II A-IRB method requires the estima-
tions of probability of default (PD) and rating migration under hypo-
thetical or historically observed stress scenarios. Typically, the bank can
first perform the forecast of selected macroeconomic variables under the
prescribed scenarios and then estimates the corresponding stressed PD
and migration rates. These stressed parameters are in turn used in estimat-
ing the credit loss and capital requirement within the capital adequacy
assessment framework. In this chapter, we have demonstrated a practical
methodology to incorporate the effect of region and industry segmen-
tation in the estimation of the stressed PD and rating migration under
the economic shocks such as the macroeconomic scenarios prescribed in
REGION AND SECTOR EFFECTS IN STRESS TESTING OF COMMERCIAL LOAN... 227
CCAR stress testing. The main advantage of this approach is the ability
to incorporate the future view of macroeconomic conditions on the credit
index, such as the scenarios prescribed in the Fed annual CCAR stress test.
By modeling the effect of the credit index on the probability of default
and migration, we can synthesize the future credit conditions under vari-
ous macroeconomic scenarios and perform the stress testing to assess the
sensitivity of the wholesale loan portfolios with respect to specific regions
and industry sectors under the macroeconomic scenarios.
One of main objectives in regulatory stress testing is to ensure that
financial institutions have sufficient capital to withstand future economic
shocks. The economic shocks are designed and prescribed by regulators
in the form of macroeconomic scenarios on the selected set of key eco-
nomic variables such as GDP, unemployment, and housing prices. The
financial institutions are required to conduct the stress testing across all
lines of businesses covering credit risk, market risk, and operational risk;
and to submit their capital plans for regulatory approval. The capital plan
must include estimates of projected revenues, expenses, losses, reserves,
and the proforma capital levels14 over the two-year planning horizon
under expected conditions and a range of stressed scenarios. Under the
guideline [4] set out by regulators, the financial institutions must overhaul
the estimation methodologies for losses, revenues, and expenses used in
the capital planning process15 and make the enhancement toward a more
dynamically driven process by explicitly incorporating the impact of mac-
roeconomic shocks.
Notes
1. Capital plan submitted for CCAR stress testing [3] includes the estimates
of projected revenues, expenses, losses, reserves, and proforma capital lev-
els over the planning horizon under a range of stress scenarios.
2. Regional segmentation is explicitly highlighted in CCAR, in which Fed
specified the scenario highlighting the possibility of Asia slowdown.
3. Similar to Moody’s credit cycle approach, the credit index represents the
systematic risk factor in the Merton model of default risk and it is well-
suited for estimation of low-default portfolio (LDP) such as commercial
and industry loans.
4. Observed defaults are rare historically for the bank’s C&I loan portfolio.
5. See, for instance, Greg Gupton, Chris Finger and Mickey Bhatia: Credit
Metrics – Technical Document, New York, Morgan Guaranty Trust Co.,
228 S.H. ZHU
1997; Belkin, Barry, and Lawrence R. Forest, Jr., “The Effect of Systematic
Credit Risk on Loan Portfolio Value at Risk and on Loan Pricing”, Credit
Metrics Monitor, First Quarter 1998; and Lawrence Forest, Barry Belkin
and Stephan Suchower: A one-parameter Representation of Credit Risk and
Transition Matrices, Credit Metrics Monitor, Third Quarter 1998.
6. It assumes that X has a standard normal distribution and Φ(x) is the cumu-
lative standard normal distribution.
7. Historical one-year transition matrix is calculated from the cohort pool of
default and rating migration, based on S&P CreditPro database.
8. The ρ-factor should be estimated in theory for the actual portfolio being
analyzed, if we have accumulated sufficient history of the loan defaults and
credit migration in the portfolio. Since very few defaults and migration
occurred in the case of wholesale portfolio, one can use the S&P historical
default/migration data as the proxy for the purpose of modeling stress
PDs and transitions.
9. See Paul Demey, Jean-Frédéric Jouanin, Céline Roget, and Thierry
Roncalli, “Maximum likelihood estimate of default correlations”, RISK
Magazine, November 2004.
10. http://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature
11. The length of quarterly data series can be evaluated to obtain the overall
satisfactory statistical goodness-of-fit in the regression analysis.
12. The lower R-square in the regression implies a loss of total variance
between the fitted data and original data series. In this case, a technique
known as the error-correction can be evaluated to achieve a higher
R-square and reduce the loss of variance.
13. This is valid only if we assume the rating transition follows a Markov chain.
Alternatively, one can model the two-year transition matrix directly using
the “credit index” extracted from the historical two-year transition matrix.
14. The proforma capital levels include any minimum regulatory capital ratios,
Tier-1 common ratio, and other additional capital measures deemed rele-
vant for the institution. The capital ratios are estimated using the RWA
projected over the planning horizon.
15. Traditionally, the capital planning is accounting driven with a static
projection.
References
1. Basel Committee on Banking Supervision, Credit risk modeling: current prac-
tices and applications, April 1999.
2. Basel Committee on Banking Supervision (Basel III), A Global regulatory
framework for more resilient banks and banking systems, December 2010.
REGION AND SECTOR EFFECTS IN STRESS TESTING OF COMMERCIAL LOAN... 229
Brian A. Todd, Douglas T. Gardner,
and Valeriu (Adi) Omer
Introduction
In the wake of the 2008 financial crisis, periodic capital stress tests were
implemented in order to ensure that banks carry enough capital to survive
severely adverse economic conditions. For each stress test, regulators pro-
vide economic scenarios and banks are required to forecast capital losses
under the scenarios. Each bank develops a forecast from their own unique
risk profile and the forecasted capital change provides a measure of the
capital cushion that is likely necessary to remain capitalized under severely
adverse economic conditions. The forecasted evolution of the bank’s capi-
tal under stress is used “to inform board decisions on capital adequacy and
actions, including capital distributions” [1].
In addition to forecasting the capital change itself, regulators have indi-
cated that, “The board should also receive information about uncertainties
The view expressed in this paper represents the personal opinions of authors and
not those of their current or previous employers.
Example Model
Throughout this chapter, we illustrate the general ideas using an exam-
ple forecasting model. The macroeconomic scenario is taken from the
2015 US Federal Reserve Bank Comprehensive Capital Analysis and
Review (CCAR) severely adverse scenario [3]. Revenue data ranging
from 2008Q1 to 2014Q4 was fabricated (Fig. 1a). We imagine that the
model developers identified two probable candidate macroeconomic
ESTIMATING THE IMPACT OF MODEL LIMITATIONS IN CAPITAL STRESS... 233
driver variables: corporate bond spreads (BBB) and the m arket volatil-
ity index (VIX) (Fig. 1b) but that the VIX was selected as the more
intuitive explanatory variable by lines of business. Hence, The Model
is a simple OLS model for quarterly revenue using the VIX as a sole
explanatory variable. The Alternative BBB-based model is a simple OLS
model for quarterly revenue using the BBB spread as a sole explanatory
variable.
Such a backtest where the backtested periods are omitted from the model
estimation is called an out-of-time backtest. The residual error distribution
observed in an out-of-time backtest provides an estimate of the forecast
uncertainty associated with residual model error. A disadvantage of out-of-
sample backtesting is that the sample of development data is reduced. Since
the range of the data used to develop capital stress testing models is often
already quite limited, it may be necessary to include some in-time backtests
where the entire data sample is used to estimate the model. When the resid-
ual model error estimates are heavily reliant on in-time backtests, one can
introduce a penalty that increases the model limitation buffer beyond con-
fidence intervals of the error distribution from the in-time backtests (see
Section “Shortcomings in the Model Development or Validation Process”
below). Backtests should be conducted over periods that are identical to
the forecast period in the stress test (e.g. nine quarters in CCAR stress test-
ing) using as many forecast jump-off points as are available.
Typically, forecasting models for stress testing make forecasts at quar-
terly or monthly frequency. However, the capital uncertainty is determined
by the forecast uncertainty accumulated over the entire nine-quarter fore-
cast period. Quarterly forecasting errors can be related to the cumulative
nine-quarter error only under certain restrictive assumptions regarding
the forecasting errors. For instance, for independent (i.e. uncorrelated),
identically distributed errors, the expected cumulative forecasting error
would be the quarterly forecasting error times the square root of the
number of forecasting periods. However, such assumptions are frequently
violated, and so it is better practice to directly estimate the cumulative
nine-quarter forecasting error. For models with a direct revenue impact,
such as income, expense, and loss models, the cumulative nine-quarter
forecasting error provides a direct connection to the capital uncertainty
with no restrictive assumptions regarding the distribution or correlations
of the residuals. Consequently, the distribution of cumulative nine-quarter
backtesting error is the most generally useful way to characterize the capi-
tal uncertainty impact of residual model error.
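As an illustration of this point (a sketch, not the authors' code), the cumulative nine-quarter errors can be computed directly from aligned actual and backtest-forecast series; for simplicity the sketch uses a single forecast series, whereas a full dynamic backtest would re-forecast from each jump-off point.

import numpy as np

def cumulative_9q_errors(actual, forecast, horizon=9):
    """Cumulative nine-quarter forecast errors over all available jump-off points."""
    errors = []
    for start in range(len(actual) - horizon + 1):
        actual_cum = actual[start:start + horizon].sum()
        forecast_cum = forecast[start:start + horizon].sum()
        errors.append(actual_cum - forecast_cum)
    return np.array(errors)

# The residual-error contribution to the buffer can then be read off the error
# distribution, e.g. a one-sided percentile of the cumulative revenue shortfall:
# buffer = np.percentile(-cumulative_9q_errors(actual, forecast), 95)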
Figure 2 shows backtests for the example model based on the
VIX. Figure 2a compares the model output (tan line) with the quarterly
revenue data (black line). Differences between model output (tan lines)
and the historical data (black lines) define the model residuals (green
lines). It is the errors in nine-quarter cumulative revenue (Fig. 2b) that are
most directly related to the uncertainty in capital change over the forecast
(see Section “Relating Individual Model Uncertainties to Capital Ratio
Fig. 2 Estimating the impact of residual error. (a) The model forecasts quarterly
revenue. (b) However, it is the error in the nine-quarter cumulative revenue that is
most directly related to capital uncertainty. (c) The distribution of nine-quarter
cumulative errors indicates the expected forecast uncertainty due to residual model
error.
Failure of Induction
Induction is the process of generalizing from a sample of observations to
a more general rule. In forecasting, an inductive leap is always needed in
order to apply past experience to a prediction of the future. Concretely,
statistical forecasting models are typically regressed on historical data and
then used to forecast the future. In order for the past to possess relevant
clues regarding the future, induction must reasonably hold.
While criticisms of induction sound vaguely philosophical, there are
a number of model limitations commonly cited in the banking industry
that point to failures of induction. Examples include “limitations in data”,
“changes in business strategy”, and “changes in economic regime”. In
each case, the limitation points to the fact that the past relationships are
not generally indicative of the future, either because the sample is too lim-
ited, or because relationships have shifted; in other words, these limitations
indicate that an inductive leap based on the existing evidence may fail.
The impact of a failure of induction is difficult to estimate precisely
because such a failure indicates that the available information is irrelevant
or misleading. If the past is no indication of the future, then what is?
A potential workaround is to augment the development data with data
The derivation below describes how individual model errors relate to the
errors in the capital ratio change. At the start of the forecast, the bank
has qualifying capital Capital0 and risk-weighted assets RWA0. The initial
common equity Tier 1 capital ratio, CET1_0, is then

\mathrm{CET1}_0 = \frac{\mathrm{Capital}_0}{\mathrm{RWA}_0}. \qquad (1)
During the 9Q forecast, the bank earns a 9Q cumulative net revenue, Net
Revenue, the value of the bank's qualifying securities changes over 9Q by
ΔValue, and the bank's risk-weighted assets change over 9Q by ΔRWA. At
the end of the 9Q forecast, the capital ratio is,
For the severely adverse scenario Net Revenue + ΔValue will typically
reduce capital. We drop Net Revenue + ΔValue from the right-hand term
for simplicity noting that this approximation will marginally increase the
apparent impact of ΔRWA. At any rate, changes in risk-weighted assets
are not typically a major contributor to the change in capital ratio, so this
simplification is not usually material. Making this simplification, we have,
Equation (5) has an intuitive form. The fractional capital ratio change
is the fractional change in capital minus the fractional change in risk
weighted assets. Net Revenue includes taxes which, in turn, depend on
other components of Net Revenue and ΔValue. For revenue and equity
losses experienced under a severely adverse scenario the taxes likely partially
offset the loss. In a Federal Reserve Bank of New York (FRBNY) Staff
Report, “Assessing Financial Stability: The Capital and Loss Assessment
under Stress Scenarios (CLASS) Model”, tax was approximated as 35% of
Pre − tax Net Revenue[6]. If this simplified tax treatment is acceptable, the
capital ratio change can be written,
Equation (7) is the basic equation for relating errors in model out-
put to errors in the capital ratio; model output is aggregated up to the
smallest set of models that can produce a component of Pre − tax Net
Revenue, ΔValue, ΔRWA, or any combination thereof. The errors in those
components can then be substituted into Eq. (7) to obtain one instance
of the capital ratio error for the model/set of models. Errors can arise
from residual model error, ambiguities in the model selection, shortcom-
ings in model risk management, and/or failures of induction. A distribu-
tion of δCET19Q produced in this manner represents the distribution of
errors in the capital ratio arising from a particular model/set of models.
The uncertainty in the capital ratio would then be given by analyzing the
distribution of capital ratio errors; in the simplest case, the uncertainty in
capital ratio could be obtained by evaluating the distribution of capital
ratio errors at a given confidence interval. The confidence interval chosen
should be consistent with a bank’s model risk appetite.
\text{Model Limitation Buffer} = \sqrt{\,\delta\mathrm{CET1}_{9Q}^{\,T}\; C\; \delta\mathrm{CET1}_{9Q}\,}. \qquad (8)

The difficult aspect of implementing Eq. (8) is obtaining the covariance
matrix, C . Correlations in the residual model errors can be estimated rela-
tively easily by analyzing the time-correlations between dynamic backtests
of different models, similar to those shown in Fig. 2b. Describing the
correlations associated with ambiguity in model selection would require
analyzing whether certain assumptions are common to multiple models,
possibly via sensitivity analysis. Further discussion of the covariance matrix
is beyond the scope of this chapter.
However, for models that are not overly complex, detailed estimation
of the covariance matrix may not be necessary. If the model errors were com-
pletely independent, then the square of the total uncertainty would be equal
to the sum of the squares of the individual uncertainties. However, as the
stress scenarios are, in fact, designed to cause simultaneous stress to many
different areas of the bank’s business, the assumption that model errors are
uncorrelated may not be prudent. Indeed, if a bank, for instance, experi-
enced heavy credit losses across its balance sheet during the 2008–2009
recession, then it is likely that model errors would be highly correlated.
In the limit of perfectly correlated model errors (i.e. all models perform
poorly at the same time), the total capital ratio uncertainty would be equal
to the sum of the individual model error uncertainties,
\text{Model Limitation Buffer} = \delta\mathrm{CET1}_{9Q,1} + \delta\mathrm{CET1}_{9Q,2} + \cdots + \delta\mathrm{CET1}_{9Q,N}. \qquad (9)
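As a simple numerical sketch (not from the chapter), the aggregation choices above can be compared directly, treating C as a correlation matrix of the individual model errors and using illustrative per-model uncertainties expressed in percentage points of CET1.

import numpy as np

delta_cet1 = np.array([0.30, 0.15, 0.25])   # per-model 9Q capital-ratio uncertainties (illustrative)

# Perfectly correlated models: simple sum, as in Eq. (9).
buffer_perfect_corr = delta_cet1.sum()

# Independent models: square root of the sum of squares.
buffer_independent = np.sqrt((delta_cet1 ** 2).sum())

# General case: quadratic form with an assumed correlation matrix C, Eq. (8)-style.
C = np.array([[1.0, 0.6, 0.4],
              [0.6, 1.0, 0.5],
              [0.4, 0.5, 1.0]])
buffer_general = np.sqrt(delta_cet1 @ C @ delta_cet1)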
References
1. Board of Governors of the Federal Reserve System. (December 2015). SR
15-19 Attachment, Federal Reserve Supervisory Assessment of Capital Planning
and Positions for Large and Noncomplex Firms.
Roy E. DeMeo
Definition 1.1 Let V(t) be the value of your portfolio on a date t in the future.
Let P be the real-world (NOT risk-neutral) measure, and let Δt > 0 be a fixed time
horizon. Let t = 0 denote today. Then
\mathrm{VaR}(p) = \min\big\{\, \ell \;\big|\; P\{ V(0) - V(\Delta t) \ge \ell \} \le 1 - p \,\big\}. \qquad (1)

The view expressed in this paper represents the personal opinion of the author and
not those of his current or previous employers.

R.E. DeMeo (*)
Wells Fargo & Co., 12404 Elkhorn Drive, Charlotte, NC 28278, USA
1. VaR tells you where the beginning of the tail is, and does not mea-
sure the overall tail impact on risk (expected shortfall, which will be
covered later, is a stronger attempt to do that).
2. VaR does not consider liquidity—a one-day VaR only considers
changes in mid-market price, and does not consider the inability to sell
at that price in extreme conditions. The concept of liquidity horizon,
the minimum time it takes to unwind the position and get some-
thing close to the market value, tries to address this issue. For this
reason we also will discuss ten-day VaR, which is part of Basel II
capital requirements and accounts for the fact that it might take ten
days, rather than one day, to unload a position.
3. Because of illiquidity and because of potential model risk on the
future realizations of portfolios, it is inaccurate, strictly speaking, to
think of VaR as saying that the bank will not lose “this much”
tomorrow with a 99% probability. It will be much worse if you try to
actually unload the position, and your models may be off.
4. VaR does not always behave well when aggregating portfolios,
meaning that the sum of VaRs for two portfolios is sometimes less
than the VaR for the aggregate portfolio. It is quite possible for two
portfolios to each have a VaR of less than $500,000, but the aggre-
gate portfolio to have a VaR of greater than $1,000,000. In other
words, the diversification principle can fail in the case of VaR.
Example 1.2 Suppose we have a single unhedged stock position that fol-
lows a real-world lognormal process
\frac{dS}{S} = \mu\, dt + \sigma\, dw.

We want the smallest loss threshold \ell such that

P\big( S_{\Delta t} < S_0 - \ell \big) \le 1 - p,

that is,

N\!\left( \frac{ \ln\big( (S_0 - \ell)/S_0 \big) - \mu \Delta t + \sigma^2 \Delta t / 2 }{ \sigma \sqrt{\Delta t} } \right) \le 1 - p,

which holds exactly when

\ell \ge S_0 \left( 1 - e^{\, \mu \Delta t - \frac{1}{2}\sigma^2 \Delta t + \sigma N^{-1}(1-p)\sqrt{\Delta t} } \right),

so that

\mathrm{VaR} = S_0 \left( 1 - e^{\, \mu \Delta t - \frac{1}{2}\sigma^2 \Delta t + \sigma N^{-1}(1-p)\sqrt{\Delta t} } \right),

where

N(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt.
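As an illustration of this closed form (a sketch only, with illustrative parameter values), the formula can be evaluated in Python using the standard normal quantile from scipy.

import numpy as np
from scipy.stats import norm

def lognormal_var(s0, mu, sigma, dt, p=0.99):
    """VaR = S0 * (1 - exp(mu*dt - 0.5*sigma^2*dt + sigma*N^{-1}(1-p)*sqrt(dt)))."""
    z = norm.ppf(1.0 - p)   # negative for p > 0.5
    return s0 * (1.0 - np.exp(mu * dt - 0.5 * sigma**2 * dt + sigma * z * np.sqrt(dt)))

# One-day 99% VaR for a $100 stock with 5% drift and 30% volatility (annualized).
print(lognormal_var(s0=100.0, mu=0.05, sigma=0.30, dt=1.0 / 252))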
E(\Delta X_i) = \mu_i, \qquad \mathrm{var}(\Delta X_i) = \sigma_i^2, \qquad \mathrm{corr}(\Delta X_i, \Delta X_j) = \rho_{ij},

\delta_i = \frac{\partial V}{\partial X_i},

L = -\Delta V \approx -\sum_{i=1}^{d} \delta_i\, \Delta X_i,

M = -\sum_{i=1}^{d} \delta_i\, \mu_i,

\Sigma^2 = \sum_{i=1}^{d} \delta_i^2 \sigma_i^2 + 2 \sum_{i<j} \delta_i \delta_j \rho_{ij} \sigma_i \sigma_j,

\mathrm{VaR}_p = M + \Sigma\, N^{-1}(p).
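A minimal Python sketch of this delta-normal calculation follows; the deltas, means, volatilities, and correlation matrix are illustrative inputs, not data from the chapter.

import numpy as np
from scipy.stats import norm

def delta_normal_var(deltas, mus, sigmas, corr, p=0.99):
    deltas, mus, sigmas = map(np.asarray, (deltas, mus, sigmas))
    M = -np.dot(deltas, mus)                  # mean of the loss L = -dV
    cov = corr * np.outer(sigmas, sigmas)     # covariance of the factor shifts
    var_L = deltas @ cov @ deltas             # Sigma^2: variance of the loss
    return M + np.sqrt(var_L) * norm.ppf(p)

corr = np.array([[1.0, 0.3], [0.3, 1.0]])
print(delta_normal_var(deltas=[1000.0, -500.0], mus=[0.0, 0.0],
                       sigmas=[0.02, 0.01], corr=corr))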
Full Revaluation
Now for each time t, and each position indexed by j, define
\Delta V_j(t) = V_j\big( X_{1,j}(0) + \Delta X_{1,j,t},\ \ldots,\ X_{n_j,j}(0) + \Delta X_{n_j,j,t} \big) - V_j\big( X_{1,j}(0),\ \ldots,\ X_{n_j,j}(0) \big) \qquad (2)

\Delta V(t) = \sum_{j=1}^{M} \Delta V_j(t). \qquad (3)
Finally, sort all N losses, (−ΔV(t))'s, one for each business day over the
N-day historical period ending yesterday, from high to low, and take the
one most closely corresponding to the 99th percentile. If using N=251, as
federal regulators allow, take the second worst loss. Note that this is not
really the 99th percentile of losses, really it is closer to the 99.2 percentile,
but the regulators do not allow interpolating between the second and
third worst P&L.
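A compact sketch of this full-revaluation historical procedure is shown below; the pricing functions, factor levels, and shifts are placeholders, and the rank-2 convention corresponds to taking the second worst loss when N = 251.

import numpy as np

def historical_var(positions, base_factors, shifts, rank=2):
    """
    positions:    list of pricing functions, each mapping a factor vector to an NPV.
    base_factors: today's risk-factor levels (length n_factors).
    shifts:       array of shape (N, n_factors) of historical one-day factor shifts.
    rank:         which worst loss to report (2nd worst ~ 99th percentile for N=251).
    """
    base_npv = sum(price(base_factors) for price in positions)
    losses = np.array([base_npv - sum(price(base_factors + s) for price in positions)
                       for s in shifts])
    return np.sort(losses)[::-1][rank - 1]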
Delta-Gamma Approximation
The delta-gamma approximation to the daily historical P&L for a posi-
tion is
\Delta V_j(t) \approx \frac{\partial V_j}{\partial t}\,\Delta t + \sum_{i=1}^{n_j} \left[ \frac{\partial V_j}{\partial X_{i,j}}\, \Delta X_{i,j,t} + \frac{1}{2}\, \frac{\partial^2 V_j}{\partial X_{i,j}^2}\, \Delta X_{i,j,t}^2 \right] + \sum_{i<k} \frac{\partial^2 V_j}{\partial X_{i,j}\, \partial X_{k,j}}\, \Delta X_{i,j,t}\, \Delta X_{k,j,t}. \qquad (4)
This is just the second order Taylor series approximation to (2), in con-
trast to the first order approximation of Example 1.3.
The purpose of this approximation is to potentially save time; the
number of position values we need to compute for (2) is M(N+1), which
can be quite large. In practice, when computing (4) it is common to
leave out the cross gamma terms, which are usually (but not always)
relatively small, and also the time derivative (theta) term at the begin-
ning, to arrive at
\Delta V_j(t) \approx \sum_{i=1}^{n_j} \left[ \frac{\partial V_j}{\partial X_{i,j}}\, \Delta X_{i,j,t} + \frac{1}{2}\, \frac{\partial^2 V_j}{\partial X_{i,j}^2}\, \Delta X_{i,j,t}^2 \right]. \qquad (5)

\frac{\partial V_j}{\partial X_{i,j}} = \frac{ V_j\big( X_{1,j}, \ldots, X_{i,j} + h, \ldots, X_{n_j,j} \big) - V_j\big( X_{1,j}, \ldots, X_{i,j} - h, \ldots, X_{n_j,j} \big) }{2h}

\frac{\partial^2 V_j}{\partial X_{i,j}^2} = \frac{ V_j\big( X_{1,j}, \ldots, X_{i,j} + h, \ldots, X_{n_j,j} \big) - 2 V_j + V_j\big( X_{1,j}, \ldots, X_{i,j} - h, \ldots, X_{n_j,j} \big) }{h^2}. \qquad (6)
For certain risk factors, such as stock price, it is common to use relative
bump sizes, which means substituting hXi , j for h. Observe that the total
number of prices you need to calculate is now
\#\mathrm{NPVs} = M + 2 \sum_{j=1}^{M} n_j.
h_{i,j} \approx \sqrt{ \mathrm{var}(\Delta X_{i,j}) + \big( \mathrm{mean}(\Delta X_{i,j}) \big)^2 } \qquad \text{(absolute)}

h_{i,j} \approx \sqrt{ \mathrm{var}\!\left( \frac{\Delta X_{i,j}}{X_{i,j}} \right) + \left( \mathrm{mean}\!\left( \frac{\Delta X_{i,j}}{X_{i,j}} \right) \right)^{\!2} } \qquad \text{(relative)}.

To see this, consider the case of a single risk factor, and the absolute
case. In that case we want

V(X + \Delta X) - V(X) \approx \frac{ V(X+h) - V(X-h) }{2h}\, \Delta X + \frac{1}{2}\, \frac{ V(X+h) - 2V(X) + V(X-h) }{h^2}\, \Delta X^2.

Expanding both sides in Taylor series and keeping terms through fourth order, this amounts to

V'(X)\Delta X + \frac{1}{2} V''(X)\Delta X^2 + \frac{1}{6} V'''(X)\Delta X^3 + \frac{1}{24} V''''(X)\Delta X^4 \approx V'(X)\Delta X + \frac{1}{2} V''(X)\Delta X^2 + \frac{1}{6} V'''(X)\, h^2 \Delta X + \frac{1}{24} V''''(X)\, h^2 \Delta X^2,

which holds when

h^2 \approx \Delta X^2
on average, which implies the result stated above. The argument is very
similar for two or more risk factors, given that we do not include the cross
gammas in the approximation.
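The finite-difference version of the delta-gamma approximation can be sketched as follows (illustrative only; cross gammas and the theta term are omitted, as in Eq. (5), and the pricing function, shifts, and bump sizes are placeholders).

import numpy as np

def delta_gamma_pnl(price, x0, dx, h):
    """
    price: pricing function of a vector of risk factors.
    x0:    current factor levels; dx: factor shifts; h: bump sizes per factor.
    """
    x0, dx, h = map(np.asarray, (x0, dx, h))
    pnl = 0.0
    base = price(x0)
    for i in range(len(x0)):
        up, dn = x0.copy(), x0.copy()
        up[i] += h[i]
        dn[i] -= h[i]
        delta = (price(up) - price(dn)) / (2.0 * h[i])          # central first difference
        gamma = (price(up) - 2.0 * base + price(dn)) / h[i]**2  # central second difference
        pnl += delta * dx[i] + 0.5 * gamma * dx[i] ** 2
    return pnl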
\big( X_{1,j}, \ldots, X_{n_j,j} \big) = G\big( Y_{1,j}, \ldots, Y_{n_j,j} \big).

Then

\left( \frac{\partial V_j}{\partial X_{1,j}}, \ldots, \frac{\partial V_j}{\partial X_{n_j,j}} \right)^{T} = J^{-1} \left( \frac{\partial V_j}{\partial Y_{1,j}}, \ldots, \frac{\partial V_j}{\partial Y_{n_j,j}} \right)^{T}, \qquad J_{i,k} = \frac{\partial X_i}{\partial Y_k}.
s_1 = (1 - R)\, \lambda_1

s_2 = \frac{ (1 - R)\, \lambda_1 \lambda_2\, \big( 1 - e^{-\lambda_1 - \lambda_2} \big) }{ \lambda_2 - \lambda_2 e^{-\lambda_1} + \lambda_1 e^{-\lambda_1} - \lambda_1 e^{-\lambda_1 - \lambda_2} }.
Then the Jacobian matrix, as discussed above, will have the following
entries, with the first two being on the first row and the last two being on
the last row:
\frac{\partial s_1}{\partial \lambda_1} = 1 - R

\frac{\partial s_1}{\partial \lambda_2} = 0

\frac{\partial s_2}{\partial \lambda_1} = (1 - R)\, \frac{ \lambda_2^2 - \lambda_2^2 e^{-\lambda_1 - \lambda_2} + \lambda_1 \lambda_2^2 e^{-\lambda_1 - \lambda_2} - \lambda_2^2 e^{-\lambda_1} + \lambda_2^2 e^{-2\lambda_1 - \lambda_2} - \lambda_1 \lambda_2^2 e^{-\lambda_1} + \lambda_1^2 \lambda_2 e^{-\lambda_1} - \lambda_1 \lambda_2 e^{-\lambda_1 - \lambda_2} }{ \big( \lambda_2 - \lambda_2 e^{-\lambda_1} + \lambda_1 e^{-\lambda_1} - \lambda_1 e^{-\lambda_1 - \lambda_2} \big)^2 }

\frac{\partial s_2}{\partial \lambda_2} = (1 - R)\, \frac{ \lambda_1^2 + \lambda_1 \lambda_2^2 e^{-\lambda_1 - \lambda_2} - \lambda_1^2 \lambda_2 e^{-\lambda_1 - \lambda_2} - \lambda_1^2 e^{-\lambda_1 - \lambda_2} - \lambda_1^2 e^{-2\lambda_1 - \lambda_2} - \lambda_1 \lambda_2^2 e^{-2\lambda_1 - \lambda_2} + \lambda_1 \lambda_2^2 e^{-2\lambda_1 - \lambda_2} + \lambda_1 e^{-2\lambda_1 - 2\lambda_2} }{ \big( \lambda_2 - \lambda_2 e^{-\lambda_1} + \lambda_1 e^{-\lambda_1} - \lambda_1 e^{-\lambda_1 - \lambda_2} \big)^2 }.
Then, rather than bumping the credit spreads and backing out the haz-
ard curve each time, we can instead bump the hazard rates and use the
Jacobian formula shown above.
V_j\big( X_1 + h_{-m}, X_2, \ldots, X_{n_j} \big),\ \ldots,\ V_j\big( X_1, \ldots, X_{n_j} \big),\ \ldots,\ V_j\big( X_1 + h_m, X_2, \ldots, X_{n_j} \big)

for an absolute grid, and

V_j\big( X_1 (1 + h_{-m}), X_2, \ldots, X_{n_j} \big),\ \ldots,\ V_j\big( X_1, \ldots, X_{n_j} \big),\ \ldots,\ V_j\big( X_1 (1 + h_m), X_2, \ldots, X_{n_j} \big)

for a relative grid. More generally, if we were using the kth risk factor, we would refer to these NPVs as

V_{k,-m},\ V_{k,-m+1},\ \ldots,\ V_{k,-1},\ V_{k,0} = V,\ V_{k,1},\ \ldots,\ V_{k,m}.
Relative grids are more common because the major risk factors for
which banks employ a grid are typically spot price or volatility, which are
always positive. For a relative grid in the risk factor X1, suppose that
\frac{\Delta X_1}{X_1} = (1 - \rho)\, h_k + \rho\, h_{k+1}, \qquad \rho \in [0, 1].
Then we set

\Delta V_1 \approx (1 - \rho)\, V_{1,k} + \rho\, V_{1,k+1} - V.

This is the P&L contribution from the first risk factor. Though one-
dimensional grid approximations still leave out the cross term risk, they
are often more accurate than delta-gamma approximations for larger shifts
since the latter is a parabola in the risk factor shifts. This grows much more
rapidly than most pricing functions, in terms of spot, volatility, or interest
rate shifts. For example, a vanilla call option approaches linearity as spot
increases, and approaches zero as spot decreases. If we were computing
VaR for an at-the-money call with strike equal to spot, the parabolic func-
tion given by the delta-gamma approximation would become quite large
as spot approached zero, rather than going to zero, and as spot increased
in the other direction, the parabolic function would grow much faster
than a function approaching a straight line.
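A one-dimensional relative-grid lookup of this kind can be sketched with simple linear interpolation; the grid levels and NPVs below are illustrative placeholders, not values from the text.

import numpy as np

def grid_pnl(rel_shift, grid_shifts, grid_npvs, base_npv):
    """
    rel_shift:   realized relative shift dX1/X1.
    grid_shifts: increasing array of relative shifts h_{-m}, ..., 0, ..., h_m.
    grid_npvs:   precomputed NPVs V_{1,-m}, ..., V_{1,0} = V, ..., V_{1,m}.
    """
    interpolated_npv = np.interp(rel_shift, grid_shifts, grid_npvs)  # (1-rho)*V_k + rho*V_{k+1}
    return interpolated_npv - base_npv

grid_shifts = np.array([-0.30, -0.15, 0.0, 0.15, 0.30])
grid_npvs = np.array([2.1, 4.8, 9.0, 14.6, 20.9])   # e.g. a call-like payoff profile
print(grid_pnl(0.10, grid_shifts, grid_npvs, base_npv=grid_npvs[2]))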
\Delta X_{i,j,t} = X_{i,j}(t) - X_{i,j}(t - 10) \qquad \text{(absolute)}

or

\Delta X_{i,j,t} = \frac{ \big( X_{i,j}(t) - X_{i,j}(t - 10) \big)\, X_{i,j}(0) }{ X_{i,j}(t - 10) } \qquad \text{(relative)}.
T,\ T + 1,\ \ldots,\ T + N + k - 1.

Then for a position with value V_j\big( X_{j,1}, \ldots, X_{j,n_j} \big), we would calculate the loss

L = V_j\big( X_{j,1}(0), \ldots, X_{j,n_j}(0) \big) - V_j\big( X_{j,1}(0) + \Delta X_{j,1,t},\ \ldots,\ X_{j,n_j}(0) + \Delta X_{j,n_j,t} \big), \qquad t = T,\ T + 1,\ \ldots,\ T + N - 1,

where

\Delta X_{j,i,t} = X_{j,i}(t + k) - X_{j,i}(t) \qquad \text{(absolute)}

\Delta X_{j,i,t} = \frac{ X_{j,i}(t + k) - X_{j,i}(t) }{ X_{j,i}(t) }\, X_{j,i}(0) \qquad \text{(relative)}.
The argument zero as usual stands for today or the most recent close
date, but the difference here is that the last close date does not coincide
with the last day of the history, namely T + N + k − 1, as it does for general
VaR. Typically for regulatory purposes k = 10, and it is standard practice
to choose the historical period so as to maximize the resulting total VaR
for the bank’s entire trading book subject to regulation. One example of a
very stressful period is the one-year period from April 2008 to April 2009.
Once again, the Feds allow as little as a year’s worth of starting times, that
is N = 251. For details on this, see [1], Sections 4 and 5.
P(\#\text{exceptions} \ge m) = \sum_{k=m}^{n} \frac{n!}{(n-k)!\, k!}\; p^{\,n-k} (1 - p)^{k}.
m     P(exactly m exceptions)     P(at least m exceptions)
0             7.94%                     100.00%
1            20.22%                      92.06%
2            25.64%                      71.83%
3            21.58%                      46.20%
4            13.57%                      24.62%
5             6.80%                      11.05%
6             2.83%                       4.25%
7             1.00%                       1.43%
8             0.31%                       0.42%
9             0.09%                       0.11%
10            0.02%                       0.03%
Note that the Feds will start to have serious doubts about the VaR model if
there are six or more exceptions in a year, and in fact, there are increasing capital
requirements for four or more.
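The binomial calculation behind such a table can be sketched in a few lines; the values produced will be close to, though not necessarily identical with, the table above depending on the exact n and rounding used.

from math import comb

def exception_probabilities(n=251, q=0.01, m_max=10):
    """q is the exception probability per day (1 - p for a 99% VaR)."""
    exact = [comb(n, k) * q**k * (1 - q) ** (n - k) for k in range(n + 1)]
    rows = []
    for m in range(m_max + 1):
        rows.append((m, exact[m], sum(exact[m:])))
    return rows

for m, p_exact, p_at_least in exception_probabilities():
    print(f"{m:2d}  {p_exact:7.2%}  {p_at_least:8.2%}")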
F ( x1 , …,xd ) = P ( ∆X 1 ≤ x1 , …,∆X d ≤ xd ) .
Sort the losses in ascending order, and choose the loss at the correct
percentile, in other words if p=0.99 and N=10000, then choose the 9900th
loss.
We assume for this exposition that the actual historical risk factor
shifts, or time series, denoted by (ΔX_{1,t}, …, ΔX_{d,t}), t = T, T + 1, …, T + n,
are independent and identically distributed (i.i.d.). The model we create is
known as an unconditional model. A conditional model is based on a time
series in which we assume that the distribution of (ΔX1 , t, … , ΔXd , t) can
depend on the values of (ΔX1 , s, … , ΔXd , s) for s < t. There is a great deal of
literature on conditional models (see [1]), but these are beyond the scope
of this brief overview.
X = \mu + A Z

Z = (Z_1, \ldots, Z_k)^{T}, \quad Z_i \ \text{i.i.d.} \sim N(0, 1)

A = d \times k \ \text{matrix}

\mu = (\mu_1, \ldots, \mu_d)^{T}

X \sim N(\mu, \Sigma), \qquad \Sigma = A A^{T},
1. Compute the historical means over the last year, or several years,
denoted by μ_i, and their historical variances, denoted by σ_i^2.
2. Compute the historical covariances σi , j, rounding out the rest of
covariance matrix, denoted by Σ.
3. Use Cholesky decomposition to simulate (ΔX1, … , ΔXd) as multi-
variate Normal N((μ1, … , μd), Σ) 10000 times and each time
\beta = \frac{ E\big( (X - \mu)^3 \big) }{ \sigma^3 }, \qquad \kappa = \frac{ E\big( (X - \mu)^4 \big) }{ \sigma^4 }.
X_1, \ldots, X_n.

\bar{X} = n^{-1} \sum_{i=1}^{n} X_i

S = \frac{1}{n} \sum_{i=1}^{n} \big( X_i - \bar{X} \big) \big( X_i - \bar{X} \big)^{T}
It is well known that using 1/(n − 1) instead of 1/n makes the covari-
ance estimator unbiased.
\beta(X) = \frac{ n^{-1} \sum_{i=1}^{n} \big( X_i - \bar{X} \big)^3 }{ \Big( n^{-1} \sum_{i=1}^{n} \big( X_i - \bar{X} \big)^2 \Big)^{3/2} }

\kappa(X) = \frac{ n^{-1} \sum_{i=1}^{n} \big( X_i - \bar{X} \big)^4 }{ \Big( n^{-1} \sum_{i=1}^{n} \big( X_i - \bar{X} \big)^2 \Big)^{2} }

T = n \left( \frac{1}{6}\, \beta^2 + \frac{1}{24}\, (\kappa - 3)^2 \right).

\Pr(T \le t) \approx \int_{0}^{t} \frac{1}{2}\, e^{-u/2}\, du = 1 - e^{-t/2}
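These sample statistics and the chi-squared(2) test can be computed directly from a return series; the sketch below uses a simulated fat-tailed sample purely for illustration.

import numpy as np

def normality_test(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    m2 = np.mean((x - xbar) ** 2)
    beta = np.mean((x - xbar) ** 3) / m2 ** 1.5    # sample skewness
    kappa = np.mean((x - xbar) ** 4) / m2 ** 2     # sample kurtosis
    T = n * (beta ** 2 / 6.0 + (kappa - 3.0) ** 2 / 24.0)
    p_value = np.exp(-T / 2.0)                     # P(T > t) = e^{-t/2}
    return beta, kappa, T, p_value

returns = np.random.standard_t(df=4, size=1000)    # illustrative fat-tailed sample
print(normality_test(returns))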
f(X; a), \qquad a = (a_1, \ldots, a_m),

\Lambda = \ln\left( \prod_{i=1}^{n} f(X_i; a_1, \ldots, a_m) \right) = \sum_{i=1}^{n} \ln\big( f(X_i; a_1, \ldots, a_m) \big).
\mu = (\mu_1, \ldots, \mu_d)^{T}

W \ge 0 \ \text{is a random variable}

Z = (Z_1, \ldots, Z_k)^{T} \sim N_k(0, I_k), \quad k \le n

A \in \mathbb{R}^{d \times k}

X = \mu + \sqrt{W}\, A Z.

In the one-dimensional case, X = \mu + \sqrt{W}\, Z, and

E(X) = \mu

\mathrm{Var}(X) = E(W)

\beta(X) = 0

\kappa(X) = 3\, \frac{ E\big( W^2 \big) }{ \big( E(W) \big)^2 } = 3 \left( 1 + \frac{ \mathrm{Var}(W) }{ \big( E(W) \big)^2 } \right),

so that when W is normalized to have E(W) = 1,

\kappa(X) = 3\, \big( 1 + \mathrm{Var}(W) \big).
X_1 = \mu_1 + \sqrt{W}\, Z_1

X_2 = \mu_2 + \sqrt{W}\, Z_2

\mathrm{cov}(X) = E(W)\, \Sigma.
f(x) = \int_D f_{X|W}(x \mid w)\, h(w)\, dw = \int_D \frac{ w^{-n/2} }{ (2\pi)^{n/2}\, |\Sigma|^{1/2} } \exp\!\left( - \frac{ (x - \mu)^{T} \Sigma^{-1} (x - \mu) }{ 2w } \right) h(w)\, dw.

h(w) = \frac{ \beta^{\alpha}\, e^{-\beta/w}\, w^{-\alpha - 1} }{ \Gamma(\alpha) }, \quad w > 0,

\Gamma(\alpha) = \int_{0}^{\infty} x^{\alpha - 1} e^{-x}\, dx,

\alpha > 2, \ \beta > 0.
Then the d-dimensional normal mixture random vector with \alpha = \beta = \frac{\nu}{2} has what is known as the multivariate t distribution with \nu degrees of freedom. From the formula in section "A Simple Type of Fat-Tailed Distribution", it is possible to derive the density in closed form as

f(x) = \frac{ \Gamma\!\big( \tfrac{1}{2}(\nu + d) \big) }{ \Gamma\!\big( \tfrac{\nu}{2} \big)\, (\pi\nu)^{d/2}\, |\Sigma|^{1/2} } \left( 1 + \frac{ (x - \mu)^{T} \Sigma^{-1} (x - \mu) }{ \nu } \right)^{-(\nu + d)/2}.
X = \mu + \sqrt{W}\, A Z.

E(W) = \frac{\nu}{\nu - 2}

\mathrm{cov}(X) = \frac{\nu}{\nu - 2}\, \Sigma,

so that if \Sigma is instead taken to denote the covariance matrix of X, the corresponding scale matrix is

\frac{\nu - 2}{\nu}\, \Sigma.
With this convention the density becomes

f(x) = \frac{ \Gamma\!\big( \tfrac{1}{2}(\nu + d) \big) }{ \Gamma\!\big( \tfrac{\nu}{2} \big)\, (\pi\nu)^{d/2}\, \big| \tfrac{\nu - 2}{\nu}\, \Sigma \big|^{1/2} } \left( 1 + \frac{ (x - \mu)^{T} \Sigma^{-1} (x - \mu) }{ \nu - 2 } \right)^{-\frac{\nu + d}{2}},

and the log likelihood of a sample X_1, \ldots, X_N is

\Lambda = \sum_{k=1}^{N} \ln\!\left[ \frac{ \Gamma\!\big( \tfrac{1}{2}(\nu + d) \big) }{ \Gamma\!\big( \tfrac{\nu}{2} \big)\, (\pi\nu)^{d/2}\, \big| \tfrac{\nu - 2}{\nu}\, \Sigma \big|^{1/2} } \left( 1 + \frac{ (X_k - \mu)^{T} \Sigma^{-1} (X_k - \mu) }{ \nu - 2 } \right)^{-\frac{\nu + d}{2}} \right]

= N \ln \Gamma\!\big( \tfrac{1}{2}(\nu + d) \big) - N \ln \Gamma\!\big( \tfrac{\nu}{2} \big) - \frac{Nd}{2} \ln \pi - \frac{Nd}{2} \ln(\nu - 2) - \frac{N}{2} \ln |\Sigma| - \sum_{k=1}^{N} \frac{\nu + d}{2} \ln\!\left( 1 + \frac{ (X_k - \mu)^{T} \Sigma^{-1} (X_k - \mu) }{ \nu - 2 } \right).
(A) Express the joint density of (X, W) as the product of the density of
W and the density of X|W.
(B) Estimate the parameters Σ, μ based on the latest estimates of your W
parameters and the known values of the samples Xi , i = 1 , … , n.
(C) Then do maximum log likelihood to get the parameters of W
using the density function h; but you don't have the W_i's, so instead
you use expectations of certain functions of the W_i's, which in turn
were derived from the latest Σ, μ and the distribution of W|X given
those parameters.
(D) Keep doing (A) and (B) until you achieve convergence.
We will now present a more precise, numbered, set of steps for the
t-distribution. First note that in the case of the t-distribution, W has only
one parameter ν, and by Bayes’ theorem we can express
f_{W|X} = \frac{ f_{X|W}\, h }{ f_X },
so that
f_{W|X}(w \mid x) = \frac{1}{ (2\pi)^{d/2}\, |\Sigma|^{1/2}\, w^{d/2} } \exp\!\left( - \frac{ (x - \mu)^{T} \Sigma^{-1} (x - \mu) }{ 2w } \right) \frac{ \big( \tfrac{\nu}{2} \big)^{\nu/2}\, e^{-\nu/(2w)}\, w^{-\nu/2 - 1} }{ \Gamma\!\big( \tfrac{1}{2}\nu \big) }

\times \frac{ (\pi\nu)^{d/2}\, |\Sigma|^{1/2}\, \Gamma\!\big( \tfrac{\nu}{2} \big) }{ \Gamma\!\big( \tfrac{\nu}{2} + \tfrac{d}{2} \big) } \left( 1 + \frac{ (x - \mu)^{T} \Sigma^{-1} (x - \mu) }{ \nu } \right)^{\frac{\nu}{2} + \frac{d}{2}}

= \frac{1}{ \Gamma\!\big( \tfrac{\nu}{2} + \tfrac{d}{2} \big) } \left( \frac{ \nu + (x - \mu)^{T} \Sigma^{-1} (x - \mu) }{ 2 } \right)^{\frac{\nu}{2} + \frac{d}{2}} \exp\!\left( - \frac{ \nu + (x - \mu)^{T} \Sigma^{-1} (x - \mu) }{ 2w } \right) w^{-\frac{\nu}{2} - \frac{d}{2} - 1},

which is the inverse gamma density h(w) with parameters

\alpha = \frac{\nu}{2} + \frac{d}{2}, \qquad \beta = \frac{ \nu + (x - \mu)^{T} \Sigma^{-1} (x - \mu) }{ 2 }.
The complete-data log likelihood then splits into two pieces,

\Lambda = \sum_{i=1}^{n} \ln f_{X|W}\big( X_i \mid W_i;\ \mu, \Sigma \big) + \sum_{i=1}^{n} \ln h_W\big( W_i;\ \nu \big).
Armed with this important information, we can now give an exact rec-
ipe for the algorithm.
Step 1 Set
\mu^{[1]} = \bar{X}, \qquad \Sigma^{[1]} = S,

the sample mean and covariance, and let ν^{[1]} be some "reasonable" first
guess for ν. For notational conciseness, let θ[1] = (ν[1], μ[1], Σ[1]). Let k be
the iteration counter, and set that equal to one.
Step 2 Let

\alpha^{[k]} = \frac{ \nu^{[k]} + d }{ 2 }

\beta_i^{[k]} = \frac{1}{2} \left( \nu^{[k]} + \big( X_i - \mu^{[k]} \big)^{T} \big( \Sigma^{[k]} \big)^{-1} \big( X_i - \mu^{[k]} \big) \right)

\delta_i^{[k]} = E\big( W_i^{-1} \mid X_i;\ \theta^{[k]} \big) = \frac{ \alpha^{[k]} }{ \beta_i^{[k]} }

\delta^{[k]} = \frac{1}{n} \sum_{i=1}^{n} \delta_i^{[k]}.
Step 3 Let

\mu^{[k+1]} = \frac{ \sum_{i=1}^{n} \delta_i^{[k]} X_i }{ n\, \delta^{[k]} }

\Psi = n^{-1} \sum_{i=1}^{n} \delta_i^{[k]} \big( X_i - \mu^{[k+1]} \big) \big( X_i - \mu^{[k+1]} \big)^{T}

\Sigma^{[k+1]} = \frac{ \big| \Sigma^{[k]} \big|^{1/d}\, \Psi }{ \big| \Psi \big|^{1/d} }.
Step 4 Define an intermediate set of parameters θ[k, 2] = (ν[k], μ[k + 1], Σ[k + 1]).
Then let
\beta_i^{[k,2]} = \frac{1}{2} \left( \nu^{[k]} + \big( X_i - \mu^{[k+1]} \big)^{T} \big( \Sigma^{[k+1]} \big)^{-1} \big( X_i - \mu^{[k+1]} \big) \right)

\delta_i^{[k,2]} = E\big( W_i^{-1} \mid X_i;\ \theta^{[k,2]} \big) = \frac{ \alpha^{[k]} }{ \beta_i^{[k,2]} }

\xi_i^{[k,2]} = E\big( \ln W_i \mid X_i;\ \theta^{[k,2]} \big) \approx \varepsilon^{-1} \left( \frac{ \Gamma\big( \alpha^{[k]} - \varepsilon \big)\, \big( \beta_i^{[k,2]} \big)^{\varepsilon} }{ \Gamma\big( \alpha^{[k]} \big) } - 1 \right) \quad (\varepsilon \ \text{small}).
Step 5 In the sum

\sum_{i=1}^{n} \ln h\big( W_i;\ \nu \big) = \sum_{i=1}^{n} \left[ \frac{\nu}{2} \ln \frac{\nu}{2} - \frac{\nu}{2}\, W_i^{-1} - \left( \frac{\nu}{2} + 1 \right) \ln W_i - \ln \Gamma\!\left( \frac{\nu}{2} \right) \right],

substitute \delta_i^{[k,2]} for W_i^{-1} and \xi_i^{[k,2]} for \ln W_i. Now maximize the resulting function over \nu to obtain \nu^{[k+1]}.
Now let θ[k + 1] = (ν[k + 1], μ[k + 1], Σ[k + 1]), replace k by k+1, go back to Step
2, and repeat Steps 2–5 until you achieve convergence.
This algorithm can be generalized to any multivariate distribution which
takes the form of a normal mixture. The only difference might be that if
the density h(w) has more than one parameter, then Step 5 will be more
complex, and the conditional distribution fW|X(w| x) may be more chal-
lenging to work out, but other than that the algorithm would be the same.
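A compact Python sketch of this recipe is given below; it is an illustration under stated simplifications, not the author's implementation. It uses the exact digamma expression for E[ln W | X] in place of the small-epsilon approximation, and a coarse grid search for the degrees-of-freedom update, and it treats Sigma^[k] in the determinant-scaling step as described above.

import numpy as np
from scipy.special import gammaln, digamma

def fit_t_em(X, nu0=8.0, n_iter=50):
    n, d = X.shape
    mu = X.mean(axis=0)                      # Step 1: sample mean
    Sigma = np.cov(X, rowvar=False)          # Step 1: sample covariance
    nu = nu0
    for _ in range(n_iter):
        # Step 2: conditional expectations of 1/W_i at the current parameters.
        alpha = (nu + d) / 2.0
        diff = X - mu
        q = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma), diff)
        delta = alpha / ((nu + q) / 2.0)     # E[1/W_i | X_i]
        # Step 3: update mu and Sigma with the determinant-preserving scaling.
        mu = (delta[:, None] * X).sum(axis=0) / delta.sum()
        diff = X - mu
        Psi = (delta[:, None, None] * np.einsum("ij,ik->ijk", diff, diff)).mean(axis=0)
        Sigma = (np.linalg.det(Sigma) ** (1.0 / d)) * Psi / (np.linalg.det(Psi) ** (1.0 / d))
        # Step 4: recompute expectations at the intermediate parameters.
        q = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma), diff)
        beta2 = (nu + q) / 2.0
        delta2 = alpha / beta2               # E[1/W_i | X_i]
        xi2 = np.log(beta2) - digamma(alpha) # E[ln W_i | X_i], exact inverse-gamma form
        # Step 5: maximize the expected sum of ln h(W_i; nu) over nu on a grid.
        def g(v):
            return (n * (0.5 * v * np.log(0.5 * v) - gammaln(0.5 * v))
                    - 0.5 * v * delta2.sum() - (0.5 * v + 1.0) * xi2.sum())
        grid = np.linspace(2.1, 60.0, 580)
        nu = grid[np.argmax([g(v) for v in grid])]
    return nu, mu, Sigma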
Expected Shortfall
Within a few short years, it will no longer be acceptable to regulators for
banks to use general and stressed VaR as key risk management tools; they
will be required to replace them with a variant known as expected shortfall, or
ES for short.
To understand what ES is, think of VaR as a high percentile of the
possible losses; ES, on the other hand, is the average of the tail beyond a
certain percentile. More rigorously, define
ES(\alpha) = E^{P}\big( V(0) - V(\Delta t) \;\big|\; V(0) - V(\Delta t) \ge \alpha \big), \qquad \alpha = \mathrm{VaR}(p).
ES also has an important theoretical advantage over VaR, that is, it satisfies subadditivity. Precisely, given two portfolios
P1 , P2, and P = P1 + P2, then
ES ( P1 ;α ) + ES ( P2 ;α ) ≥ ES ( P;α ) .
VaR ( P1 ) = $110
VaR ( P2 ) = $100.
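As a small illustration of the definition (not from the text), ES can be estimated from any sample of simulated or historical losses as the average of the losses at or beyond the VaR threshold; the loss sample below is simulated purely for demonstration.

import numpy as np

def var_and_es(losses, p=0.99):
    losses = np.asarray(losses)
    var = np.quantile(losses, p)
    es = losses[losses >= var].mean()
    return var, es

losses = np.random.standard_t(df=4, size=100000) * 1_000_000   # illustrative fat-tailed losses
print(var_and_es(losses))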
Stress Testing
A somewhat simpler, but equally important aspect of risk management is
the concept of stress testing. A stress scenario consists of applying a single
extreme set of shocks to the current values of banks’ risk factors, and com-
puting the change in net present value that results. Stress scenarios take
two forms, business as usual and CCAR, or comprehensive capital analysis
and review, which is the regulators’ annual review of all the major banks’
risk management practices.
A business as usual (BAU) stress scenario is a choice of two dates in the
past, t_1 < t_2. For a position indexed by j, we compute

\Delta V_j = V_j\big( X'_{1,j}, \ldots, X'_{n_j,j} \big) - V_j\big( X_{1,j}, \ldots, X_{n_j,j} \big)

where

X'_{i,j} = X_{i,j} + X_{i,j}(t_2) - X_{i,j}(t_1) \qquad \text{(absolute)}

X'_{i,j} = X_{i,j} \cdot \frac{ X_{i,j}(t_2) }{ X_{i,j}(t_1) } \qquad \text{(relative)}.
On the other hand, in the case of a CCAR stress scenario, the regulators
decide on a set of fixed shift amounts, so that
\Delta V_j = V_j\big( X'_{1,j}, \ldots, X'_{n_j,j} \big) - V_j\big( X_{1,j}, \ldots, X_{n_j,j} \big)

X'_{i,j} = X_{i,j} + A_{i,j} \qquad \text{(absolute)}

X'_{i,j} = X_{i,j} \cdot A_{i,j} \qquad \text{(relative)}.
References
1. Federal Register, vol 77, No. 169, Rules and Regulations. Office of the
Comptroller of the Currency, August 2012.
2. Basel Committee on Banking Supervision, "Instructions: Impact Study on the
Proposed Frameworks for Market Risk and CVA Risk", July 2015.
Modern Risk Management Tools
and Applications
Yimin Yang
Introduction
One of the important changes brought by the recent financial crisis is
the improvement in quantitative risk management tools used by financial
institutions. These tools are not necessarily software applications or sys-
tems provided by vendors; they also include quantitative methodologies/
models, metrics/measurements, and even processes developed by finan-
cial institutions. The purposes of these tools could be either for internal
risk management (such as credit risk ratings) or for regulatory compliance
(such as capital calculation), or both.
Not all of these tools are entirely new. Some tools have been used by
the financial industry for a number of years, but with increasing levels
of sophistication and complexity. However, others are recently devel-
oped to meet the challenges of the new regulatory and risk management
environment.
The view expressed in this paper represents the personal opinion of the author and
not those of his current or previous employers.
Y. Yang (*)
Protiviti, Inc., 3343 Peachtree Road NE, Suite 600, Atlanta, GA 30326, USA
Framework and Methodology
\mathrm{Prob}\big\{ \text{Customer Default} \mid X_1, \ldots, X_n,\ Y_1, \ldots, Y_m,\ Z_1, \ldots, Z_s,\ V_1, \ldots, V_q \big\}
= \frac{1}{ 1 + e^{ -\left( a_0 + a_1 X_1 + \cdots + a_n X_n + b_1 Y_1 + \cdots + b_m Y_m + c_1 Z_1 + \cdots + c_s Z_s + d_1 V_1 + \cdots + d_q V_q \right) } }
A_t = A_0\, e^{ \left( r - \frac{\sigma_A^2}{2} \right) t + \sigma_A W_t }

Here A_0 is the current asset value, r is the risk-free return, \sigma_A is the asset
volatility, and W_t is a standard Brownian motion. Let D be the debt; when
the asset A_t falls below the debt D, the company defaults. Hence
the default probability at time t is given by

P_t = \mathrm{Prob}\{ A_t < D \} = \mathrm{Prob}\left\{ A_0\, e^{ \left( r - \frac{\sigma_A^2}{2} \right) t + \sigma_A W_t } < D \right\} = \mathrm{Prob}\left\{ \frac{W_t}{\sqrt{t}} < \frac{ \ln\frac{D}{A_0} - \left( r - \frac{\sigma_A^2}{2} \right) t }{ \sigma_A \sqrt{t} } \right\} = 1 - \Phi(\mathrm{DTD}),

where \Phi is the cumulative distribution function of the standard normal distribution and

\mathrm{DTD} = \frac{ \ln\frac{A_0}{D} + \left( r - \frac{\sigma_A^2}{2} \right) t }{ \sigma_A \sqrt{t} }

is called the distance-to-default.
However, this approach has a drawback: the asset value At and its
volatility parameter σA are not directly observable. Instead, we can only
observe the equity (stocks) Et and its volatility σE. Using Merton’s view
and the celebrated Black-Scholes formula for option pricing, we obtain
E_t = A_t \cdot \Phi\!\left( \frac{ \ln\frac{A_t}{D} + \left( r + \frac{\sigma_A^2}{2} \right) t }{ \sigma_A \sqrt{t} } \right) - D \cdot e^{-rt} \cdot \Phi\!\left( \frac{ \ln\frac{A_t}{D} + \left( r - \frac{\sigma_A^2}{2} \right) t }{ \sigma_A \sqrt{t} } \right).

To find the connection between \sigma_E and \sigma_A, we use Itô's lemma to derive

\sigma_E \cdot E_t = \sigma_A \cdot A_t \cdot \Phi\!\left( \frac{ \ln\frac{A_t}{D} + \left( r + \frac{\sigma_A^2}{2} \right) t }{ \sigma_A \sqrt{t} } \right).
Using the historical stock prices E_t and the two equations above, we can
solve for A_t and \sigma_A to find the default probability

P_t = 1 - \Phi\!\left( \frac{ \ln\frac{A_0}{D} + \left( r - \frac{\sigma_A^2}{2} \right) t }{ \sigma_A \sqrt{t} } \right) = 1 - \Phi(\mathrm{DTD}).
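As an illustration of this calculation (a sketch with made-up inputs, not from the chapter), the two equations can be solved numerically for the unobservable asset value and asset volatility, after which the distance-to-default and PD follow directly.

import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

def merton_pd(E, sigma_E, D, r, t):
    def equations(v):
        A, sigma_A = v
        d1 = (np.log(A / D) + (r + 0.5 * sigma_A**2) * t) / (sigma_A * np.sqrt(t))
        d2 = d1 - sigma_A * np.sqrt(t)
        eq1 = A * norm.cdf(d1) - D * np.exp(-r * t) * norm.cdf(d2) - E      # equity value equation
        eq2 = sigma_A * A * norm.cdf(d1) - sigma_E * E                      # volatility link equation
        return [eq1, eq2]

    A0, sigma_A = fsolve(equations, x0=[E + D, sigma_E * E / (E + D)])
    dtd = (np.log(A0 / D) + (r - 0.5 * sigma_A**2) * t) / (sigma_A * np.sqrt(t))
    return 1.0 - norm.cdf(dtd)

# Example: equity 40, equity vol 60%, debt 60, risk-free rate 2%, one-year horizon.
print(merton_pd(E=40.0, sigma_E=0.60, D=60.0, r=0.02, t=1.0))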
F_{PD}(t) = \frac{ PD(t) }{ PD^{*} }

\text{One Year Expected Default Rate} = PD^{*} \cdot \int_{0}^{\infty} w_t \cdot \big( F(t+1) - F(t) \big)\, dt

and

\text{Lifetime Expected Default Rate} = PD^{*} \cdot \int_{0}^{\infty} w_t \cdot \big( 1 - F(t) \big)\, dt.
\mathrm{Var}(L) = \mathrm{Var}\left( \sum_{i=1}^{n} B_i \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} \mathrm{Covar}\big( B_i, B_j \big) = \sum_{i=1}^{n} \sum_{j=1}^{n} \rho_{i,j} \cdot \sigma_{B_i} \cdot \sigma_{B_j}

Here \rho_{i,j} for i \ne j is the default correlation between the ith and jth loan, \rho_{i,i} = 1, and \sigma_{B_i} = \sigma_{B_j} = \sqrt{ p(1-p) } is the standard deviation. If we assume that the default correlations between any two loans are the same, then \rho_{i,j} = \rho and

\mathrm{Var}(L) = n p(1-p) + \sum_{i \ne j} \rho \cdot p(1-p) = n p(1-p) + n(n-1)\, \rho \cdot p(1-p) = p(1-p)\, \big[ n + n(n-1)\rho \big].

So the standard deviation is

\mathrm{StDev}(L) = \sqrt{ \mathrm{Var}(L) } = n \sqrt{ p(1-p) }\, \sqrt{ \rho \left( 1 - \frac{1}{n} \right) + \frac{1}{n} }.

Since the portfolio has n loans, the probability of default for the portfolio is PD = L/n, whose standard deviation is \mathrm{StDev}(L)/n; for large n this is approximately \sqrt{ \rho\, p(1-p) }, so

\rho \approx \frac{ \big( \mathrm{StDev}(PD) \big)^2 }{ p(1-p) }.
This is a quite simple but useful result for estimating default correla-
tions for pools of retail loans using historical default data. This is a very
popular method to derive the unobservable parameter ρ at the portfolio
level. The next example will introduce another method that can simplify
the loss calculation.
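A short sketch of this estimate (illustrative only) takes a historical series of annual default rates for a pool and backs out the implied default correlation from its mean and variance.

import numpy as np

def implied_default_correlation(default_rates):
    rates = np.asarray(default_rates, dtype=float)
    p = rates.mean()
    return rates.var(ddof=1) / (p * (1.0 - p))

# Illustrative annual default rates for a retail pool.
history = [0.012, 0.015, 0.010, 0.035, 0.028, 0.011, 0.009, 0.014]
print(implied_default_correlation(history))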
A_i = e^{ \sqrt{\theta}\, Z + \sqrt{1 - \theta}\, \varepsilon_i }.

\mathrm{Prob}\{ L^{*} \le x \} = \Phi\!\left( \frac{ \sqrt{1 - \theta}\, \Phi^{-1}(x) - \Phi^{-1}(p) }{ \sqrt{\theta} } \right).

K = LGD \cdot \left[ \Phi\!\left( \frac{ \sqrt{\theta}\, \Phi^{-1}(\alpha) + \Phi^{-1}(p) }{ \sqrt{1 - \theta} } \right) - p \right] \cdot M_{Adj}, \quad \text{where the}
asset correlation θ is chosen based on the types of the loans. For example,
θ = 0.15 for residential mortgages and θ = 0.04 for qualifying revolving
retail exposures.
Basel capital is often called “supervisory” or “regulatory” capital. It is a
prescribed risk measure that may not necessarily be based on bank’s indi-
vidual risk profile. Its biggest advantages are that it is simple and additive
(that is, the capital for a portfolio is the sum of the capitals for individual
loans in the portfolio).
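The supervisory capital formula quoted above can be sketched as a one-line Python function; the PD, LGD, and asset-correlation inputs below are illustrative, alpha is set to 0.999, and the maturity adjustment is passed in as a given factor.

import numpy as np
from scipy.stats import norm

def basel_capital(p, lgd, theta, alpha=0.999, m_adj=1.0):
    k = norm.cdf((np.sqrt(theta) * norm.ppf(alpha) + norm.ppf(p)) / np.sqrt(1.0 - theta))
    return lgd * (k - p) * m_adj

# Residential mortgage example: PD 2%, LGD 25%, asset correlation 0.15.
print(basel_capital(p=0.02, lgd=0.25, theta=0.15))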
\mathrm{Prob}\big\{ L \le EC_{\alpha}(L) + E[L] \big\} = \alpha.

EC_{\alpha}(L) = \mathrm{VaR}_{\alpha}(L) - E[L].

Furthermore, we have

ES_{\alpha}(L) = E\big[ L \mid L \ge \mathrm{VaR}_{\alpha}(L) \big] - E[L],

and hence

ES_{\alpha}(L) \ge EC_{\alpha}(L).
Most banks define hurdle rates (typically between 12% and 24%) and
require RAROC to be higher than the hurdles.
RAROC can be used in portfolio optimization so banks can determine
the optimal settings for the portfolio allocation or generate an efficient
frontier that describes risk–return trade-off for the loans.
is always a number between zero and one. Now we are ready to give the
definition of copula.
This gives the credit deterioration pattern of a new portfolio in the first
six months.
One can easily check: A_{\text{age }1} = A_{\text{age }\frac{1}{2}} \times A_{\text{age }\frac{1}{2}}.
Example: PD Forecasting
The PD models we discussed earlier also need to be redesigned to incor-
porate macroeconomic factors. For example, PIT PD which is driven by
distance-to-default, can be linked to macroeconomic factors in two ways.
The first is to forecast the company's financials in the future. As both the
equity volatility and stock index are forecasted by the Fed, one can apply
the asset–equity relationship equations introduced at the beginning of this
chapter to estimate the PIT PD under various scenarios. However, the
forecast for a company’s financials and capital structure is not easy and
often inaccurate. This approach contains a lot of uncertainties.
\{ PD_1, PD_2, \ldots, PD_n \}

and use the relationship PD = 1 - F(\mathrm{DTD}) to back out the DTD time
series by \mathrm{DTD} = \Phi^{-1}(1 - PD):

\{ \mathrm{DTD}_1, \mathrm{DTD}_2, \ldots, \mathrm{DTD}_n \}.

In fact, for any given historical PD series \{ p_1, p_2, \ldots, p_n \}, one can convert it to a DTD series by applying the Probit transformation \Phi^{-1}(1 - p):

\{ \mathrm{DTD}_1 = \Phi^{-1}(1 - p_1),\ \mathrm{DTD}_2 = \Phi^{-1}(1 - p_2),\ \ldots,\ \mathrm{DTD}_n = \Phi^{-1}(1 - p_n) \}.
Now we can use regression or other time series techniques to link
{DTD1, DTD2, ⋯ , DTDn} to macroeconomic factors, and to forecast
DTD under various scenarios. We can then use PD = 1 − Φ(DTD) to
convert the forecast DTD back to PD.
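A minimal sketch of this workflow is shown below; the PD history, the macroeconomic driver (unemployment is used here purely as an example), the scenario path, and the simple OLS link are all illustrative assumptions.

import numpy as np
from scipy.stats import norm

pd_history = np.array([0.010, 0.012, 0.018, 0.030, 0.022, 0.015, 0.011, 0.013])
unemployment = np.array([4.8, 5.0, 6.1, 9.3, 8.5, 7.2, 6.0, 5.5])        # illustrative driver
unemployment_scenario = np.array([8.0, 9.5, 10.0])                       # illustrative stressed path

dtd = norm.ppf(1.0 - pd_history)                        # Probit transform: DTD = Phi^{-1}(1 - PD)
X = np.column_stack([np.ones_like(unemployment), unemployment])
coef, *_ = np.linalg.lstsq(X, dtd, rcond=None)          # simple OLS link to the macro factor

dtd_forecast = coef[0] + coef[1] * unemployment_scenario
pd_forecast = 1.0 - norm.cdf(dtd_forecast)              # convert the forecast DTD back to PD
print(pd_forecast)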
\{ \ldots,\ Q_{2000Q1},\ Q_{2000Q2},\ Q_{2000Q3},\ \ldots,\ Q_{2013Q1},\ Q_{2013Q2},\ Q_{2013Q3},\ Q_{2013Q4},\ \ldots \}.

\{ \ldots,\ W_{2000Q1},\ W_{2000Q2},\ W_{2000Q3},\ \ldots,\ W_{2013Q1},\ W_{2013Q2},\ W_{2013Q3},\ W_{2013Q4},\ \ldots \}
Notice that the last element in each W vector is the defaults that rep-
resent the estimated historical defaults for the current portfolio for each
historical period. Then we apply common regression or time series tech-
niques on these “estimated historical defaults” with respect to macro-
economic factors. This way, we can forecast future losses of the current
portfolio without using any conditional transition matrices.
Conclusions
Several modern risk management tools have been developed to address
many risk management problems in the wake of the financial crisis of
2007–2008. In this chapter I have introduced these modern tools and
explained them in appropriate technical detail by using illustrative exam-
ples for how these tools are used in current market practice.
Note
1. Oldrich A. Vasicek, Loan portfolio value, December 2002, www.risk.net.
Risk Management and Technology
GRC Technology Introduction
Jeff Recor and Hong Xu
The views expressed in this paper represent the personal opinions of the authors
and not those of their current or previous employers.
J. Recor (*)
Grant Thornton, 27777 Franklin Road, Suite 800, Southfield, MI 48034, USA
H. Xu
American International Group, 614 Ashgrove Ln, Charlotte, NC 28270, USA
Islands, Curacao, Jamaica, Puerto Rico, and Turks and Caicos), Bermuda,
Guyana, and Trinidad and Tobago.
The IIA slightly changes the acronym definition for GRC to be,
“governance, risk and control”. In August of 2010 the IIA adopted sup-
port for the OCEG definition for GRC and added that GRC is about how
you direct and manage an organization to optimize performance, while
considering risks and staying in compliance. IIA stated clearly:
– GRC is NOT about Technology;
– GRC is NOT a fad or a catchy phrase for software vendors and
professional service providers to generate revenue.
Forrester Research
Forrester Research describes itself as, “one of the most influential research
and advisory firms in the world. We work with business and technol-
ogy leaders to develop customer-obsessed strategies that drive growth.
Forrester’s unique insights are grounded in annual surveys of more than
500,000 consumers and business leaders worldwide, rigorous and objec-
tive methodologies, and the shared wisdom of our most innovative clients.
Through proprietary research, data, custom consulting, exclusive execu-
tive peer groups, and events, the Forrester experience is about a singular
and powerful purpose: to challenge the thinking of our clients to help
them lead change in their organizations” www.forrester.com.
Analysts at Forrester were some of the earliest users of the abbreviation
GRC. Forrester has been well-known for producing research and more
specifically a product called the GRC Wave that clients can use to help
make decisions about GRC software vendors. Forrester’s work in the GRC
space is summarized on their website as follows: “Every organizational
business function and process is governed in some way to meet objec-
tives. Each of these objectives has risks, as well as controls that increase
the likelihood of success (or minimize the impact of failure). These are
the fundamental concepts of GRC. To maximize business performance,
GRC programs are designed to help companies avoid major disasters and
minimize the impact when avoidance is unlikely” https://www.forrester.
com/Governance-Risk-%26-Compliance-%28GRC%29.
According to Forrester, the Forrester Wave is a collection of informa-
tion from vendor briefings, online demos, customer reference surveys and
interviews, use of Forrester’s own demo environment of each vendor’s
product, and, as per Forrester policy, multiple rounds of fact checking
and review. The current iteration of the Forrester Wave was previously
split into two distinct reports: one for enterprise GRC (eGRC) and the
other for IT GRC. Trying to define the distinction between enterprise and
IT GRC has added to some of the marketplace confusion around GRC
platforms.
In addition to products like the GRC Wave, Forrester has started to
build what it calls a GRC Playbook. The playbook gives Forrester a new
way to package up important research and guides within the following
categories:
– Discover
– Plan
– Act
– Optimize
The Forrester GRC Playbook was completed at the end of 2015.
Gartner
Here is the description of Gartner’s focus in the marketplace from its
website:
Prior to 2014, Gartner produced research for clients on the various GRC
vendors in the form of MarketScope and Magic Quadrant reports. Similar
in structure to Forrester’s Wave reports, the Gartner Magic Quadrant was
a collection of vendor data measured against criteria that produced a rank-
ing similar in format to the Forrester Wave.
In 2014 Gartner announced it was doing away with the Marketscope
and Magic Quadrant reports and retooling its research on the GRC mar-
ket to be more focused on specific use cases. According to a report from
Gartner released on May 13, 2015 entitled, “Definition: Governance,
Risk and Compliance”, Gartner provides this definition for GRC:
In the same report, Gartner goes on to explain that, “there are a growing
number of GRC software applications that automate various workflows
in support of GRC goals. Through common functions such as an asset
repository, regulatory mapping, survey capabilities, workflow functions
and data import, GRC automation addresses multiple use cases defined by
Gartner. The seven defined Gartner GRC use cases are as follows:
– IT Risk Management
– IT Vendor Risk Management
– Operational Risk Management
– Audit Management
– Business Continuity Management Planning
– Corporate Compliance and Oversight
– Enterprise Legal Management.”
We are not trying to present an opinion on the merits of how the GRC
marketplace is viewed or defined. The information presented here shows
how many different firms view the marketplace for GRC technology plat-
forms. As you will read in the following sections, integration of an organi-
zation’s governance, risk, and compliance (control) functions is absolutely
key when it comes to gaining value from automation. Any definition that
does not support leveraging an integrated technology approach is prob-
ably not going to gain much momentum in the marketplace.
• Financial controls monitoring
• IT GRC
• Risk monitoring
• Compliance management
• Performance and operational controls
• Policy management
• Access & segregation of duties controls
• Audit management
• Threat and vulnerability management
• Enterprise risk management
• Sustainability performance management
• Vendor risk management
• Legal case management
• Environmental health and safety
• Medical compliance
• Food and safety compliance
• Other stand-alone solutions
The term GRC today can invoke strong feelings of support or apathy.
There are some practitioners who feel that the term GRC is too gen-
eral and does not represent anything new that organizations should be
doing. The feeling is that there is no such thing as a GRC department,
so undertaking projects that involve improving processes and technology
specifically under a GRC label does not accurately represent the operations of most
businesses. Organizations were leveraging automation to improve their
governance, risk management, and compliance management functions
long before an integrated platform capability existed in the marketplace.
As shown above, it is very common for the GRC concept to be associ-
ated with technology solutions rather than with a business-oriented solutions
approach. Despite the different interpretations of GRC being discussed
and addressed, there is one common theme that stands out in these dis-
cussions: clients view the automation and enterprise integration of gov-
ernance, risk, and compliance programs as critical areas for achieving
efficiency gains, improved transparency, and better control.
Governance
Corporate governance is the system of rules, practices, and processes by
which organizations are directed and controlled. Corporate governance of
IT is the system by which the current and future use of IT is directed and
There are many practitioners that feel this model is not a good representa-
tion of how to effectively manage risk. The model is included here due to
its prominence as part of the Basel Committee on Banking Supervision
operational risk requirements for banks. For financial institutions, this
model is a core part of a GRC program.
The use of GRC technology platforms to support functions related to
governance can have a big impact on helping to track and improve the
performance of the organization. There are many benefits to leveraging
automation to support governance functions:
– Provides more timely, accurate, and reliable information;
– Enables more informed decision-making for allocating resources;
– Saves costs by improving efficiency and reducing manpower hours
needed for administrative tasks related to reporting, policy man-
agement lifecycle, and executive oversight tasks;
– Assists in improving performance management by providing inte-
grated processes, accountability, and reporting;
– Supports a culture of process improvement.
Strictly from a technology support capability standpoint, there are
many solutions that would fall under the governance category. Leveraging
automation to support governance functions is an important and often
overlooked component of a fully integrated GRC program. Examples of
governance functions that can take advantage of automation through a
GRC technology platform include:
– Whistleblower hotline tracking and monitoring;
– Board of directors reporting;
– Corporate strategy approval tracking;
– Executive compensation linked to corporate performance;
– Policy management;
– Performance management;
– Strategic objective monitoring;
– Portfolio management;
– “What-if” analysis for budgeting/resource allocation;
– Executive dashboard and reporting.
Risk Management
There are a number of risk management standards that organizations can
use to help define a formal program. A widely accepted definition from
ISO 31000 states:
The standard goes on to describe the following functions as part of the risk
management process:
– Establishing the context;
– Identifying, analyzing, evaluating, and treating risk;
– Monitoring and reviewing risk;
– Recording and reporting the results;
– Communication and consultation throughout the process.
Compliance Management
Failing to understand regulatory requirements or having the right con-
trols and culture in place can cost organizations in heavy fines and reme-
diation efforts. Automating compliance processes was one of the early use
cases for the acquisition of GRC technology platforms. Compliance man-
agement programs involve more than just managing a checklist of which
controls are required by which regulations. However, until recently there
has not been much in the form of guidance from standards bodies about
the functions a good compliance management program should contain.
It has been common to observe organizations that were managing risks
through the compliance checklist approach. In other words, if an organi-
zation could prove through testing that all controls required by regulatory
requirements were in place and operating effectively, those risks would be,
generally speaking, in check.
Recent guidance has been released to help organizations understand
leading practices associated with the compliance management function.
For example, the FFIEC (Federal Financial Institutions Examination
Council) Compliance Examination Manual listed the activities a compli-
ance management system should perform as part of the overall risk man-
agement strategy of an organization as follows:
– Learn about its compliance responsibilities;
– Ensure that employees understand the responsibilities;
Integration
As GRC technology has matured it has become easier to make a busi-
ness case that clearly articulates the process improvements, efficiencies,
and cost savings that can be achieved leveraging GRC technology for spe-
cific use cases. However, just because technology can be utilized does not
mean that by itself benefits will be realized. Many of the efficiency gains
and cost savings are dependent on solid processes, clear direction, orga-
nizational cooperation, and support of the right technical capabilities. We
have seen many GRC projects fail, or simply not return the efficiency/cost
savings gains that were planned due to a lack of awareness about the role
that integration plays.
The marketplace is starting to use the term “integrated” related to
GRC in several different ways. There are some organizations that tout
an “integrated GRC capability”. Others tout the integration that occurs
between GRC programs, and still others mention integration benefits in
relation to interconnecting disparate data and systems. What tends to get
lost in the messaging is what is actually meant by “integration” and how
additional benefits can be derived if the effort to integrate the various
functions related to GRC programs can be realized.
In our experience, the term “integrated GRC” is redundant. Integration
is not something you apply to your GRC programs per se, but rather
drive through people, process, and technology improvements and innova-
tion. As increased levels of integration are implemented, greater benefit
can be achieved through all of the GRC functions. Ultimately, leveraging
integration through the improvement of GRC processes will enable the
connectivity between risks, strategy, and performance that can guide an
organization to achieve its overall objectives.
The think tank OCEG has pulled together some very good guidance
related to the topic of integration and its impact on driving principled per-
formance, which is defined as a point of view and approach to business that
helps organizations reliably achieve objectives while addressing uncertainty
and acting with integrity. We believe OCEG is correct in guiding organiza-
tions to, “... achieve principled performance - the capabilities that integrate
their use of GRC platforms and processes, integration between the GRC
technology platform and the ERP (enterprise resource planning) system
could provide further benefits. While we have not seen a large demand
for these types of projects yet, they are slowly starting to gain attention
as organizations seek to integrate GRC with their finance functions (and
merge various systems that house controls) and vendor risk management
activities. Another use case that is gaining momentum in the marketplace
is leveraging the integration of these systems to provide continuous con-
trols monitoring as well.
Assessment Process
Within all GRC technology platforms is the ability to perform assessments.
Some of the elements required to provide assessment capabilities include the
following:
– Link to business hierarchy. Being able to control which part
of the organization is involved in responding to an assessment
can assist in maintaining proper coverage. The business hierarchy
provides the ability to define the right layers of the organization
and to also insure approval workflows are designed appropriately.
– Survey capability. Many assessments are done in a survey style,
which requires a preset template of questions with set answers that
can be tracked through workflow for completion milestones.
– Questionnaire repository. Many assessment capabilities leverage
pre-existing questionnaire repositories in order to give the end
user the ability to formulate different types of assessments with
standard questions for the topic required.
– Scoring model for risk rating. A flexible scoring system is
required in order to provide feedback for assessments. Assessment
capabilities can provide scoring on a question-by-question basis
and then give the end user the ability to aggregate the scores into
a tiered scoring system. This capability usually supports a qualita-
tive and quantitative (and hybrid) scoring model.
– Workflow. The ability to direct assessment questionnaires to
intended audiences and track responses is provided through work-
flow capabilities.
– Link to content repository (risks, controls, etc.). Relying on
content to establish assessment criteria is a central component
of all GRC platforms. How this capability is performed can be
a competitive differentiator for vendors. Instead of relying on a
questionnaire repository for the focus of the assessment, link-
age to direct content can be a more effective means of designing
assessments.
Business Hierarchy
Almost all of the use cases built using GRC technology platforms will rely
on the ability to leverage an organizational structure. The ability to put
an organizational hierarchy into a GRC tool should be one of the first
tasks an implementation should undertake. The impacts of the business
hierarchy on many of the tasks that are supported through automation
are critical to the success of the design of the system. For example, we
have often worked with clients to understand the types of reports required
at which level in the organization as one of the early planning stages of
any GRC technology implementation. This “start with the end in mind”
approach insures that the organizational hierarchy can be leveraged to
support the requirements of the solution being addressed. The ability to
also link processes and other assets with the appropriate accountability (as
driven through the business hierarchy) provides benefits for all tasks that
are performed using the GRC technology platform.
There are several design challenges that need to be considered before
implementation, such as the depth of levels used to configure the hierar-
chy, how the different layers roll up to a single parent entity and managing
exceptions/duplicates in the hierarchy. Again, a good rule of thumb is to
start with the end in mind, meaning design the reporting and account-
ability models that are needed and tie the organizational layers into that
model.
Workflow
Workflow capabilities provide the ability to route data, forms, and pro-
cesses and to enable collaboration among stakeholders by leveraging the
organizational hierarchy and established security protocols and privileges.
In short, it enables the automation of repetitive tasks. Workflow capabili-
ties can vary widely among GRC vendor platforms. It is one of the capa-
bilities that are constantly being improved as the products mature.
GRC technology vendors have been making improvements to this spe-
cific capability. Many of the GRC technology workflow improvements
revolve around graphical capabilities and improving collaborations. The
ability to drag and drop processes and requirements into a master work-
flow solution makes building routine tasks requiring workflow support
fast and easy. A typical workflow capability should support the following:
– rules-based notifications that can be automatically generated via
parameters including dates;
– ability to provide different routing mechanisms based on different
inputs;
– ability to route based on roles and responsibilities;
– ability to support user reassignment;
– ability to integrate multiple documents and data sets;
– ability to provide multiple notifications and alerts;
– ability to collaborate on a set of data and/or forms.
Analytics and Reporting
Analytics and reporting capabilities within GRC technology platforms can
differ greatly. A small selection of vendors has taken the approach of creat-
ing their own analytics and reporting engines. Some vendors will use their
own engines for basic reporting but rely on third-party reporting tools to
provide more complex data analytics and presentations. And still another
set of GRC vendors have built direct links to third-party analytics and
reporting capabilities as the core analytics and reporting engine.
Regardless of which GRC technology vendor is used, reporting of
results against business outcomes has been one of the major focal points of
GRC platforms. As GRC technology platforms have matured, just like with
workflow capabilities, demand for more flexibility and improved analytics
and presentation capabilities increases each year. Risk aggregation by
business hierarchy or product/service category is a must-have capability
Jeff Recor and Hong Xu
The views expressed in this paper represent the personal opinions of the authors
and not those of their current or previous employers.
J. Recor (*)
Grant Thornton, 27777 Franklin Road, Suite 800, Southfield, MI 48034, USA
H. Xu
American International Group, 614 Ashgrove Ln, Charlotte, NC 28270, USA
Objective Setting
One of the keys to any good GRC program is capturing and measuring the
strategic objectives of the organization. The COSO framework describes
this process as follows.
Reporting—reliability of reporting
Compliance—compliance with applicable laws and regulations.
Goal of the Use Case Establish and track strategic objectives and associ-
ated actions/challenges. Performance metrics can also be established and
tracked.
Benefit of the Use Case Establish strategic objectives that are linked to
risks, controls, issues, and actions as well as performance metrics. Having
a fully integrated approach will assist the organization in getting better
visibility with its progress in meeting its strategic objectives.
Risk Register
The use of a risk register is one of the most common use cases we have
seen using GRC technology at an enterprise level. The ability to capture
a repository of risks that impact the enterprise, along with ownership,
control, and metric attributes has assisted organizations with performing
enterprise risk tasks. The ability to link processes and people throughout
the organization to a risk register also enables better reporting and moni-
toring of those risks, enables organizations to improve their accountability
related to risks and associated remediation activities and improve the over-
all awareness of those risks and associated objectives.
Benefit of the Use Case This specific use case is a good way to start
leveraging automation to support enterprise risk functionality. Providing a
risk register enables other enterprise functions to take advantage of auto-
mation and standardized process, such as risk assessments, control, policy,
and process linkages as well as remediation planning and performance
metrics and reporting.
Risk Categorization
According to the Institute for Risk Management, “An important part of
analyzing a risk is to determine the nature, source or type of impact of
the risk. Evaluation of risks in this way may be enhanced by the use of a
risk classification system. Risk classification systems are important because
they enable an organization to identify accumulations of similar risks. A
risk classification system will also enable an organization to identify which
strategies, tactics and operations are most vulnerable. Risk classification
systems are usually based on the division of risks into those related to
financial control, operational efficiency, reputational exposure and com-
mercial activities. However, there is no risk classification system that is
universally applicable to all types of organizations.”
As an example, the Basel II accords call out three top-level categories
of enterprise risk that banking entities must address to include market
risk, operational risk, and credit risk. A report by The Working Group
as part of the International Actuarial Association called, A Common Risk
• Market Risk;
• Credit Risk;
• Insurance and Demographic Risk;
• Operational Risk;
• Liquidity Risk;
• Strategy Risk;
• Frictional Risk and
• Aggregation and Diversification Risk.
Goal of the Use Case Provide an automated foundation for the GRC
technology platform to be able to manage layers of risks to support the
many different functions requiring linkages (controls, assets, policies, pro-
cesses, etc.).
Benefit of the Use Case Designing this use case as part of the risk regis-
ter roll-out will enable a long-term integrated GRC technology capability.
It is recommended that this type of capability be designed up front as part
of the early GRC technology planning efforts to prevent having to go
back and correct or completely redo the risk architecture required for an
enterprise solution.
Many of the use cases that follow are a part of an enterprise risk man-
agement framework. We did not call out capabilities such as risk culture,
event identification, risk assessment, risk response, control activities, infor-
mation and communication, and monitoring because many of these have
traditionally been automated through other use cases mentioned below.
Even though there is a specific enterprise level need for all of these capa-
bilities, most organizations we have reviewed or assisted have started to
build these capabilities using other use cases (department level or IT spe-
cific) and then grown the capabilities across multiple functions to form an
enterprise capability.
The advanced measurement approach (AMA) for operational risk capital draws on four data elements:
1. internal loss data that are collected over a period of time and represent actual losses suffered by the institution;
2. external loss data that are obtained from third parties (mentioned below in the loss event use case) or other financial institutions;
3. scenario data based on models used to predict losses in the future;
4. business environment and internal control factors that produce a score to measure operational risk thresholds.
There are many different measurement models that can be used as part of the operational risk management program, such as the loss distribution approach (LDA), Monte Carlo simulation, and value-at-risk (VaR), as well as scenario-based approaches that rely on expert judgment leveraging internal and external loss data and other sources. Whichever model is used as part of the scenario analysis process, it can benefit from automation to assist with linking data sets and calculating expected loss outcomes.
There is a lot of scrutiny around the use of measurement models as part
of the operational risk scenario modeling process. At the time of writing,
global regulators were considering various options for changes to the capi-
tal modeling approach. The main goal under consideration is to simplify
the AMA.
GRC technology vendors have been integrating AMA capabilities into
their risk and compliance capabilities to assist financial institutions with
leveraging more information in calculating risk models and capital require-
ments. Since GRC technology solutions typically house risk registers, link-
ages to assets (business services, processes, etc.), key risk indicators, and
control libraries, adding operational risk calculation methods is a natural
progression of their capabilities. However, this capability is fairly immature
and new in the GRC technology ecosystem.
Goal of the Use Case Provide the ability to collect and aggregate mul-
tiple data sources so emerging risk scenarios and capital modeling can be
performed.
Benefit of the Use Case There are two primary benefits: first, linkage to multiple sources of data to assist with scenario analysis; and second, a more efficient process to support workshops and other core methods of monitoring operational risks.
Industry loss-event consortiums allow member institutions to pool loss data and knowledge so that operational risk processes and metrics can be baselined and shared across the peer group. This sharing of emerging and
historical risk information enables peers to get better insight into emerging
risks that may be impacting one of the group members before manifesting
within their own environment. The ORX, for example, utilizes a GRC
technology platform to house the loss event data that it processes and
shares with its members. Each of these consortiums publishes their own
data architecture that member firms follow in order to share information.
There are additional sources of emerging risks that can be collected and
tracked through surveys, blogs, regulators, various industry information
sharing and analysis centers (ISAC), and organizational risk management
conferences and peer group meetings.
Goal of the Use Case Provide a process and automated support function
to collect, analyze, and share loss event data to assist with scenario analysis
and capital planning processes. Provide the ability to track scenarios with
associated events and potential loss outcomes per scenario.
Benefit of the Use Case The use case by itself will enable the analysis of
events by linking to scenario analysis and capital planning processes. An
integrated use case capability will also enable a larger analysis of risk events
linked to control and remediation processes enabled through aggregation
to formulate an enterprise point of view.
Compliance Management
GRC technology solutions for compliance management have focused
on establishing a control framework and documenting and responding
to policy and regulatory compliance issues. GRC technology support has
focused on enabling enterprise compliance processes, performing assess-
ments, and testing for deficiencies and managing remediation efforts. Many
different GRC vendors offer compliance management solutions that may
contain specific capabilities for individual regulatory requirements such as
PCI compliance, FISMA compliance, GLBA compliance, HIPAA compli-
ance, AML compliance, and other assorted regulatory requirements. As
mentioned above in the GRC overview section, providing support for
one-off compliance requirements is becoming obsolete.
Goal of the Use Case Provide a foundation to reduce the time and effort
required to assess and test compliance requirements.
Governance
– provide oversight on who can make changes to the control library;
– provide guidance on what changes are required based on the
requirements;
– ensure taxonomy and metadata criteria are harmonized with organizational standards;
– establish workflow process to ensure proper review and approvals.
Data Management
– provide management of the control library;
– provide management of the mapping process;
– provide and manage architecture requirements;
– provide oversight on the content feeds and intelligence.
Assessment
– provide accountability for priority and impact assessments;
– provide consistent mechanisms to perform assessments.
Reporting
– provide clear establishment of metrics and objectives to be
monitored;
– maintain dashboards and reports to provide visibility into the
effort required.
GRC technology platforms have started to focus on this use case over
the past couple of years. There are a couple of challenges with adding
this capability into the GRC technology ecosystem. Regulatory change
management is a labor-intensive process due to the sheer number of regulatory requirements, the formatting of the source documents, and the effort of managing the impact determination process. Due to the manual nature of some of the processes related to regulatory change management, GRC technology platforms are limited in the value they can deliver. For example, the holy grail of automation within this use case is to be able to detect a regulatory change and automatically map the requirement changes to the affected controls, policies, and assessments.
Goal of the Use Case Provide the ability to understand and manage the
regulatory landscape as applicable to the operating environment.
Policy Management
From the viewpoint of GRC technology capabilities, providing support
for a policy management solution can have several different meanings. For
example, the term can be used to mean support for the policy lifecycle
management process. The more common way we have seen this solu-
tion utilized is to provide support for treating policies as controls to be
mapped into the control library in order to establish an audit trail for why
something is required. It is a good idea to seek clarity from the potential
vendor for what policy management support means within their solution
framework.
Leveraging GRC technology platforms to support policy lifecycle man-
agement implies performing tasks such as creating, updating, publish-
ing, maintaining, communicating, and enforcing policies. It also means
there needs to be support for training and awareness campaigns as well
as a process for measuring adherence to policies. At the heart of these
tasks is providing the capability to perform document management, with services such as check-in/check-out and associated workflows for the creation, change, approval, and exception management of policy documents.
Goal of the Use Case Provide a workflow capability for the support of a
policy management solution.
IT Risk Management
There are many United States-based standards, such as those published by NIST, that describe the minimum functions that should be performed as part of any IT risk management program.
Goal of the Use Case Provide the ability to capture threats and vulner-
abilities and link them to assets in order to be able to calculate risk.
Benefit of the Use Case There is a significant benefit for greater risk
visibility if integration into the GRC platform can be achieved. Linking
the threats, vulnerabilities, internal audit findings, assessments, and other
findings/log data will provide better visibility into risks.
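As a rough illustration of that linkage, the sketch below scores an asset by combining linked threat likelihoods and vulnerability severities with business impact. The data model and the likelihood-times-impact scoring are assumptions made for illustration, not a prescription from any particular standard or GRC product.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vulnerability:
    name: str
    severity: float      # 0.0 (negligible) .. 1.0 (critical), e.g. a scaled CVSS score

@dataclass
class Threat:
    name: str
    likelihood: float    # 0.0 .. 1.0 annualized likelihood estimate

@dataclass
class Asset:
    name: str
    business_impact: float                     # 0.0 .. 1.0, from a business impact analysis
    threats: List[Threat] = field(default_factory=list)
    vulnerabilities: List[Vulnerability] = field(default_factory=list)

def asset_risk_score(asset: Asset) -> float:
    """Illustrative score: worst threat likelihood x worst vulnerability severity x impact."""
    if not asset.threats or not asset.vulnerabilities:
        return 0.0
    likelihood = max(t.likelihood for t in asset.threats)
    exposure = max(v.severity for v in asset.vulnerabilities)
    return round(likelihood * exposure * asset.business_impact, 3)

# Example: an internet-facing payment service with one linked threat and vulnerability.
payments = Asset("payment-gateway", business_impact=0.9,
                 threats=[Threat("external web attack", 0.7)],
                 vulnerabilities=[Vulnerability("unpatched TLS library", 0.8)])
print(payments.name, asset_risk_score(payments))   # -> payment-gateway 0.504
```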
Incident Management
Incident management is a use case that could be considered common
across multiple GRC technology solutions that require its functionality
such as risk and control self-assessment, threat and vulnerability man-
agement, internal audit, business continuity management, and others.
However, the GRC capabilities offered by this technology solution can also be considered a use case unto itself, offering the functionality described below. Several standards provide the common functions generally associated with an incident management program; generally speaking, leading practices center on tracking incidents through a defined lifecycle.
Goal of the Use Case Provide support capability to track the incident
lifecycle.
Benefit of the Use Case The use case by itself offers cost savings and effi-
ciencies over using other more primitive capabilities or recording events
manually. The bigger benefit comes when incident management is fully
integrated across several GRC solutions.
Remediation Planning
This use case is also commonly used across multiple GRC solutions, specif-
ically linked with incident management and the risk assessment use cases.
Remediation planning involves the following tasks:
– identify the steps needed to mitigate the incident;
– assign ownership to each step and milestones needed for
completion;
– establish communication mechanisms (notifications, alerts);
– assign dates to each step attached to thresholds and milestones;
– establish an approval process for each task and for closing the incident;
– establish exception tracking and approval process.
The use case for remediation planning is usually leveraged across many
different use cases as a process requirement for internal audit, vulnerability
management, business continuity management, assessment, and testing,
along with many others.
Goal of the Use Case Provide ability to track tasks and assigned owner-
ship and due dates.
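To make the tracking concrete, here is a minimal sketch of a remediation plan modeled as steps with owners, due dates, and approval flags, roughly following the tasks listed above. The field names and the closure rule are illustrative assumptions rather than a vendor schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class RemediationStep:
    description: str
    owner: str
    due: date
    approved: bool = False
    completed: bool = False

@dataclass
class RemediationPlan:
    incident_id: str
    steps: List[RemediationStep] = field(default_factory=list)

    def overdue(self, today: date) -> List[RemediationStep]:
        """Steps past their due date that are not yet completed."""
        return [s for s in self.steps if not s.completed and s.due < today]

    def can_close(self) -> bool:
        """Close the incident only when every step is completed and approved."""
        return all(s.completed and s.approved for s in self.steps)

plan = RemediationPlan("INC-1042", [
    RemediationStep("Patch affected servers", "infra-team", date(2017, 3, 1)),
    RemediationStep("Update firewall rules", "network-team", date(2017, 3, 15)),
])
print(len(plan.overdue(date(2017, 4, 1))), plan.can_close())   # -> 2 False
```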
Key Risk Monitoring
This use case covers key risk indicators (KRI), key performance indicators (KPI), and to some extent key control indicators (KCI). Generally speaking, key risk indica-
tors measure how risky a process or activity is or could potentially be, while
key performance indicators generally measure how well something has
performed or if that performance is meeting set objectives. Indicators can
be leading, current, or lagging and quantitative or qualitative in nature.
Indicators are an important tool within risk management, supporting
the monitoring of risk. They can be used to support a wide range of risk
management processes, such as risk and control assessments and testing,
the establishment and management of a risk posture or baseline, and over-
all risk management program objectives.
There are many standards and guidelines that can be used to help set
up and maintain a key risk monitoring program. Guidance from organiza-
tions such as ISO, NIST, and ISACA has been released to help organiza-
tions with setting up indicators to support risk management processes.
Support organizations such as the Institute of Operational Risk, the KRI
Exchange, and many other entities produce support materials to assist
organizations with establishing their monitoring programs.
A key risk monitoring program is based on managing to an expected
risk profile, which is supported through the analysis of key risks. Key risks
are supported through the trending of key risk indicators, metrics, thresh-
olds, and reporting capabilities.
From a GRC technology support viewpoint, there are several capabili-
ties that can be leveraged to support indicators:
– data repository to host indicators and metrics;
– the establishment of a risk taxonomy;
– the definition of key risks;
– the establishment of indicators;
– the establishment of supporting metrics;
– the ability to analyze trending information;
– reporting and dashboards.
As GRC technology platforms typically host risk registers, control
libraries, and various other data sources, the addition of risk indicators and
metrics data is a natural fit into the architecture of the platform. The ability
to leverage workflow and reporting capabilities native to most GRC tech-
nology platforms will support the requirements needed to implement a
key risk monitoring solution. The ability to connect to third-party sources
of data into the GRC technology platform is an important consideration
for hosting indicators within the architecture.
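The sketch below shows, in simplified form, the kind of indicator evaluation such a solution performs: comparing the latest observation of each indicator against amber and red thresholds and checking the trend. The indicator names, thresholds, and traffic-light convention are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Indicator:
    name: str
    amber: float      # warning threshold (assumes higher values mean more risk)
    red: float        # breach threshold
    history: List[float]

    def status(self) -> str:
        """Rate the latest observation against the amber/red thresholds."""
        latest = self.history[-1]
        if latest >= self.red:
            return "red"
        return "amber" if latest >= self.amber else "green"

    def trending_up(self) -> bool:
        """Crude trend check: is the latest value above the average of prior observations?"""
        prior = self.history[:-1]
        return bool(prior) and self.history[-1] > sum(prior) / len(prior)

kris = [
    Indicator("failed trades per day", amber=25, red=50, history=[12, 18, 22, 31]),
    Indicator("open critical vulnerabilities", amber=10, red=20, history=[9, 8, 7, 6]),
]
for kri in kris:
    print(kri.name, kri.status(), "worsening" if kri.trending_up() else "stable/improving")
```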
Goal of the Use Case Getting started, the goal is to provide visibility
into meeting performance objectives, maintaining a risk appetite, and
monitoring control effectiveness. Establishing key risks and their associ-
ated metrics and data sources is a common starting point.
Third-Party Management
This use case supports the management of third-party (vendor) relationships and the risks they introduce. The third-party lifecycle commonly includes the following stages:
• planning;
• due diligence;
• contract negotiations;
• monitoring;
• termination.
These stages are often grouped into three phases:
1. Pre-contract phase;
2. Contract phase;
3. Post-contract phase.
Benefit of the Use Case Provide more focus on third-party risks and
controls, better visibility into third-party risk posture, and a reduction in
third-party risks.
Audit Management
From a GRC technology solution viewpoint, this use case can provide
support for the IT audit lifecycle. Audit management processes have tradi-
tionally been supported through software that is just focused on support-
ing the unique requirements pertaining to the audit lifecycle. However, as
GRC solution vendors improve their support capabilities, more organiza-
tions are starting to integrate their IT audit management program into
the GRC technology platform due to the benefits of integration that these
platforms can provide.
Goal of the Use Case To increase the productivity of the internal audit
team.
Benefit of the Use Case Provide easier access to people, processes, and
data needed at each stage of the audit lifecycle process.
Privacy Management
This use case focuses on data privacy management, which is the funda-
mental protection of client’s and employee’s personal data. Since the USA
does not have a dedicated data protection law, there is no singular concept
of ‘sensitive data’ that is subject to heightened standards. However, several
different industry sector laws provide definitions of the type of data that
should be protected, especially if it is defined as personally identifiable
information or sensitive information.
Data privacy has become a very important topic within several sec-
tors such as financial services, healthcare and retail or other sectors where
sensitive personal information is collected, retained and utilized. Identity
theft continues to be a major problem for organizations. There are over
100 different global laws that direct organizations to protect data. In
the USA, instead of a single data protection law, there are regulations by industry as well as requirements embedded in various federal and state laws. In the financial services sector, for example, GLBA contains privacy
requirements. In the healthcare sector, HIPAA and HiTECH (Health
Information Technology for Economic and Clinical Health Act) have
privacy requirements. In addition to regulatory requirements, several
standards and guidelines have been developed or modified to assist orga-
nizations with protecting data. The American Institute of CPAs (AICPA)
released the Generally Accepted Privacy Principles (GAPP) in 2009 as a
way of describing certain practices that should be performed to protect
data. Privacy is defined in GAPP as “the rights and obligations of indi-
viduals and organizations with respect to the collection, use, retention,
disclosure and disposal of personal information”.
Core functions required to protect data include the following:
– establishment of policies;
– compliance with laws and regulations;
– risk assessment;
– privacy-related controls;
– incident management;
– breach notification;
– awareness and training.
This use case is commonly lumped in with other IT GRC use cases since many of the capabilities required to protect data and comply with regulatory requirements already exist as part of other use cases. While some of the GRC vendors provide separate capabilities to handle the requirements of a privacy management program, others simply rely upon the integrated nature of their solutions to provide the necessary capabilities.
Historically, many GRC technology purchases were for silo capabilities with no plans to be extended across the enterprise or to include other GRC solutions. The approach was to solve a spe-
cific problem with a technology platform that may be able to be utilized
for other things in the future. This is why it is fairly common to find GRC
technology platforms installed from different vendors within the same cli-
ent. The GRC technology marketplace (as a whole) was not necessarily
focused on selling the benefits of integration using a single platform for
multiple stakeholders.
GRC technology vendors typically market their solutions to organiza-
tions by packaging together capabilities into modules. Performing a quick
scan of some of the leading GRC vendors will show a collection of the
most common use cases into modules:
– corporate compliance;
– business resiliency;
– audit management;
– operational risk management;
– IT security risk management;
– policy management;
– third-party management.
In addition to these common use cases, GRC vendors may also pack-
age up capabilities and label them as “solutions”—such as specific com-
pliance regulations (PCI-DSS assessment, ISO 27001 readiness, COBIT
Framework, etc.). In our opinion these capabilities are not really solutions
and should be designed to be leveraged as part of the integrated approach.
Many of these specific solutions can be included into their respective con-
tent libraries (risks, policies, and controls) and leveraged across multiple
functions.
Another challenge with how GRC vendors position their solutions
revolves around identifying which modules are needed to address spe-
cific capabilities. For example, in order to perform a privacy assess-
ment, do you need to purchase the privacy module (if such a thing is
offered) or can you repurpose some of the capability within the risk
management module to perform the tasks required? The way in which
many of the GRC vendors are expanding their offerings is starting to
diminish this challenge, but some confusion still exists over how much
customization and repurposing can be done within existing modules to
perform new tasks.
(Figure: GRC operating model layers covering governance, strategy, financial management, policy management, risk management, portfolio management, and operational management.)
– As part of the governance model, determine the committee that will provide oversight of the overall GRC program, where that committee reports into, and who is a member of that committee (GRC stakeholders).
– As part of the operational management model, also plan out the
roles that will be required to support the technology platform as
part of the GRC technology support program. This is normally
done in addition to figuring out who owns the actual GRC tech-
nology platform and associated solutions and who will be respon-
sible for directly supporting the platform. There are core roles
that are required to not only support the technology but also to
provide business support for the capabilities as well, such as:
– solution architects;
– business analysts;
– solution developers;
– platform configuration specialists;
– trainers;
– BAU support personnel.
– Some of these roles can be performed by the same person, espe-
cially as the GRC technology capability is initially developed.
However, over time there may be a demand to drive more integra-
tion across the enterprise, so these roles will grow in importance.
People in the marketplace who understand both the technology and the business processes related to GRC functions are in high demand and can be hard to find.
GRC technology platforms also depend on various types of content, which can include:
– regulations;
– configuration data;
– key risk metrics;
– questionnaires;
– other sources of data.
The most commonly used content for GRC technology solutions
revolves around controls, risks, and assets.
A control library is critical in order to be able to provide efficiency and
cost savings for automated compliance management tasks. Early forms
of GRC use cases for risk and control self-assessments and control test-
ing utilized dedicated controls and associated question libraries that were
assigned by regulation. For example, PCI-DSS would have a set of controls (and corresponding questions for assessment and testing) just for that regulation. HIPAA, GLBA, and
other regulatory requirements would all have their own defined control
and associated question sets. While this architecture did provide a benefit
over using Excel spreadsheets or doing things manually, it did not address
several common challenges such as having to continually produce the
same evidence to show compliance with multiple regulatory requirements.
A more effective approach is to use an integrated control library as a
master repository of controls. The idea behind the integrated control library
is that all controls are harmonized and indexed together, while the dupli-
cates are removed from the master library. Using this technique significantly
reduces the number of control requirements that have to be used for test-
ing while increasing the amount of regulatory coverage an assessment can
provide. This integrated control set then would act as the central repository
for all control related decisions. For example, if a control is mapped against
several requirements, then testing it one time should suffice as coverage
against all of the requirements that it is mapped against. There are sev-
eral prominent IT-focused integrated control libraries in the marketplace
that organizations can purchase to add to their GRC technology platforms,
such as the Unified Compliance Framework (UCF). Other sources of inte-
grated control libraries can include open source options from some of the
standards bodies (ISACA, ISO, FFIEC, etc.) as well as from GRC vendors
themselves, consulting organizations, and law firms (Fig. 4).
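A minimal sketch of the "test once, report many" idea behind an integrated control library is shown below: each harmonized control is indexed against the requirements it satisfies, so one test result rolls up to every mapped regulation. The control and requirement identifiers are invented for illustration.

```python
# Map each harmonized control to the requirements it satisfies (identifiers are illustrative).
control_index = {
    "CTRL-ACCESS-01": ["PCI-DSS 8.2", "HIPAA 164.312(d)", "GLBA Safeguards 314.4(c)"],
    "CTRL-LOGGING-02": ["PCI-DSS 10.2", "HIPAA 164.312(b)"],
}

test_results = {"CTRL-ACCESS-01": "pass", "CTRL-LOGGING-02": "fail"}

def coverage_report(index: dict, results: dict) -> dict:
    """Roll a single control test result up to every mapped regulatory requirement."""
    report = {}
    for control, requirements in index.items():
        for req in requirements:
            report[req] = results.get(control, "not tested")
    return report

for requirement, outcome in sorted(coverage_report(control_index, test_results).items()):
    print(f"{requirement}: {outcome}")
# One test of CTRL-ACCESS-01 provides evidence for three separate regulatory requirements.
```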
There are several design challenges related to how an organization decides
to implement an integrated control library. Even if you purchase a library
from a third-party vendor, there may still be significant work to adjust the content to fit the organization's taxonomy, risk architecture, and existing control structure.
(Fig. 4: Integrated control library architecture. Data feeds can include internal sources, law, rule, and regulation-specific sites, standards bodies, common practices, third-party providers, law firms, GRC platform vendors, and various other vendors. These sources feed a master index of risk domains, control objectives, control activities, and test procedures mapped back to the source requirements. A base standard such as ISO, NIST, or COBIT provides the initial set of domains; elements such as geography, business unit, and applicability are designed into the architecture as support fields; and the index provides full traceability back to all sources of each requirement.)
Assets
While control and risk data has been the biggest focus for GRC technol-
ogy platforms, asset data should be considered equally important. The abil-
ity to link assets such as processes to risks and controls can provide a big
benefit to many GRC solutions through automation. While asset data is
a crucial part of a GRC program, we have not seen a focus on this capa-
bility from GRC technology vendors. While GRC technology platforms
can typically handle some amount of assets and linkages with risks and
controls, we do not see them scaling to compete with larger data ware-
houses or configuration management databases anytime soon.
There are probably a couple of reasons for this. It has been our experience that organizations have struggled to get a handle on fully inventorying and classifying their assets across the enterprise.
Technology Architecture
There are many different ways to define what is meant by the word archi-
tecture. In this case, we are referring to the underpinnings that support
the data, users, and applications that make up the GRC ecosystem. As part
of this description of foundational elements, we are specifically highlight-
ing the ability to link together systems and data in order to get the benefit
from an integrated set of capabilities.
GRC technology vendors have been making significant investments
into product capabilities that provide a one-stop shop for solution
requirements. The reality is no single GRC system can yet handle all of the
requirements of a GRC program. There are simply too many applications
and sources of data that already exist within an organization for it to be practical to migrate them all into a single platform. On top of this,
GRC technology platforms are not yet capable of handling enterprise scale
processing requirements. In fact, GRC technology platforms may not be
the best option for some of the functions that GRC programs require for
support.
There are several elements related to a technology architecture that
should be considered up front before implementing a GRC technology
platform:
– Processing capability. As part of an overall data architecture, is
the GRC technology platform the right system to house the solu-
tion and associated data? Is it better to house the data in another
system, and just pull what is needed for processing? Will growth
of the solution cripple the processing capability of the platform?
– Third-party linkages. How a GRC platform provides the ability to
connect with other applications and systems is important for long-
term growth of the platform. Considering how the GRC technol-
ogy platform aligns within the technical infrastructure will prevent
having to redesign the platform or data structure as the solution
demand grows.
– User interface. One of the biggest complaints regarding GRC
technology platforms we have heard universally involves this ele-
ment. We have found that this aspect of the technology platform
should be minimized so that only the super users and support staff
required to maintain the systems regularly access the user interface.
The sooner you can get end users to interact with reports, dash-
boards, and homepages the more this element can be minimized.
• Vendor services: The GRC vendors have people who are very
knowledgeable about their specific product capabilities. Over time,
as more clients implement their capabilities to enable more solutions,
their experience level grows across industries and solution categories.
Generally speaking, the GRC vendors are strong on their product
but weaker on industry knowledge and processes, GRC program
functions and the associated integration benefits that can be derived.
• Consulting firms. Basically, almost the complete opposite of the
vendors. Most consulting firms are deep on industry knowledge and
support processes, GRC program functions and tasks, but weaker
on the technology support capabilities. Over time, similar to vendor
services teams, consulting firms build up architecture and techni-
cal expertise by industry that can be invaluable. The marketplace
demand for GRC automation capabilities is so fierce that keeping
these skill sets for a long period of time can be challenging. Some
consulting firms are more generalists and attempt to provide support
for a wide range of tools, while others specialize in only a handful
of tools.
• Miscellaneous third parties. There are many other types of firms
such as law firms, market analyst firms, and specialists that may be
able to provide targeted assistance or specialized services tailored to
a particular need.
Analysts regularly publish predictions about the direction of the GRC marketplace, and similar themes are observed almost every year these types of predictions are made. For example, in 2007 some of the trends that were noted
for GRC were identified as:
– technology will continue to evolve and mature;
– entrance into the marketplace of software heavyweights;
– a GRC ecosystem will develop;
– risk and regulatory intelligence will improve.
Many of these predictions still hold true today. Instead of using this
section to discuss upcoming trends along these lines, we wanted to point
out where we see organizations trying to drive vendors to make changes
or where we think inevitably the changes will need to occur. There are
two primary ways we are collecting information to make these observa-
tions: through tracking marketplace investments and direct involvement,
witnessing organizational maturity with various GRC solutions through
our exposure to GRC projects and client issues.
From a growth perspective, by many analyst accounts the GRC tech-
nology marketplace is doing very well. While there are many analysts
now providing revenue projections for the GRC technology marketplace,
conservative estimates place revenue growth at 9–14%, with rev-
enue climbing to reach upwards of $30 billion by 2020 (depending on
how the analyst is defining which solutions make up the GRC market-
place). This type of growth tends to drive a very active vendor mar-
ketplace, with mergers and acquisitions and investment into expanding
capabilities.
GRC vendor mergers and acquisitions have been very active over the
years as companies attempt to expand market reach and broaden their
technical capabilities. Some notable acquisitions include the following:
– IBM's acquisition of OpenPages, Algorithmics, and BigFix;
– New Mountain Capital's acquisition of ACA Compliance Group;
– First American Financial's acquisition of Interthinx, Inc.;
– Wipro's acquisition of Opus CMC;
– Goldman Sachs' investment into MetricStream;
– Wolters Kluwer acquiring Effacts Legal Management software, Datacert, SureTax, Learner's Digest, and LexisNexis legal business in Poland;
– EMC acquiring Archer Technologies, NetWitness, and Symplified Technology.
Organizations are also trying to push more solutions and processing into a platform that may not be capable of supporting them. We have broken down the GRC platform challenges into the following categories:
– Scalability. Many of the GRC technology products were designed
to be used to solve specific problems, and have increased capabilities
over time. Within this topic there can be challenges with process-
ing large numbers of users and associated tasks. While some of the
enterprise class tools such as ERP have added GRC capabilities and
do not necessarily face this problem, many of the platforms that
grew up addressing IT GRC or other specific GRC-related chal-
lenges may have enterprise limitations around users and processing.
– Solution coverage. There are two components to this weakness.
First, traditional GRC platforms may not be ready to support
enterprise level demands for solutions as they have not matured
enough yet to provide all the necessary coverage. Secondly, the
GRC platforms that do provide a good deal of the capabilities
required to support the enterprise may not be architected to sup-
port the integration requirements that GRC drives.
– Data storage. Generally speaking, GRC technology platforms
were not designed to be data warehouses or configuration man-
agement databases (CMDB). GRC vendors have provided some capabilities in this area, but the platforms are generally not a substitute for dedicated data stores.
Functional Requirements
Policy Management
• Structure for lifecycle process
Risk Management
• Supports risk assessment process and associated workflow
• Supports alternate risk measure approaches
– Qualitative/quantitative/hybrid risk measurement
– Ability to control level of control user is given over risk calculation
parameters and weights
– Support for threat/vulnerability measurement approach to risk
assessment
• Supports standards-based dollar quantification of risk (see the sketch after this list)
– Single loss expectancy (SLE)
– Annualized loss expectancy (ALE)
– Annualized rate of occurrence (ARO)
– Supports standards risk assessment methodologies and algorithms
– Supports custom risk assessment methodologies and algorithms
– Supports survey campaigns based on date or other automated
milestones
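As a quick illustration of the standards-based dollar quantification items referenced in the list above, the arithmetic reduces to multiplying the single loss expectancy by the annualized rate of occurrence; the figures below are invented.

```python
def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO: expected annual dollar loss from a single risk scenario."""
    return sle * aro

# Illustrative example: a laptop-loss scenario costing $5,000 per event,
# expected to occur roughly four times a year.
sle = 5_000.0      # single loss expectancy, dollars per event
aro = 4.0          # annualized rate of occurrence, events per year
print(annualized_loss_expectancy(sle, aro))   # -> 20000.0
```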
Compliance Management
• Support for using survey-based and automated-testing results and
data from third-party tools
• Supports calculating compliance scores for each regulation
• Supports aggregation of scores for various regulations
• Supports communication to testers and stakeholders of their tasks
through email notifications
• Support for reporting and dashboard capabilities
Workflow
• Supports document management capabilities
• Supports unlimited workflow steps
Reporting and Dashboards
• Predefined report templates available to support audits and major
regulations and standards
– PCI-DSS
– Basel
– GLBA
– HIPAA
• Supports custom reports
– Supports generation of reports on schedule and on demand
– Supports exporting data to external sources
• Supports standard data import and export mechanisms
Non-Functional Requirements
System Integration
• Supports single sign-on (SSO) credentials
• Support for external systems
– Databases
– Helpdesk/ticketing/incident management
– Document management systems
– Email systems
• Asset Management
– Ability to integrate with CMDB support systems
• Hosting Systems
– Supports integrating with hosting systems
• UNIX
• Mainframes
• Windows
• SQL Server
• Oracle Server
– Ability to provide hosting capabilities if needed
• Language capabilities
– Native language support for English as well as other languages as needed
General Requirements
• Costs
– License cost
– Annual maintenance cost
– Training/other acquisition costs
• Revenues
– Past quarter
– Past year
• Implementation Services
– Current projects and staffing bench
– Training venues/options
– Relationships with external implementation and consulting partners
• Security
– Protection of administrator account
– Supports role-based user privileges
– Supports the delegation of administrative functions
– A log/audit trail of administrative activities/configuration changes
is kept
– Supports back-up and restore functions
• Documentation
– Software comes with appropriate documentation/training materials
– Additional help is integrated into the system
Risk Management: Challenge and
Future Directions
Quantitative Finance in the Post Crisis
Financial Environment
Kevin D. Oden
Introduction
Mathematics is a disciplined approach to solving problems. As a byproduct
of that approach, beautiful truths are often discovered, problems some-
times solved, and these beautiful truths and sometimes solved problems
usually lead to even more interesting problems.
Finance has historically been rich in problems to solve:
• portfolio optimization;
• investment strategy;
• performance analysis;
• estimating fair value;
• price prediction.
The views expressed in this document are the author’s and do not reflect Wells
Fargo’s opinion or recommendations.
And it has been equally rich in the quantitative tools developed to address them:
• mean-variance analysis;
• factor modeling and arbitrage pricing theory;
• performance ratios;
• risk neutral pricing;
• and more recently, market microstructure theory.
Credit has always been one of the largest exposures for commercial banks.
And even prior to the financial crisis derivative traders at commercial banks
realized that not all derivative counterparties were created equal from a
credit perspective. The credit quality of the derivative counterparty should
be taken into account either through collateral arrangements or through
reserving a portion of expected profit on transactions with counterparties.
These adjustments to the value of the contract were made to compensate for the possibility that the counterparty might default before expiry of the transaction and for the costs associated with replacing or offsetting the risk. The
notion of adjusting the fair value of a derivative to account for the credit
quality of a counterparty became memorialized as an accounting standard
in the USA in 2006 with FASB 157 and in the European Union in IAS
39.1 These accounting standards require credit risk of both participants in
the derivative transaction to be reflected in the fair value of the derivative.
These adjustments are known as credit valuation adjustment (CVA) and
debt valuation adjustment (DVA).
The crisis also revealed that a longstanding assumption, that the cost of funding a collateralized trade was roughly the same as that of an uncollateralized trade, could be dramatically wrong. The funding benchmark for most commercial banks (LIBOR) widened to historic levels versus the cost of carrying, or the benefit of lending, collateral, which is typically benchmarked at the overnight index swap rate (OIS) in the USA or the sterling overnight index average (SONIA) in the UK. This newly recognized risk led to
night index average (SONIA) in the UK. This newly recognized risk led to
a new adjustment to the fair value of a derivative known as a funding valu-
ation adjustment (FVA). This adjustment could be a cost, if the derivative
is an asset that needs to be funded or a benefit if the derivative is a liability.
Similarly, each derivative instrument the bank transacts attracts one or
more capital charges. Typically, there will be a charge for the risk of loss
associated with market movements (market risk capital charge) and there
will be a charge associated with the potential for counterparty default (a
counterparty credit capital charge). This capital must be held for the life
of the transaction and will vary over the life of the transaction depending
on the credit quality of the counterparty, the market, and the remaining
maturity of the transaction. Clearly, the level of expected capital that
must be held throughout the life of the transaction impacts the profit-
ability of the trade and should be reflected as an “adjustment” to the
fair value of the trade. This adjustment is known as a capital valuation
adjustment or KVA.
In summary, we have the following adjustments with some of their key properties:
• CVA: a reduction in value reflecting the possibility that the counterparty defaults before the expiry of the transaction;
• DVA: an increase in value reflecting the bank's own default risk;
• FVA: a cost if the derivative is an asset that must be funded, or a benefit if it is a liability;
• KVA: a cost reflecting the capital that must be held over the life of the transaction.
In particular, if the buyer and the seller of the derivative agree on their respective risks of default (or credit spreads), then there is an arbitrage-free “price” agreed on by both parties, V:

V = V_R - \mathrm{CVA} + \mathrm{DVA},

where V_R denotes the default-free value of the derivative.
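As a rough numerical illustration of these adjustments (not a production XVA calculation), the sketch below approximates CVA and DVA from discretized expected exposure profiles, marginal default probabilities, discount factors, and a recovery assumption, then applies them to a default-free value. All inputs are invented.

```python
def valuation_adjustment(exposures, default_probs, discounts, recovery):
    """Sum of discounted expected exposure x marginal default probability x LGD
    over the time buckets of the trade (a textbook-style approximation)."""
    lgd = 1.0 - recovery
    return lgd * sum(e * q * d for e, q, d in zip(exposures, default_probs, discounts))

# Illustrative 3-period profiles (per unit notional).
epe = [0.020, 0.030, 0.025]        # expected positive exposure (bank's claim on counterparty)
ene = [0.015, 0.020, 0.018]        # expected negative exposure (counterparty's claim on bank)
pd_cpty = [0.010, 0.012, 0.015]    # counterparty marginal default probabilities
pd_own = [0.004, 0.005, 0.006]     # bank's own marginal default probabilities
df = [0.99, 0.97, 0.95]            # discount factors

cva = valuation_adjustment(epe, pd_cpty, df, recovery=0.40)
dva = valuation_adjustment(ene, pd_own, df, recovery=0.40)
v_riskfree = 0.050                 # default-free value per unit notional (illustrative)
print(round(v_riskfree - cva + dva, 6))   # adjusted value V = V_R - CVA + DVA
```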
Operational Risk
The original Basel Accord set aside capital requirements for credit and
market risk. Losses associated with operational failures or the legal fall-
out that followed were mostly associated with failures in credit processes
or market risk management lapses. But over time it became increasingly
clear that many losses were not breakdowns in credit or market risk management, but failures in processes clearly distinct from these two disciplines, and that such failures could ultimately result in significant credit or market risk losses or even more punitive legal claims. Therefore, the second Basel
Accord (Basel II) [2] clearly defined operational risk as “the risk of loss
resulting from inadequate or failed internal processes, people and systems,
or external events” and prescribed three approaches to calculate capital for
this risk.
The financial crisis highlighted how pervasive and impactful the poor management of operational risk could be to financial institutions, in particular commercial banks, with failures ranging from lapses in sound mortgage origination practices to errors in home foreclosures. Some of the costlier operational risk losses before and after the financial crisis are documented in [13, 15].
The increased losses leading to and immediately after the financial crisis
increased pressure to improve the models assessing operational risk capital
and more broadly enhance and standardize the practices related to opera-
tional risk management. On the capital front, Basel III [5] provided three
approaches for calculating operational risk capital:
• the basic indicator approach (BIA);
• the standardized approach (TSA);
• the advanced measurement approach (AMA).
We focus here on the last approach, because it gives the industry the
most latitude to produce modeling techniques that address the idiosyn-
cratic nature of operational risk at the particular commercial bank. This
latitude also means a wide range of practice has developed around the
calculation under the approach and the realization by regulators and prac-
titioners alike that the problem of quantitatively assessing capital for oper-
ational risk is a difficult problem still in its infancy.
The regulatory requirements for calculating operational risk capital
under the AMA framework (Basel III [3], [4]) essentially require the bank to estimate its aggregate annual operational loss, L, at a very high (99.9th percentile) confidence level, combining internal loss data, external loss data, scenario analysis, and business environment and internal control factors. The challenge then becomes estimating the CDF of L in order to find quantiles,

F_L(y) = \mathrm{Prob}(L \le y).
Loss data, particularly for tail events, is sparse; even pooling data across institutions and using methods from extreme value theory (EVT) still makes the task practically difficult.
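A minimal sketch of the simulation approach implied here: draw an annual loss count from a frequency distribution, draw severities, aggregate, and read off a high quantile of the simulated distribution of L. The Poisson and lognormal choices and all parameters are illustrative assumptions, not a calibrated AMA model.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_annual_losses(n_years, freq_lambda, sev_mu, sev_sigma):
    """Aggregate loss L per simulated year: a Poisson count of lognormal severities."""
    counts = rng.poisson(freq_lambda, size=n_years)
    return np.array([rng.lognormal(sev_mu, sev_sigma, size=c).sum() for c in counts])

losses = simulate_annual_losses(n_years=200_000, freq_lambda=25.0, sev_mu=10.0, sev_sigma=2.0)
var_999 = np.quantile(losses, 0.999)       # 99.9th percentile of F_L, a capital-style quantile
expected_loss = losses.mean()
print(f"expected annual loss: {expected_loss:,.0f}")
print(f"99.9% quantile of aggregate loss: {var_999:,.0f}")
```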
As we have already noted, most institutions do not have enough oper-
ational loss data (internal data) to estimate the extreme tail of the loss
distribution reliably. Even if an institution does have enough data for a
particular type of loss, loss data is inherently non-homogenous and the
tails for some types of losses (e.g. employee fraud) may have different dis-
tributional characteristics than the tails for other types of losses (e.g. sales
practice failures). These groupings of losses are typically known as opera-
tional risk categories (ORC). So in practice banks must estimate multiple
losses, Li where i ranges over all ORCs. In this case, data becomes even
sparser.
If external data is brought in to augment internal data (which it must
by the rule), how do we scale the external data to fit the risk characteristics
of the firm being modeled? For example, using external loss data for credit
card fraud from an organization that has multiples of exposure to the
modeled company without some kind of scaling of the data would lead to
misleading and outsized capital related to credit card fraud.
Besides the challenge of estimating the distribution for each Li there is
the task of modeling the co-dependence of the frequency and severity of
each loss in each ORC along with modeling the co-dependence structure
between the ORC groupings.
Given the complications both theoretical and practical outlined above,
many market practitioners and recent regulatory agencies have questioned
the feasibility of a modeled capital charge for operational risk and whether
a more standard and possibly punitive charge should be levied. This
approach too has its disadvantages, as simply adding capital without any
relationship to the risk it is meant to cover is not only bad for the industry
but in fact poor risk management. So regardless of the direction the Basel
Committee ultimately takes, the industry will need to tackle this difficult
problem to better manage this risk.
Fair lending risk analysis aims to monitor compliance with the fair
lending laws and statutes, in particular the Equal Credit Opportunity Act
(ECOA) and the Fair Housing Act (FaHA). But the origins of fair lend-
ing analysis go back to at least 1991 when data collected under the Home
Mortgage Disclosure Act (HMDA) was first released [24].
In 1975 Congress required, through HMDA, loan originators to main-
tain data on mortgage originations, mainly to monitor the geography of
these originations. In 1989 after further amendments to HMDA, require-
ments were included to retain the race and ethnicity of loan applicants
along with denial rates. When this information was released in 1991, the
results not only fueled outrage in some circles because of the disparate
loan denial rates between blacks, Hispanics and whites, but prompted the Federal Reserve Bank of Boston to perform a detailed statistical analysis
of the data in order to draw conclusions about discriminatory practices in
mortgage lending. This now famous, heavily scrutinized, and often criti-
cized study is popularly known as the “Boston Fed Study” (see [24] for
references) and in many ways laid the foundation for all fair lending analy-
sis that followed.
However, fair lending analysis now extends to all forms of credit, rang-
ing from auto loans to credit cards; from home improvement loans to
home equity lines. Beyond the origination of credit, there are requirements
to identify abusive practices, like predatory lending (e.g. NINJA loans3)
and unfair foreclosures. And, in the wake of the financial crisis, the Dodd-
Frank Wall Street Reform and Consumer Protection Act of 2010 created
the Consumer Financial Protection Bureau (CFPB), whose primary task is
to protect consumers by carrying out federal consumer financial laws. In
particular, as it relates to fair lending, the CFPB is the primary regulator
that attempts to detect and enforce remediation related to unfair lending
practices. Typically, these policies look to detect discriminatory treatment
of persons in protected classes. Though the definition of protected class
varies by jurisdiction and regulation, most laws provide protection based
on race, color, religion, gender, national origin, and sexual orientation.
Fair lending risk can be broken down into two broad types: disparate treatment and disparate impact. To see how it is typically analyzed, suppose the relevant applicant and loan characteristics, denoted \{L, C, A\}, are represented by a set of explanatory variables,

\{L, C, A\} = \{X_1, X_2, X_3, \ldots, X_n\},

and let \pi denote the expected profit of the loan as a function of these characteristics,

\pi(L, C, A) = \pi(X_1, X_2, X_3, \ldots, X_n).

A profit-maximizing lender then follows the decision rule

approve if \pi > \pi^*, \qquad deny if \pi \le \pi^*.   (1)
The set-up in Eq. (1) lends itself nicely to the credit scoring analy-
sis typically performed using logit, probit, or even ordinary least squares
(OLS) analysis. However, one of the drawbacks of the problem as stated
is due to historic overt discriminatory practices (redlining, for example, in
the USA). Therefore, any historical calibration of the performance model
would immediately suffer from an omission-in-variables (OIV) bias. To
account for this we include the protected class variables P_1, P_2, \ldots, P_M and modify the decision rule to

approve if \pi + D > \pi^*, \qquad deny if \pi + D \le \pi^*,

where

\pi = -D + \sum_{k=1}^{n} \beta_k X_k + \varepsilon = \alpha - \sum_{k=1}^{M} \lambda_k P_k + \sum_{k=1}^{n} \beta_k X_k + \varepsilon.   (2)
We note first that Eq. (2) implies the profit equation takes into account
characteristics of the protected classes, for example, race, gender, and so
on. From a profit perspective this may be true and in fact necessary due to
historical discriminatory practices leading to inequities in education, com-
pensation, or even job retention. In fact, current discriminatory practices
may exist which will impact the ability of a protected class to repay a loan.
However, under ECOA and FaHA, banks are not allowed to use protected
class information in their decision processes related to loan origination.
This would be disparate treatment. Therefore, in assessing the approval
processes for adherence to fair-lending practices, the regression Eq. (2) is
used to assess whether the coefficients of the protected class characteristics
are significantly different from zero.
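A hedged sketch of this kind of test is shown below, using synthetic data and a logit regression in the spirit of Eq. (2): the approval outcome is regressed on credit characteristics plus a protected-class indicator, and the significance of the protected-class coefficient is examined. The variable names and data are invented; statsmodels is assumed to be available.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicant data: credit characteristics plus a protected-class indicator.
df = pd.DataFrame({
    "credit_score": rng.normal(680, 50, n),
    "debt_to_income": rng.uniform(0.05, 0.6, n),
    "protected_class": rng.integers(0, 2, n),
})
# Synthetic approval decisions driven only by the credit characteristics here.
latent = 0.02 * (df["credit_score"] - 680) - 6.0 * (df["debt_to_income"] - 0.3)
df["approved"] = (latent + rng.logistic(size=n) > 0).astype(int)

# Regression in the spirit of Eq. (2): outcome on credit variables plus the protected-class dummy.
X = sm.add_constant(df[["credit_score", "debt_to_income", "protected_class"]])
fit = sm.Logit(df["approved"], X).fit(disp=0)
print(fit.params)    # estimated coefficients
print(fit.pvalues)   # a protected_class coefficient significantly different from zero would
                     # warrant a disparate-treatment investigation, subject to the
                     # omitted-variable caveats discussed above
```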
The approach just outlined is now typically used by commercial banks
and regulatory agencies to identify disparate impact or disparate treatment
in lending practices. But there are a number of practical and theoretical
difficulties with the approach. As noted earlier, there may be any number
of relevant variables that determine the credit quality of the borrower. If
those variables are omitted in the regression equation, then their impact
may bias one or more of the protected class coefficients. This is one type
of OIV problem. There are more subtle types of OIV problems, such
as unobservable variables that influence the outcome of the lending pro-
cess that are difficult to assess, whose omission could lead to correlation
between the error term and the outcome variable (approval or denial),
leading to coefficient bias.
Beyond fair lending, commercial banks must also build detection and surveillance models to address financial crimes such as:
• money laundering;
• fraud;
• tax avoidance.
A common modeling approach for detecting such activity, credit card fraud in particular, is the support vector machine (SVM), which is trained by solving the (dual) quadratic programming problem of maximizing

w(\alpha) = \sum_{k=1}^{m} \alpha_k - \sum_{j,k=1}^{m} \alpha_k \alpha_j \gamma_k \gamma_j \, k(x_k, x_j)   (3)

subject to

0 \le \alpha_k \le \frac{C}{m}, \quad (k = 1, \ldots, m),   (4)

\sum_{k=1}^{m} \alpha_k \gamma_k = 0,   (5)
where x_k, k = 1, 2, \ldots, m, are the training data describing the credit card transactions,5 which we collectively denote by X; k(\cdot,\cdot) is a kernel function defined on X \times X and associated with a (typically high-dimensional) feature space H; C is the cost parameter and represents a penalty for misclassifying the data; and \gamma_k are the classification labels for the data points (i.e. +1 or -1, depending on whether x_k is a fraudulent or legitimate transaction).
The solution to (3), (4), and (5) provides the (dual) classification function:
\sum_{k=1}^{m} \alpha_k \gamma_k \, k(x_k, x) + b = 0.   (6)
There are several aspects of this problem which are practically and theo-
retically challenging. First, due to the high dimensionality the solution of
the programming problem is computationally difficult, though there are
iterative approaches, see [19] for example, that can scale large problems for
SVM implementations. Second, the choice of the kernel function and the
cost parameter can greatly influence the outcome of the classification func-
tion and its effectiveness. The cost parameter is often difficult to estimate
and only experimenting with choices of k and reviewing results is currently
available. Last, and probably most pressing, there is no clear-cut best mea-
sure of model performance. The industry has used the receiver operating
characteristic (ROC) and the area under the ROC curve (AUC) as well as
functions of AUC, like the Gini coefficient (see [7] for a fuller discussion),
but each has weaknesses when encountering imbalanced data; that is, data
where the occurrence of one class, for example fraud, has a very low prob-
ability of occurring. A frequently used example (see [8] for instance) to
describe this difficulty as it applies to accuracy as a performance measure is the following: suppose in our credit card example the probability of correctly detecting legitimate activity as legitimate is 99/100 and the probability of correctly detecting fraudulent activity as fraudulent is also 99/100. This would
appear to be a very accurate detection system. However, now suppose we have a data imbalance; for example, we know that only one in 1,000 records is fraudulent. Then, on average, in a sample of 100 records flagged as fraudulent only about nine would actually be fraudulent, because the 1 percent of legitimate records that are misclassified far outnumber the true frauds.
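The sketch below simply generalizes that arithmetic: given a detection rate, a false-positive rate, and the fraud prevalence, it computes the share of flagged records that are actually fraudulent, which is the quantity that accuracy alone hides on imbalanced data.

```python
def flagged_precision(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(fraud | flagged) via Bayes' rule: true positives divided by all flagged records."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# The example from the text: 99% sensitivity, 99% specificity, 1-in-1,000 fraud prevalence.
p = flagged_precision(sensitivity=0.99, specificity=0.99, prevalence=0.001)
print(round(100 * p, 1), "% of flagged records are actually fraudulent")   # ~9.0%
```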
Model Risk
Models are used pervasively throughout all commercial banks. In fact,
this chapter has discussed just a small subset of the types of models
used daily in most banks throughout the world. Furthermore, with the
ability to store and manipulate ever larger data sets, more computing
power, and the increased packaging of models into easy-to-use soft-
ware, the upward trend in model use in the banking industry is likely
to continue unabated. But with model use come model risks. This risk
was highlighted with the notable model risk management failures prior
to and during the financial crisis. The pre-crisis pricing of CDOs using
Gaussian copula models (see [18] for an in-depth discussion) or the
models used by rating agencies to rate structured products are just two
of many examples.
Though not the driving factor behind the financial crisis, the regu-
latory agencies realized that poor model risk management was likely a
contributing factor to the crisis and that guiding principles for the proper
management of model risk were needed. This framework was provided
in the form of a joint agency bulletin [20], known typically as “SR-11-7”
or “2011-12” in the banking industry.6 We shall simply refer to it as the
agency guidance.
The agency guidance defined a model as a “quantitative method, sys-
tem, or approach that applies statistical, economic, financial, or mathe-
matical theories, techniques, and assumptions to process input data into
quantitative estimates”. Furthermore, it stated “that a model consists
of three components: an information input component, which delivers
assumptions and data to the model; a processing component, which trans-
forms inputs into estimates; and a reporting component, which translates
the estimates into useful business information”. The document goes on
412 K.D. ODEN
to define model risk as “the potential for adverse consequences from deci-
sions based on incorrect or misused model outputs or reports”.
The regulatory definition of model, for all practical purposes, expanded
the scope of model risk. Early attempts at defining and measuring model
risk primarily focused on the “transformation component” of the model
(the “quant stuff”) and largely ignored the input and output components.
Moreover, most of the model risk work pre-crisis focused on risks associ-
ated with derivative pricing models ([11, 12, 21]), though the largest risk
in most commercial banks comes from credit and its approval and ongoing
monitoring processes, which are increasingly model driven.
Fundamentally, model risk can be broken down into three categories—
inherent, residual, and aggregate risks. These risks can be described as
follows:
• Inherent Risk
–– All models are simplifications of real-world phenomena.
–– This simplification process leads to risk of omitting relevant fea-
tures of the process one wishes to model.
–– Some inherent risks can be mitigated or reduced while others can-
not or may not even be known at the time of model development.
• Residual Risk
–– The risk that remains after mitigating all known inherent risks that
can be managed or are deemed cost effective to manage.
–– Accepted risk for using a particular model.
• Aggregate Risk
–– The risk to the firm from all model residual risks.
–– Not simply an additive concept as there will likely be complex
dependencies between models either directly or through their
residual risks.
Within this framework, most model risk work has focused on analyzing
inherent risk and has attempted to measure model misspecification within
a well-defined class of models in order to make the problem tractable.
Bayesian model averaging is one such approach that has been explored
extensively ([15, 22]). Cont [11] refers to this type of model misspecifica-
tion risk as “model uncertainty” and asks how it can be quantified. For a family ℚ of candidate pricing models (risk-neutral measures) consistent with observed market prices, Cont defines
\mu_{\mathbb{Q}}(V) = \sup_{Q \in \mathbb{Q}} E^{Q}[V] - \inf_{Q \in \mathbb{Q}} E^{Q}[V],
where each expectation is taken with respect to a risk-neutral measure Q in ℚ. Cont goes on to show that μℚ is a coherent measure of model uncertainty7 and, for a fixed model, defines the model risk ratio

MR(V) = \frac{\mu_{\mathbb{Q}}(V)}{E[V]}.
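A toy sketch of these two quantities: given values of the same derivative under a handful of candidate calibrated models, the uncertainty measure is the spread of the model values and the ratio scales it by a reference value. The numbers are invented.

```python
def model_uncertainty(values):
    """mu_Q(V): spread between the largest and smallest model values of the payoff."""
    return max(values) - min(values)

def model_risk_ratio(values, reference_value):
    """MR(V) = mu_Q(V) / E[V], using a chosen reference model value in the denominator."""
    return model_uncertainty(values) / reference_value

# Values of the same exotic option under four candidate calibrated models (illustrative).
candidate_values = [10.2, 10.6, 9.9, 10.4]
reference = 10.2                      # value under the bank's booking model
print(round(model_uncertainty(candidate_values), 4))              # 0.7
print(round(model_risk_ratio(candidate_values, reference), 4))    # ~0.0686
```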
An alternative, scenario-weighting approach to quantifying model risk, due to Abasto and Kust [1], perturbs the pricing measure itself, subject to a bound on the relative entropy of the change of measure,

D(m) = E[m \log m] \le \eta, \quad \text{where } m(X) = \tilde{f}(X)/f(X)

is the likelihood ratio between the perturbed density \tilde{f} and the reference density f. In a discrete (Monte Carlo) setting with N scenarios X_i carrying probabilities p_i, expectations take the form

E_p[V(X)] = \sum_{i=1}^{N} p_i V(X_i) = \sum_{i=1}^{N} p_i V_i.
They then solve

\min_{p} D(p \,\|\, p_0)

subject to

\sum_{i=1}^{N} p_i V(X_i) = V(1 + \alpha),

\sum_{i=1}^{N} p_i g_{ij} = C_j, \quad j = 1, \ldots, M,

\sum_{i=1}^{N} p_i = 1.
Here D(p||p0) is the Hellinger distance between p and the target
model p0, gij is the payoff of the jth calibration instrument Cj under the ith
scenario Xi, and α is, initially, some fixed small increment.
Finally, they use the fact that the square-root vector P = (\sqrt{p_1}, \sqrt{p_2}, \ldots, \sqrt{p_N}) of the probabilities resides on a unit hypersphere, so they fix a small angle \varphi^* (say 0.01) and find two models p^- and p^+ corresponding to small increments \alpha < 0 and \alpha > 0. These two models lie within an “01” normalized distance of the target model in the following sense:

\mathrm{Model01} = E_{p^+}[V] - E_{p^-}[V],

subject to

\langle P^+, P^- \rangle = \cos(\varphi^*).
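Below is a hedged sketch of this kind of scenario reweighting, under simplifying assumptions: it uses relative entropy rather than the Hellinger/angle normalization described above, invents the scenario payoffs and calibration instruments, and relies on a generic SLSQP solver. It is meant only to show the structure of the constrained problem, not to reproduce Abasto and Kust's results.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 250                                   # Monte Carlo scenarios (kept small for the sketch)
p0 = np.full(N, 1.0 / N)                  # reference (equal-weight) scenario probabilities
v = rng.lognormal(0.0, 0.25, N)           # payoff V(X_i) per scenario (invented)
g = np.column_stack([np.ones(N), rng.normal(1.0, 0.1, N)])  # calibration instrument payoffs g_ij
c = p0 @ g                                # calibration prices C_j reproduced by the reference model
v0 = p0 @ v                               # reference value E_p0[V]

def reweighted_model(alpha: float) -> np.ndarray:
    """Scenario weights p close to p0 (in relative entropy) that move the value of V
    by a factor (1 + alpha) while still repricing the calibration instruments."""
    def divergence(p):
        p = np.clip(p, 1e-12, None)
        return float(np.sum(p * np.log(p / p0)))
    constraints = [
        {"type": "eq", "fun": lambda p: p @ v - v0 * (1.0 + alpha)},   # shifted value of V
        {"type": "eq", "fun": lambda p: p @ g - c},                    # reprice calibration instruments
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},                # probabilities sum to one
    ]
    result = minimize(divergence, p0, method="SLSQP",
                      bounds=[(0.0, 1.0)] * N, constraints=constraints)
    return result.x

p_plus, p_minus = reweighted_model(+0.01), reweighted_model(-0.01)
model01 = p_plus @ v - p_minus @ v        # spread in value between the two nearby models
print(round(float(model01), 6))
```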
Conclusion
We have given a flavor of the types of pressing quantitative problems facing the commercial banking industry in the post-crisis financial environment. This list is far from exhaustive, and in the limited space available we could only scratch the surface of these nuanced and complex issues. Many other quantitative problems facing the industry are equally rich in complexity and importance, and that fact leads the author to believe that the golden age of quantitative finance is not in its twilight but stretches before us on the horizon.
Notes
1. Financial Accounting Standards Board (FASB) Statement No. 157 in the USA, http://www.fasb.org/summary/stsum157.shtml; International Accounting Standard (IAS) 39, http://ec.europa.eu/internal_market/accounting/docs/consolidated/ias39_en.pdf.
2. To be exact, the uncollateralized derivative asset which is hedged with a col-
lateralized derivative instrument requires funding as the hedging liability
will require collateral funding. Conversely, an uncollateralized derivative
liability will benefit from collateral inflows from the hedging asset.
3. NINJA loans are lightly documented loans that have been viewed as predatory. The acronym stands for No Income, No Job, No Assets.
4. https://www.fincen.gov/statutes_regs/bsa/.
5. More accurately, xk are the derived attributes of the training data. For each
credit card transaction for instance, a set of attributes will be aggregated like
the number of transactions at a particular location or the average size of
transactions over the last three months.
6. The Federal Reserve Board-issued document is known as SR 11-7, while the OCC document is known as Bulletin 2011-12.
7. (1) Coherent in the sense that model uncertainty reduces to uncertainty in
market value (bid–ask spread), (2) a derivative that can be replicated in a
model-free way has no uncertainty, (3) diversification and hedging with
traded options decrease uncertainty.
8. DV01 = Dollar Value of a basis point decrease in “interest rates”.
9. Abasto and Kust actually perform the minimization relative to the equal
weight probability measure pi = 1/N for all i in the Hellinger distance and
demonstrate that the end results are identical.
References
1. Abasto, Damian and Kust, Mark P., “Model01: Quantifying the Risk of
Incremental Model Changes”, September 5, 2014. Available at SSRN: http://
ssrn.com/abstract=2492256 or 10.2139/ssrn.2492256.
2. Basel Committee on Banking Supervision (BCBS), “International Convergence
of Capital Measurement and Capital Standards A Revised Framework
Comprehensive Version”, June 2006 http://www.bis.org/publ/bcbs128.
pdf.
3. Basel Committee on Banking Supervision (BCBS), “Observed range of prac-
tice in key elements of Advanced Measurement Approaches (AMA)”, July
2009 http://www.bis.org/publ/bcbs160b.pdf.
4. Basel Committee on Banking Supervision (BCBS), “Operational Risk –
Supervisory Guidelines for the Advanced Measurement Approaches”, June
2011 http://www.bis.org/publ/bcbs196.pdf.
5. Basel Committee on Banking Supervision (BCBS), “Basel III: A Global
Regulatory Framework for more Resilient Banks and Banking Systems
(Revised)”, June 2011 http://www.bis.org/publ/bcbs189.pdf.
6. Berkane, Maia. Wells Fargo & Co., Private Communication, March 2015.
7. Bhattacharyya, Siddhartha, Jha, Sanjeev, Tharakunnel, Kurian, Westland, J. Christopher, “Data Mining for Credit Card Fraud: A Comparative Study”, Decision Support Systems, 50(3), February 2011, 602–613.
8. Bolton, Richard J., and David J. Hand, “Statistical Fraud Detection: A
Review”, Statistical Science, 17(3), 2002, 235–249.
9. Chan, P.K., Fan, W., Prodromidis, A.L., Stolfo, S.J., “Distributed Data Mining
in Credit Card Fraud Detection”, Data Mining, (November/December),
1999, 67–74.
10. Chen, R.C., Chen, T.S., Lin, C.C., “A New Binary Support Vector System for
Increasing Detection Rate of Credit Card Fraud”, International Journal of
Pattern Recognition, 20(2), (2006), 227–239.
11. Cont, Rama. “Model Uncertainty and Its Impact on the Pricing of Derivative
Instruments”, Mathematical Finance, 16, July 2006.
12. Derman, E., “Model Risk”, Risk, 9(5), 139–145, 1996.
13. Glasserman, P., and Xu, X., “Robust Risk Measurement and Model Risk”, Quantitative Finance, 2013.
14. Grocer, Stephen, “A List of the Biggest Bank Settlements”, The Wall Street Journal, 23 June 2014.
15. Hoeting, J. A., Madigan, D., Raftery, A. E., Volinsky, C. T., “Bayesian Model Averaging: A Tutorial”, Statistical Science, 14(4), 1999, 382–417.
16. Jorion, Philippe. GARP (Global Association of Risk Professionals) (2009-06-
08). Financial Risk Manager Handbook (Wiley Finance) Wiley. Kindle Edition.
17. Kenyon, Chris., Stamm, Roland., “Discounting, Libor, CVA and Funding:
Interest Rate and Credit Pricing”. Houndmills, Basingstoke: Palgrave
Macmillan, 2012. Print.
18. Morini, Massimo, “Understanding and Managing Model Risk: A Practical Guide for Quants, Traders and Validators”, Hoboken: Wiley, 2011.
19. Ngai, W.T., Hu, Yong., Wong, Y. H., Chen, Yijun., Sun, Xin. “The Application
of Data Mining Techniques in Financial Fraud Detection: A Classification
Framework and an Academic Review of Literature.” Decision Support Systems
50, 3 (February 2011), 559–569.
20. OCC Bulletin 2011-12/Federal Reserve Bulletin SR 11-7, “Supervisory
Guidance on Model Risk Management”, April 4, 2011. http://www.occ.
gov/news-issuances/bulletins/2011/bulletin-2011-12a.pdf.
21. Platt, J.C., “Fast Training of Support Vector Machines Using Sequential
Minimal Optimization”, in: B. Scholkopf, C.J.C. Burges, A.J. Smola (Eds.),
Advances in Kernel Methods—Support Vector Learning, MIT Press, Cambridge,
MA, 1998, 185–208.
22. Raftery, A. E., “Bayesian Model Selection in Structural Equation Models”, in
Testing Structural Equation Models, K. Bollen and J. Long, eds. Newbury
Park, CA: Sage, 1993 163–180.
23. Rebonato, R, “Theory and Practice of Model Risk Management”, in Modern
Risk Management: A History; RiskWaters Group, 2003. 223–248.
24. Ross, S. L., Yinger, J., “The Color of Credit: Mortgage Discrimination, Research
Methodology, and Fair-Lending Enforcement”. Cambridge, Mass: MIT Press,
2002.