CH 7: Operational Risk and Resiliency
By AnalystPrep
Reading 99: Introduction to Operational Risk and Resilience
After completing this reading, you should be able to:
Describe an operational risk management framework and assess the types of risks that can fall within the scope of such a framework.
Describe the seven Basel II event risk categories and identify examples of operational risk events in each category.
Explain the characteristics of operational risk exposures and operational loss events, and the challenges that can arise in managing operational risk due to these characteristics.
An operational risk management framework is an approach to mitigating the risks associated with
organizational operations. It involves identifying, assessing, monitoring, and controlling risks that
could result in adverse outcomes that affect an organization’s ability to meet its goals and objectives.
An operational risk management framework should include clear processes, policies, and procedures
for identifying potential operational risks, assessing their severity, and developing strategies for mitigating them.
The Basel Committee defines operational risk as “the risk of loss resulting from inadequate or failed internal processes, people, and systems or from external events.”
Operational risk encompasses a wide range of potential threats, including natural disasters, human
mistakes, inadequate procedures or technologies, cyberattacks, financial losses due to fraud or theft,
and reputational damage attributed to regulatory violations. To successfully manage these risks,
organizations must have a comprehensive approach that incorporates all aspects of business
operations. The focus of such an approach should range from employee roles and responsibilities to
the use of technology and the development of data security protocols.
Many programs that manage risks in banks take effective management of operational risk as a fundamental element that is inherent in all banking products, systems, activities, and processes. Therefore, sound operational risk management reflects the board's and senior management's effectiveness in administering the bank's portfolio of products, activities, processes, and systems.
Operational risk can also be referred to as non-financial risk. Some banks may have NFRM (Non-Financial Risk Management) departments. This happens because factors that are not financial in nature often influence operational risk.
Enterprise Risk Management (ERM) is the term for the comprehensive management of all business
risks.
In the financial sector, ERM offers a structure for managing an organization's financial and non-financial risks.
Outside the financial industry, operational risks may account for the majority of a company's risk profile.
The Basel Committee on Banking Supervision (BCBS) breaks down operational risk into seven major level 1 event categories. The table below summarizes these seven categories of OpRisk according to the Basel Committee.
Internal fraud: Losses due to acts of a type intended to defraud, misappropriate property, or circumvent regulations, the law, or company policy (excluding diversity/discrimination events), involving at least one internal party.

External fraud: Losses due to acts of a type intended to defraud, misappropriate property, or circumvent the law, by a third party.

Employment practices and workplace safety: Losses arising from acts inconsistent with employment, health, or safety laws or agreements, from payment of personal injury claims, or from diversity/discrimination events.

Clients, products, and business practices: Losses arising from an unintentional or negligent failure to meet a professional obligation to specific clients (including fiduciary and suitability requirements), or from the nature or design of a product.

Damage to physical assets: Losses arising from loss of or damage to physical assets from natural disasters or other events.

Business disruption and system failures: Losses arising from disruption of business or system failures.

Execution, delivery, and process management: Losses from failed transaction processing or process management, or from relations with trade counterparties and vendors.
Internal fraud includes any fraudulent activity perpetrated by a firm's employees. It is one of the less frequent types of OpRisk losses, a rarity owed to the sophisticated internal control investments institutions have made over the years. However, cases of internal fraud still occur, and billions of dollars are still lost to it.
Event Category: Internal fraud
Examples: Transactions not reported (intentional); transaction type unauthorized (with monetary loss); mismarking of position (intentional).
External fraud includes all forms of fraud perpetrated against a firm by third parties or outsiders. In banking, good examples would be system hacking and cheque and credit card fraud. In recent years, cyber-related external fraud has become increasingly common.
EPWS (employment practices and workplace safety) losses are more prominent in parts of the world where labor laws are either outdated or weakly enforced.
Clients, products, and business practices (CPBP) is one of the categories with the highest number of loss events, particularly in the US. It encompasses losses, for example, from disputes with clients and counterparties, regulatory fines due to improper business practices, and product mis-selling.
The other operational risk involves damage to physical assets. This can result from natural disasters or external human sources (e.g., terrorism and vandalism). Few firms regularly incur losses from this type of risk because events in this category are usually either very small or extremely large.
Table 1.6: Examples of Events under the Damage to Physical Assets Category
Events under the BDSF (business disruption and system failures) category can be quite difficult to spot. For example, a system crash almost always comes with financial costs, but these losses would most likely be classified as EDPM (execution, delivery, and process management). To illustrate this, consider the derivatives department of a large bank that experiences a crash at 9:00 am. The IT department does all it can, including turning to backup plans, all in vain. The system remains down for the rest of the trading day.
On checking the status of the transactions, the bank learns that it needs to fund an extra USD 10 billion on that day. Since the markets are already closed, the bank is forced to negotiate special conditions with its counterparties. Unfortunately, the rates at which the transactions are settled ultimately end up being higher than the daily average. Although a BDSF event – a system failure – occasioned this loss, it will most likely be categorized as part of the next category, execution, delivery, and process management.
These are losses from failed transaction processing. Alternatively, these losses could emanate from failed process management or from relations with trade counterparties and vendors. Losses of this event type are quite frequent since human error, miscommunication, and the like can occasion them.
In particular, these losses are common in an environment where banks have to process millions of transactions daily.
Table 1.8: Examples of Events under the Execution, Delivery & Process Management Category
The Basel Committee defines operational risk as “the risk of loss resulting from inadequate or failed internal processes, people, and systems or from external events.” Operational risk includes legal risk but excludes strategic and reputational risk.
Legal risk relates to the enforceability or breach of contracts, the applicability of laws and regulations, and the risk of financial loss in the event of mistakes or breaches. Compliance refers to adherence to all the rules and regulations that are relevant to a certain activity, as well as the law. Banks face huge fines and expensive business restrictions that regulators impose as a result of compliance failures.
Reputational Risk
Reputational risk stems from damage to an organization's reputation, public image, or brand due to the negative impacts of an operational event. It can arise from a variety of sources, including corporate mismanagement, product failures, safety issues, data breaches and cyber-attacks, employee misconduct, and other adverse events. External actors, such as competitors, activists, and media outlets, can also cause reputational risk.
Strategic Risk
Strategic risk can have a significant impact on the success of an organization. It encompasses risks associated with decisions related to a wide variety of areas, such as mergers and acquisitions, capital investments, entry into new markets or product lines, pricing strategies, and restructuring. Poor strategic decisions may lead to financial losses due to impaired asset values, higher costs, lost opportunities for revenue growth, reduced efficiency in operations and processes, or missed market opportunities.
There are four main tasks that make up risk management frameworks:
Risk identification.
Risk assessment.
Risk mitigation.
Risk monitoring.
Risk identification is the process of identifying and analyzing potential risks that could affect an organization. This task involves determining which risks could have a material impact on the organization's ability to meet its objectives and then gauging the likelihood of their occurrence. It also involves documenting any existing controls in place to mitigate those risks. Risk identification techniques include brainstorming, interviews with stakeholders, failure mode and effect analysis (FMEA), root cause analysis, examining industry trends, and other methods.
Risk assessment involves evaluating the probability of a risk occurring and its severity or impact if it does occur. This helps organizations prioritize their attention on the most significant risks so they can focus mitigation efforts where they are needed most. Different risk assessment approaches, such as qualitative or quantitative methods, may be used depending on the complexity of the identified risk.
Risk mitigation involves taking action to reduce the likelihood or impact of an identified risk. Risk mitigation strategies include avoiding certain activities which pose a high risk, instituting control systems to monitor activities that could create unexpected events, transferring risks through insurance policies, and establishing quick-response contingency plans.
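To make the quantitative side of risk assessment concrete, here is a minimal sketch that ranks a few hypothetical risks by expected annual loss (estimated likelihood times estimated severity); the risk names and figures are illustrative assumptions, not values from the text.

```python
# Minimal sketch of a quantitative risk-assessment step: rank risks by
# expected annual loss = estimated likelihood (per year) x estimated severity.
# The risks and numbers below are hypothetical assumptions for illustration.

risks = [
    {"name": "External fraud (card fraud)", "likelihood": 5.0, "severity": 50_000},
    {"name": "System outage",               "likelihood": 0.5, "severity": 2_000_000},
    {"name": "Regulatory fine (CPBP)",      "likelihood": 0.1, "severity": 25_000_000},
]

for r in risks:
    r["expected_loss"] = r["likelihood"] * r["severity"]

# Prioritize mitigation effort on the largest expected losses first.
for r in sorted(risks, key=lambda r: r["expected_loss"], reverse=True):
    print(f'{r["name"]:35s} expected annual loss = {r["expected_loss"]:>12,.0f}')
```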
Risk monitoring refers to the continuous tracking of risk exposures and of how effective existing controls are at mitigating them. Organizations should regularly monitor identified risks to ensure that current controls are still valid and adjust them if necessary. Monitoring should also cover changes in external factors that could cause an increase or decrease in risk levels over time. Among the tools commonly used for monitoring are key risk indicators and regular risk reports.
Heterogeneous
Operational risk encompasses an extremely broad range of risks, ranging from external fraud to cyber-attacks and data privacy breaches. Since each risk type can have different causes, consequences, and potential losses, it is important for firms to understand the risks they face in order to plan accordingly. For example, external fraud incidents can be caused by anything from stolen credit cards to malicious software infiltrations, whereas internal fraud incidents may include cash theft or rogue trading activities. It is also important for firms to consider the potential financial impact of each type of operational risk event, which varies greatly depending on the nature and severity of the event.
Idiosyncratic
Oftentimes, the severity and frequency of operational risk events are largely dependent upon a firm's ability and willingness to manage them effectively. Depending on the industry, certain risks, such as EDPM, might be more or less pronounced due to a company's processes and systems. In addition, external events which are out of a company's control can cause some risks, such as DPA (damage to physical assets). Therefore, it is critical for companies to continually assess their operational risk profile in order to manage it effectively.
Heavy-tailed
Operational loss distributions are heavy-tailed. This means that an organization may experience an occasional very large loss even as it experiences numerous smaller losses on a regular basis. As such, traditional statistical models used to predict operational risk may not be effective because they fail to accurately account for these extreme events. To address this issue, organizations should use alternative methods, such as stress testing or scenario analysis, to examine the potential impact of such events and plan accordingly.
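As a rough illustration of why heavy tails defeat simple averages, the sketch below simulates annual operational losses under an assumed Poisson frequency and lognormal severity and compares the mean annual loss to the 99.9% quantile; all parameters are hypothetical and chosen only for illustration.

```python
# Minimal sketch: Monte Carlo simulation of annual operational losses with a
# heavy-tailed (lognormal) severity. Frequency and severity parameters are
# hypothetical assumptions chosen only to illustrate the mean-vs-tail gap.
import numpy as np

rng = np.random.default_rng(42)

n_years = 100_000          # simulated years
freq_lambda = 20           # average number of loss events per year (assumption)
mu, sigma = 10.0, 2.0      # lognormal severity parameters (assumption)

annual_losses = np.empty(n_years)
for i in range(n_years):
    n_events = rng.poisson(freq_lambda)
    annual_losses[i] = rng.lognormal(mu, sigma, n_events).sum()

mean_loss = annual_losses.mean()
q999 = np.quantile(annual_losses, 0.999)   # a high quantile, as used in op-risk capital models

print(f"Mean annual loss      : {mean_loss:,.0f}")
print(f"99.9% quantile (tail) : {q999:,.0f}")
print(f"Tail / mean ratio     : {q999 / mean_loss:.1f}x")
```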
Interconnected
A key feature of operational risk is the interconnectedness between different types of risks. For example, certain control weaknesses or human errors related to IT systems could lead to exposure across multiple departments and functional areas within an organization. Similarly, external economic, political, and environmental factors can also have a significant impact on operations due to these interdependencies.
With industry developments, operational risks have taken on many different forms, ranging from terrorist attacks and natural disasters to rogue trading and cyber-attacks. Operating in such an environment requires firms to be operationally resilient.
Operational resilience is the ability of a business to proactively prepare for, respond to, and recover from disruptive events. A resilient business is well prepared and can, thus, withstand unprecedented events and crises. The five main components of operational resilience are continuity of business services, important business services, impact tolerance levels, management of disruption, and lessons learned.
Continuity of business services refers to the capacity of a company to sustain essential operations and functions in the face of an unexpected event or crisis. This means that an organization should plan how it will sustain service delivery during such disruptive periods. In
addition, a company should have mechanisms for monitoring performance and evaluating recovery efforts.
Important business services are services that are fundamental to the success of an organization. Such services include both physical assets and intangible resources (such as skills). Organizations should be aware of all these important services so they can properly plan how they will manage and protect them during a disruption.
Impact tolerance levels refer to the degree of adverse impact a business can tolerate before its operations are materially disrupted. This is determined by assessing both internal capacities (such as people, processes, and systems) and external constraints. Organizations should be able to determine their own unique set of tolerable impacts in order to plan appropriately for disruptions.
Management of disruption encompasses all activities aimed at minimizing any potential negative consequences occasioned by an unplanned event or crisis. This includes developing strategies that quickly restore normal operations while also ensuring minimal interruption in service delivery during such times. It also requires having strong communication plans in place so that key stakeholders are kept informed throughout an incident.
Lessons learned involve documenting any insights gained from managing a particular incident or crisis so as to avert similar occurrences in the future. Companies should create comprehensive reports outlining what went right and wrong in their response to previous disruptions. That way, they can apply these lessons when planning future responses and mitigation strategies.
US Regulation
The Federal Reserve (Fed) has issued its Sound Practices to Strengthen Operational Resilience. The guidance emphasizes the need for organizations to have a holistic enterprise risk management framework. This encompasses important business services, impact tolerance levels, and the management of disruptions.
The Fed's guidance stresses that organizations should have a comprehensive understanding of their critical operations and functions, including their people, processes, technology, and data. Thanks to
this understanding, organizations can develop strategies to maintain operations despite potential disruptions. Organizations should consider the areas that may be at risk due to external factors such as cybersecurity threats or natural disasters. They should then develop plans to respond to these risks. Besides, they should identify key resources that are necessary for continued service delivery and ensure their availability.
1. Governance: Banks should utilize their existing governance structures to establish, oversee, and implement their approach to operational resilience.
2. Operational risk management: Banks should utilize their expertise in operational risk management.
3. Business continuity planning and testing: Organizations should have business continuity plans in place and test them against severe but plausible scenarios.
4. Mapping interconnections and interdependencies: This involves mapping the relationships and dependencies that must exist between internal and external entities in order to deliver critical operations.
5. Third-party dependency management: This entails managing the reliance on external or internal entities.
6. Incident management: This involves creating and implementing procedures for response and recovery. Such procedures are instrumental in handling incidents that could potentially disrupt the delivery of critical operations.
7. ICT, including cybersecurity: This ensures dependable information and communications technology.
The three primary regulatory regimes with policies on operational resilience are those of the UK, the US, and the BCBS.
Practice Question
XYZ Bank is planning to improve its operational resilience framework. As a risk manager,
you have been tasked with identifying the most critical aspects to prioritize in order to
achieve greater operational resilience. Which of the following should be given the
highest priority?

A. Increasing cybersecurity defenses.

B. Enhancing backup and recovery procedures.

C. Identifying and protecting critical business services.

D. Implementing a more robust business continuity plan.

The correct answer is C.
While all of the listed aspects are important for improving operational resilience, identifying and protecting critical business services (CBS) should be the highest priority. Identifying and protecting critical business services involves understanding the most important functions and services that the bank provides, assessing their vulnerabilities and potential impact on the bank's operations, and prioritizing their protection. By identifying and protecting CBS, the bank can ensure that the most crucial aspects of its operations remain available during disruptions.
The other options, such as increasing cybersecurity defenses (A), enhancing backup and recovery procedures (B), and implementing a more robust business continuity plan (D), are all important components of an operational resilience framework. However, they are secondary priorities and are most effective when they support the protection and continuity of critical business services.
Reading 100: Risk Governance
After completing this reading, you should be able to:
Explain the Basel regulatory expectations for operational risk governance.
Describe and compare the roles of different committees and the board of directors in operational risk governance.
Describe the "three lines of defense" model for operational risk governance and compare the roles and responsibilities of each line.
Explain the best practices and regulatory expectations for developing a risk appetite for operational risk.
In June 2004, the Basel Committee published Basel II, which introduced the first regulatory requirements for operational risk. This review introduced three regulatory pillars, broadening the scope of prudential supervision beyond minimum capital requirements.
Pillar 1: Minimum Capital Requirements
This pillar involves calculating the minimum level of capital banks require to cover the risk of unexpected losses from credit, market, and operational risks, and the minimum capital ratios they are required to maintain.
Pillar 2: Supervisory Review Process
Pillar 2 capital requirements can include additional capital requirements ("add-ons") depending on a supervisory assessment of a bank's risk profile.
Pillar 3: Market Discipline
Pillar 3 requires that financial institutions disclose their quarterly or yearly financial and risk
information.
After it became clear that regulatory capital was insufficient to cover operational losses, the Basel Committee introduced its principles for the sound management of operational risk in 2003. These principles were revised in 2011 to include lessons learned from the 2007-2009 financial crisis. In March 2021, a further revised version of the principles was published, increasing their number from 11 to 12.
Table 1.1: BCBS Revisions to the Principles for the Sound Management of Operational Risk
Principles:
4. Risk appetite and tolerance statement for operational risk, which the board should approve and periodically review.
5. Senior management's role in the development and implementation of ORM policies and systems.
10. Robust information and communication technology (ICT) management program, in line with the ORMF.
After the 2007-2009 financial crisis, Basel II was partially reformed; however, the operational risk rules remained unchanged. The Basel Committee initiated an operational risk capital reform in 2015. As a result, Basel III was updated in December 2017, discontinuing the three-tier regulatory capital regime for operational risk. The Standardized Measurement Approach (SMA), later renamed the Standardized Approach (SA), is the new method, effective from January 2023 and in use up to January 2025.
BCBS greatly influences the operations of major regulatory bodies across the globe. Regulated institutions are advised to constantly refer to the publications issued by their local regulators. This will enable them to meet their regulatory requirements and gain guidance on operational risk management.
Identifying and managing all material risks in line with the firm's risk appetite, and ensuring an adequate system of internal controls, are central regulatory expectations.
Supervisors are expected to frequently assess the ORM frameworks of banks. If the assessment reveals any weaknesses, supervisors should take the necessary measures to have them addressed. In addition, supervisors should support banks' efforts by monitoring, comparing, and evaluating their performance.
Regulators expect risk management to be a practical exercise and an integral part of all activities. To put it another way, risk management is fundamental to every business decision, and staff should be involved at all levels of decision-making.
Regulators and auditors should ask banks to show how they reach their decisions and examine the evidence supporting them.
To examine whether an ORM framework is being implemented in a firm, the following questions
should be asked:
Is there evidence that all material events are captured in event reports? Do reports
provide lessons and root-cause analysis? Does this include near misses?
Is the basis for risk and control assessments robust and consistent? Are the right people
involved? Are the assessments challenged and peer-reviewed to ensure consistency across
the organization?
Does the value of each risk indicator come from an independent source? Do line managers
(the risk owners) approve of the indicators as being the best? How often are they
refreshed?
Scenarios: Are they comprehensive? Do they remain realistic while being sufficiently severe?
Risk reporting: Are the presented data sufficient for decision-making? Does the reporting reach the right audience in a timely manner?
Firms are expected to document and report all the activities as evidence for using an operational risk
management framework. In other words, a firm should be able to provide evidence that the practice
takes place. Therefore, all firm committees and management should keep a record of their discussions and decisions.
To avoid regulatory compliance fines, firms should read and understand all consultation papers and policy documents to ensure that they meet regulatory expectations. Besides, staff should have sufficient knowledge of the documents that are material to their roles. Indeed, they should be asked to confirm each year that they fully understand the material in their possession.
Whenever there is a new regulatory expectation, a firm should have a team that reviews such
regulations and presents them to the staff during their next meeting.
Firms should integrate the operational risk governance function into their overall risk management governance structure. To achieve effective risk governance, a firm should establish strong internal controls marked by clearly designated roles and responsibilities.
A company's operational risk is managed through several committees. T hese committees make
collegial decisions based on information from different levels of the firm's decision-making hierarchy.
The type of business operations (i.e., corporate banking, investment banking, or support services) or geographic locations (such as countries or regions) determine the lowest tier of the operational risk committee setup. This level of the risk committee oversees operational risk in its respective area and escalates information to help build an accurate overview of the overall operational risk profile. In addition, any issues that arise above predetermined limits will be reported to a firmwide operational risk committee.
It is important to note that each committee has a distinct purpose and must work within a specific set of constraints. For example, the corporate banking committee evaluates potential risks arising from activities in its sector. On the other hand, the investment banking committee assesses investment-related risks associated with its domain. Similarly, a country-level committee must gauge potential risks from operations across a single nation, while regional committees consider risks across several countries.
The operational risk committee is entrusted with the important responsibility of overseeing, managing, and monitoring operational risks. It presents a comprehensive and consolidated view of all operational risks to the executive risk management and board risk committees. The committee analyzes and identifies any potential operational risk issues or threats, creates strategies to control and mitigate these risks, and implements plans to monitor relevant risk indicators. It may also be responsible for developing procedures to ensure that all operational activities are conducted in accordance with applicable regulations and internal policies. Furthermore, it must provide regular reports to the executive risk management committee. These reports should include an assessment of current risk levels and the effectiveness of existing
controls.
The board-created enterprise-level risk committee (board risk committee) oversees all operational risks. The committee is vital in ensuring that all potential risks are identified and managed appropriately, helping to identify any risks or deficiencies before they morph into larger issues. Additionally, this committee works in close cooperation with senior executives across various departments to further enhance an organization's overall control environment. Its efforts help ensure that operations are conducted safely and efficiently while mitigating any financial losses from emerging risks.
The board risk committee makes recommendations to the full board with regard to risk-based decisions.
The board of directors is mandated to approve and periodically review the operational risk management framework. The board should oversee senior management to ensure that policies, processes, and systems are implemented effectively at all decision levels. Among other things, the board should:
Establish a culture and processes that help everyone – including board members, managers, and employees – understand and manage operational risk.
Regularly review the ORM framework to ensure that it considers emerging/evolving risks.
Review and approve the operational risk management policies that senior management develops.
Ensure that a bank has identified and is managing operational risks arising from external market changes and other environmental factors. The board does this by regularly engaging with senior management and risk personnel.
Ensure that management follows the evolution of industry best practices and avails itself of these changes.
The Basel Committee defines operational risk as "the risk of loss resulting from inadequate or failed internal processes, people, and systems or from external events." It includes legal risk but excludes strategic and reputational risk. Many programs that manage risks in banks take effective management of operational risk as a fundamental element that is inherent in all banking products, systems, activities, and processes. Therefore, sound operational risk management reflects the board's and senior management's effectiveness in administering the bank's portfolio of products, activities, processes, and systems.
In modern banking, banks have established several business lines that work with some level of independence while all working towards a set of institution-wide goals. Each business line faces its own operational risks and is responsible and accountable for assessing, controlling, and mitigating them.
Front-line risk management involves all commercial and front-office operational functions or, simply, business functions. Their responsibilities include:
Keeping track of the operational risk profiles of business units and reporting them.
Roles and Responsibilities of the Risk Champions and "Line 1.5"
ORM is decentralized by nature, i.e., everyone can take part in managing operational risk. Nevertheless, not everyone in a firm has the capacity to develop a deep understanding of risk management. As a result, firms appoint "risk specialists" or "risk champions" within each business unit. Risk specialists are also known as line "1.5" or "1.b". The following are the roles of risk champions:
Keeping track of risk events and losses through gathering and recording data.
Identifying risks and controls in accordance with group definitions (where applicable).
Making follow-ups on the implementation of control rules, risk management action plans, and recommendations.
The second line of defense is a functionally independent corporate operational risk function (CORF) involved in policy setting and the provision of assurance over first-line activities. The CORF generally complements the business lines' operational risk management activities. Its responsibilities include:
Developing operational risk policies, standards, and guidelines, as well as designing and delivering operational risk training to staff.
Establishing an independent view of the business units' risk management activity, including the identification of material operational risks, the design and effectiveness of key operational risk management tools, measurement activities, and reporting systems and
providing evidence that the approach is effective.
Reviewing and taking part in the monitoring and reporting of the operational risk profile.
Although the CORF enjoys some level of independence in all banks, the actual degree of independence varies among banks. The CORF function in small banks often achieves independence through the separation of duties and independent review of processes and functions. The CORF should ensure that all decisions and actions taken align with a bank's risk measurement and reporting framework. To ensure that the CORF is effective in its work, it should have enough skilled and experienced personnel.
The third line of defense consists of a bank's audit function, which performs independent oversight of the first two lines. Anyone involved in the auditing process must not be a participant in the process under review. An external party can also conduct the review. The independent review team usually reports directly to the audit committee (a committee of members of the board of directors).
According to the Institute of Internal Auditors (IIA, 2017), internal audit should interact with the risk management, compliance, and finance functions, but performing these functions should not be the responsibility of, or a part of, internal audit.
Internal audit should assess the effectiveness and adequacy of the risk management, compliance, and finance functions. A company's internal audit should never rely solely on these functions' own assessments of internal controls; the internal audit itself should always assess a sample of the activities under review.
As part of its risk assessment, internal audit should make informed decisions regarding the areas and activities to prioritize for review.
The board is responsible for determining the nature and extent of its risk appetite and internal control systems.
Defining a risk appetite implies assessing a firm's key risks, developing limits within which the risks are acceptable, and establishing the required controls. Board directors should ensure that risk appetite and risk tolerance are defined consistently to drive the priorities of the organization.
According to the 4th principle of operational risk management, the board must identify the types and levels of operational risks a bank is willing to assume. In addition, the board should approve risk appetite and tolerance statements that, among other things:
2. Provide the assumptions and information a bank uses to prepare its business plan.
4. Ensure risk limits align with the bank-wide risk appetite statement.
Consider all risks when approving a bank's risk appetite and tolerance statements, which provide details on risk limits and thresholds. In addition, the board should consider a bank's strategic direction.
Regularly review the appropriateness of a bank's risk appetite and tolerance statements. During the review process, some of the factors that the bank should consider include changes in risk management or mitigation strategies, loss experience, and the frequency, volume, or nature of risk limit breaches.
Regulatory guidance requires that risk appetite and risk tolerance statements be in line with the
organization's operations.
The board of directors is responsible for owning and validating the risk limits. The board usually delegates the day-to-day setting and monitoring of these limits to senior management.
According to the Basel Committee on Banking Supervision (BCBS), risk appetite should include the
reasons for taking or avoiding certain types of risks. The firm has to take risks to meet its objectives, but avoiding risk can also cost the firm. In this regard, the risk-return tradeoff must be addressed in
the risk appetite statements. Risk appetite should be consistent with a firm's objectives and the
firm's risk management strategy. Such a well-articulated risk appetite that is strategically aligned with
a firm's objectives can be used as a guideline for making important business decisions.
To demonstrate their risk appetite and tolerance for disruptions, firms must set maximum impact
tolerances for critical business services. Also, in order for risk appetite and tolerance statements to
be credible and actionable, they must refer to consistent key controls and systems of control.
As good practice, a risk owner should be assigned to each risk type, while control owners design, implement, and evaluate controls. Metrics owners collect, report, and monitor the metrics that measure an organization's risk appetite. Risk owners are managers who manage, maintain, and monitor risk within defined appetite and tolerance limits.
Risk Culture
According to the 1st principle of operational risk management, a bank should maintain a strong risk management culture spearheaded by the bank's board of directors and senior managers. A bank should strive to propagate a culture of operational risk resilience in which everyone understands the risks they face and their role in managing them.
The board of directors and senior management play a starring role in any operational risk management framework. With respect to Principle 1, the board of directors and/or senior management should:
Provide a sound foundation for a strong risk management culture within a bank. With a strong risk management culture and ethical business practices, a bank is less likely to experience damaging operational risk events; were it to experience such an event, it would be better placed to deal effectively with the outcome.
Establish a code of conduct (or ethics policy) for all employees that outlines expectations for ethical behavior. The code of conduct should identify acceptable business practices and prohibited conflicts of interest.
Provide risk training throughout all levels of a bank. Training should consider the level, experience, and role of the staff being trained.
Banks with a strong risk culture are less likely to be affected by damaging operational risk events and are better positioned to deal with such events when they occur.
The board of directors must push for the implementation of a strong risk culture by senior management. The directors and senior management promote their organization's risk culture through their own conduct and by spelling out expectations and consequences for employee conduct. After all, employees more readily emulate what they see than what they are told.
It is easy to implement an effective risk appetite framework where there is already a strong risk
culture. Success on the risk appetite journey is extremely difficult without a strong risk culture.
To promote a strong risk culture, a firm must have well-documented policies and codes that apply to everyone in the firm. Creating awareness of a firm's policies and rules among employees is equally important.
Firms should also organize training and compensation structures to reinforce the codes of conduct and, as such, promote a strong risk culture. Educating all participants about the operational risks embedded in activities and processes is another critical component of creating a sound risk culture.
Practice Question
XYZ Bank has recently adopted the "three lines of defense" model for operational risk
governance. As a risk manager, you are asked to review a specific business unit's
adherence to the model. Which of the following scenarios correctly reflects the roles of the three lines of defense?

C. The front office identifies and manages risks, the risk management function oversees risk management and provides guidance and support, and internal audit provides independent assurance.

The correct answer is C.
The "three lines of defense" model for operational risk governance consists of three levels:
i. First Line of Defense: The front office (business units) is responsible for identifying, managing, and mitigating risks in their day-to-day activities. They own the risks and implement risk management policies.
ii. Second Line of Defense: The risk management function oversees the risk management framework, develops and maintains risk management policies, and provides guidance and support to the first line of defense.
iii. Third Line of Defense: The internal audit function performs independent assessments of the effectiveness of risk management and internal control systems. It provides assurance to senior management and the board of directors.
Reading 101: Risk Identification
After completing this reading, you should be able to:
Compare different top-down and bottom-up approaches and tools for identifying operational
risks.
Describe best practices in the process of scenario analysis for operational risk.
Describe and apply an operational risk taxonomy and give examples of different taxonomies
of operational risks.
Describe and apply the Level 1, 2, and 3 categories in the Basel operational risk taxonomy.
Ignored risks can pose serious challenges to a company. Therefore, the comprehensiveness of the
risk identification exercise will determine how well-prepared an organization is for unfavorable
events.
Risk identification is the first of the four crucial components of the risk management framework.
The other three are risk assessment, risk mitigation, and risk monitoring.
A company's board/executive sets the initial top-down criteria for the risk identification process. From there, risk identification cascades down to departments, business units, and specific business processes. In addition, a company uses a bottom-up risk identification method, in which different businesses and functions identify the risks arising from their day-to-day activities.
Top-down risk identification aims to isolate the most significant corporate risks that could
compromise strategic goals. The approach employed in top-down risk identification is risk ranking
and prioritization.
An analysis of the potential consequences of risks and projected effects on revenues should support
the planning of top-down risk identification brainstorming sessions with senior risk owners.
The frequency of risk identification exercises at the executive level can be between two and four times annually, based on a business' growth and development as well as the magnitude and rate of change in its environment.
When carefully carried out with participation from senior management, top-down risk identification is a useful tool for identifying the most important risks, boosting performance, and averting major losses.
Bottom-up risk identification is the process carried out at the local company level, in a department, business unit, or process.
The location inside the organization where risk identification takes place is the primary distinction between top-down and bottom-up risk identification methods. Senior leadership is involved in the top-down process of identifying risks, whereas staff members—including managers and non-managers—carry out the bottom-up process.
Every financial institution has an inherent business risk vulnerability. Among others, risk vulnerabilities may stem from major sources of revenue, key customers, key persons, and regulators. Major risk exposures for a business typically include important third parties and large company projects. Should a failure occur in one of these areas, the exposure magnifies the effects of the risk.
Vulnerabilities are the weakest points in business operations. Resistance to risk management, overdue resolution of issues, and outdated processes increase business operation vulnerabilities. A combination of vulnerabilities and potentially significant exposures can lead to huge losses or even threaten the firm's survival.
The key benefit of using a list of exposures and vulnerabilities as a brainstorming technique for risk identification is that it focuses attention on the areas where failures would be most damaging.
Risk Wheel
A risk wheel is a brainstorming tool used in risk identification workshops to promote original thinking and the generation of ideas.
Natural disasters such as floods can have a negative effect on the supply chain and, in turn, affect business continuity. For instance, a power outage in a city where a fintech company's data center is located can cause a business disruption for several days. Long-term business interruption negatively affects a firm's reputation and finances.
When risk management prioritizes mitigating risks that can have adverse effects on other risks, it is far more effective in preventing a "domino effect." The key to effective risk management is treating connected risks before they cascade.
Emerging Risk Identification: Horizon Scanning
We can distinguish between known emerging risks that have already been observed in the past, such as employee well-being, and other nascent emerging risks that are only now starting to manifest on the horizon.
Every year, several studies, polls, and publications, such as “The Global Risks Report” by the World
Economic Forum, gather opinions from risk and business professionals on the risks they believe will
dominate the agenda in the coming year. Even then, it is unlikely that all these risks will be relevant
to a firm. For this reason, risk identification must be applicable to an organization's unique
circumstances.
A company should concentrate its monitoring work on changes that affect its mission and approach, evolutions that could affect its long-term objectives, and new environmental changes that may affect business exposure or value drivers, as well as threats that should be considered in scenario analysis and emergency preparation.
Some firms draw on human expertise by creating "emerging risk committees" consisting of specialists from every area and region the firm operates in. Members of the committee gather periodically to review and assess emerging threats.
Event and Loss Data Analysis: Internal Losses, External Losses, and
Near Misses
To detect operational risks, historical events at a company or among its peers are a natural source of information.
Internal losses can be used to gauge how concentrated an organization's operational risk is. Internal
losses that keep happening could indicate a problem with the internal controls. Alternatively, such
losses could represent the degree to which a company is exposed to operational risk. It is, therefore,
imperative for a company to plan for these losses and factor them into the cost of financial products
or services.
External losses, i.e., operational loss events experienced by other firms, are typical data sources for identifying and assessing risks. A recommended approach is keeping an eye on all significant events that peers report and
logically considering whether the same fate could befall your organization. If yes, your organization
should assess its system of risk-mitigating procedures and controls to lessen the possibility of
suffering losses.
Near misses are occurrences that might have led to a loss but did not as a result of good fortune or
action outside an organization's control. An example is sending funds to a wrong recipient but having
the transaction reversed before you incur any loss. Near misses reveal flaws in the control system
and show what might occur. The majority of businesses report near misses, which are then used as lessons to strengthen controls and prevent actual losses.
Risk and Control Self-assessment (RCSA)
The process through which an organization or business line assesses the possibility and impact of its operational risks is known as a risk and control self-assessment (RCSA) exercise. RCSAs result in a self-evaluation of a business unit's primary inherent risks, the key controls reducing those risks, and the resulting residual risks.
RCSAs are often used as risk identification exercises where business units come together in
workshop meetings to discuss their concerns, and combine risk identification and assessment phases.
Selection of the appropriate participants significantly influences the quality of results in RCSA
sessions. When conducting risk interviews, it is advisable to include two sorts of employees: those who have worked for the company for a long time and are familiar with its procedures and culture, and newer employees who can bring a fresh, unbiased perspective.
In mature firms, RCSA exercises are often conducted yearly and revised upon any major and unexpected change in the business or its environment.
Process Mapping
Process mapping is a method of considering the risks associated with a set of activities. It involves
outlining the tasks of a process step-by-step and asking what could go wrong in each phase. It can also
be done by evaluating controls and determining the risks they are meant to address.
In order to use process mapping as a technique for risk identification, one must first identify the risk
that each control mitigates and then consider what could go wrong at each stage of the process.
The first phase in scenario analysis, a crucial part of operational risk and capital evaluation, is scenario/stress-test identification. The scenarios that are evaluated in risk management often depict
infrequent but extremely destructive catastrophes for companies, such as natural disasters and large
cyberattacks.
The Basel Committee defines scenario analysis as "a process to find, assess, and quantify a range of possibilities, including low-probability and high-severity events, some of which could result in severe operational risk losses."
The Basel Committee recommends that banks develop forward-looking business continuity plans (BCPs) with scenario analyses linked to appropriate impact assessments and recovery procedures.
Every scenario should go through a business impact analysis (BIA), also known as a business effect
assessment, which weighs the financial, operational, legal, and reputational ramifications.
Recovery time objectives (RT O) and recovery point objectives (RPO) should be established as part
of continuity procedures. Any third parties whom the real scenario would impact should be informed
of the guidelines.
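To illustrate how impact tolerances, RTOs, and RPOs might be recorded and checked against each other, here is a minimal sketch; the service names, field names, and figures are hypothetical assumptions rather than anything prescribed by the reading.

```python
# Minimal sketch: record recovery objectives per important business service and
# flag any service whose recovery time objective (RTO) exceeds the impact
# tolerance that has been set. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class ServiceContinuity:
    name: str
    impact_tolerance_hours: float  # maximum tolerable disruption
    rto_hours: float               # target time to restore the service
    rpo_hours: float               # maximum tolerable data-loss window

services = [
    ServiceContinuity("Retail payments", impact_tolerance_hours=4, rto_hours=2, rpo_hours=0.25),
    ServiceContinuity("Loan origination", impact_tolerance_hours=24, rto_hours=36, rpo_hours=4),
]

for s in services:
    status = "OK" if s.rto_hours <= s.impact_tolerance_hours else "BREACH"
    print(f"{s.name:20s} RTO={s.rto_hours}h  tolerance={s.impact_tolerance_hours}h  -> {status}")
```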
Consistency of approach and the reduction of behavioral biases are two major difficulties in scenario analysis. To meet regulatory requirements, a scenario analysis must produce reproducible quantitative and qualitative outcomes. The rationale for each scenario must be articulated. In addition, assumptions must be based on empirical evidence. Finally, the procedure for generating and assessing scenarios must be documented and repeatable.
The preparation phase also entails assembling a "prepared pack" of documents that will be used to select and evaluate scenarios. These documents include internal loss data, external loss data, RCSA
results, key risk indicator scores, audit issues and logbooks recording other issues, concentrated
exposures, and any other relevant documents for risk and exposure assessment.
Senior managers from various corporate and business functions who have knowledge of the risks in
their field should participate in scenario analysis workshops and brainstorming sessions. Though not
a requirement, external experts may also be included in these sessions in order to mitigate biases
such as myopia (the overestimation of recent events) and an overreliance on scenarios with external
causes.
The generating phase, which comes first in a scenario analysis workshop, aims to create a lengthy list of scenarios. These scenarios are then taken into consideration for inclusion on a short list of scenarios to be assessed in detail.
Scenario selection is an intermediary stage in which certain scenarios are combined, some are removed, and others are added to create a meaningful list that can be adequately evaluated. To devote more time and energy to the review of larger and more pertinent situations, scenarios that appear to have little relevance or potential impact are set aside.
To establish whether it has left out any important scenarios or risk factors, a firm may compare the
scenarios it has created with a list of scenarios for the industry. This should be done after the
scenario generation exercise to avoid the externally obtained scenarios biasing the scenario
generation process.
According to the Basel Committee, operational risk is the risk of loss originating from inadequate or failed internal processes, people, and systems or from external events.
Cause, event, and impact are the three components of risk, according to the International Organization for Standardization (ISO). Each risk is defined as a particular combination of a single cause, a single event, and a single impact. Example: “Losses from a cyberattack (impact) resulting from phishing (event) caused by delays in educating staff on how to prevent phishing (cause).”
The more precisely a risk is described, the simpler it is to connect it to its causes and effects and to identify the appropriate mitigating actions.
Technology-related risks are described as potential occurrences brought about by malfunctions such
as system outages. Besides, manual processing is not a risk but a factor that raises the possibility of
other risks. Risk assessment must consider the underlying causes of operational risk since
processes that rely on technology may experience losses more frequently than manual processes.
Compliance with regulations needs to be a priority. The risk is not compliance per se but a breach of compliance. Regulation increases the potential for losses arising from compliance violations: large penalties and fines may be imposed on a company that fails to follow regulations.
Risks should be defined explicitly and concretely in order to recognize and identify them. The more accurately a potential adverse event is described, the easier it will be to estimate the likelihood of its occurrence and to identify the appropriate mitigating steps. The best practice in the financial sector is to separate and categorize the various components of uncertainty, such as cause, risk, impact, and control.
Taxonomies are a structured way of expressing causes, risks, impacts, and controls in a consistent classification. There are three levels in the official taxonomy of operational risks for banking institutions. While level 3 provides activity examples and illustrations, the Basel Committee only formally acknowledges the first two levels.
Event-type Category (Level 1): Internal fraud
Categories (Level 2): Unauthorized activities
Activity Examples (Level 3): Transactions not reported (intentional); transaction type unauthorized (with monetary loss); mismarking of position (intentional)
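As a rough sketch of how such a three-level taxonomy can be represented for loss-data tagging or reporting, the example below encodes the single internal-fraud row shown above as a nested mapping; the structure and helper function are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch: the Basel three-level event taxonomy as a nested mapping,
# populated with the single "Internal fraud" row shown above. The layout is an
# illustrative assumption; firms typically maintain this in a reference database.
basel_taxonomy = {
    "Internal fraud": {                      # Level 1: event-type category
        "Unauthorized activities": [         # Level 2: category
            "Transactions not reported (intentional)",             # Level 3: activity examples
            "Transaction type unauthorized (with monetary loss)",
            "Mismarking of position (intentional)",
        ],
    },
}

def classify(level1: str, level2: str) -> list[str]:
    """Return the level 3 activity examples for a level 1 / level 2 pair."""
    return basel_taxonomy.get(level1, {}).get(level2, [])

print(classify("Internal fraud", "Unauthorized activities"))
```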
Many businesses employ a different operational risk taxonomy that is more tailored to their
individual exposures and is dependent on the type, scope, and location of their operations.
An updated operational risk taxonomy was deemed necessary as a result of global financial sector trends such as technological breakthroughs, digitization, the increase in cybercrime, and the globalization of financial institutions with foreign operations. The Operational Risk Data Exchange (ORX), an industry association that pools operational loss data from its member firms, developed such a taxonomy.
The ORX taxonomy offers some observable alterations but does not considerably depart from the Basel categories. Compared to Basel, it has 14 level 1 risk types, and some level 2 risks have been upgraded to level 1.
A large proportion of businesses have level 2 risks that are better categorized as control failures rather than as risk events. Over the past ten years, the way risks are handled has evolved, and while some businesses may classify a given exposure as a risk, others may treat it as a cause or an impact.
Arguably, the lack of a universal reference is also to blame for the diversity of risk taxonomies used in the industry.
Clarifying and categorizing operational risks, causes, impacts, and controls is critical to the development of an operational risk inventory and the subsequent assessment of such risks. In addition, it supports consistent reporting and comparison across the firm.
Causes Taxonomy
Basel defines the taxonomy of causes as the possibility of financial loss occasioned by internal
procedures, personnel, and systems that are insufficient or ineffective or by external developments.
People, processes, systems, and external events are the natural level 1 causes of operational risk
events.
The Basel Committee defines operational risk as the risk of loss, both direct and indirect. Financial loss, reputational loss, regulatory non-compliance, and customer detriment are four common categories of impact.
The ORX has divided ‘PPSE’ level 1 causes into people, processes, systems, and external events. At level 2, these categories are divided into competence, performance and ethics, systems design, performance and testing, process design, and governance failures. External events form their own set of level 2 categories.
Impact Taxonomy
The impact categories list the types of impact a company is exposed to as a result of operational risk events. The taxonomy separates impact effects, such as disruption, reputational damage, and regulatory enforcement, and identifies the types of stakeholders affected. The stakeholders could be customers, employees, or even third parties.
Control Categories
The four primary types of controls are preventative, corrective, detective, and directive controls.
Preventative controls: These are used to lessen the possibility that risks will materialize.
Corrective controls: These aim to limit and repair the damage once an incident has materialized.
Detective controls: These seek to identify incidents when they occur so that their effects can be minimized.
Directive controls: These include the predetermined rules, practices, and training that guide staff toward the desired behavior.
Full taxonomies for an industry sector can lead to a wide variety of risks that can have a number of effects. It is preferable for an organization to adopt a subset of these sector-wide taxonomies for its own risk inventory, often known as a risk universe. The inventory is a list of each of these risks, and it may be searched, sorted, and used in many other ways to help a business understand the risks it faces.
Risk universe: This is a list of all the risks a company believes it faces.
Emerging risks: These are threats that a company has recognized as impending and currently modest, but growing, and that could potentially have a big impact in the future.
Top-ten risks: These are the greatest risks to a company in terms of likelihood and effect combinations.
Stress or shock scenarios: Even if they are highly unlikely, these events might have a severe impact on the firm.
Risk register: A risk register is the principal repository for operational risks in financial institutions. It contains all operational risks an institution faces, the controls put in place to address each risk, and an assessment of the risk's likelihood and potential impact.
When it comes to maintaining and updating risk registers, it is normal to do so once every quarter or once a year for scenario identification and analysis, and once every three months for emerging risk reviews.
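The following sketch shows one way a risk register entry might be structured and sorted to surface the top risks by a likelihood-times-impact score; the fields, scales, and sample entries are hypothetical assumptions for illustration only.

```python
# Minimal sketch of a risk register: each entry holds the risk, its key control,
# and a 1-5 likelihood/impact assessment. Entries and scales are hypothetical.
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    risk: str
    level1_category: str
    key_control: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (minor) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskRegisterEntry("Payment sent to wrong beneficiary", "EDPM",
                      "Four-eyes check on manual payments", likelihood=4, impact=2),
    RiskRegisterEntry("Ransomware attack on core systems", "External fraud",
                      "Patch management and offline backups", likelihood=2, impact=5),
]

# Surface the highest-scoring risks first, e.g., for a top-ten risk report.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:2d}  {entry.risk}  [{entry.level1_category}]")
```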
Question
Bank XYZ is in the process of reviewing its operational risk management framework. As part of the review, the risk team is comparing top-down and bottom-up approaches for identifying operational risks. Which of the following correctly describes the main difference between top-down and bottom-up approaches?

C. Top-down approaches identify and prioritize risks at the organizational level from a strategic perspective, whereas bottom-up approaches identify risks at a granular level within individual business processes and activities.

The correct answer is C.
Top-down and bottom-up approaches are two distinct methods for identifying operational risks. Top-down approaches involve senior management identifying risks from a strategic perspective. They involve the identification and prioritization of risks at the organizational level, often incorporating the organization's risk appetite and strategic objectives.
Bottom-up approaches, on the other hand, involve employees identifying risks within their own business units and processes. Risks are identified at a more granular level, often incorporating the knowledge and expertise of employees who are
closest to the processes and activities that may give rise to operational risks. Examples
of bottom-up tools include incident reporting, internal loss data collection, and scenario
analysis.
Reading 102: Risk Measurement and Assessment
After completing this reading, you should be able to:
Explain best practices for the collection of operational loss data and the reporting of operational loss incidents.
Explain operational risk-assessment processes and tools, including risk control self-assessments (RCSAs).
Describe the differences among key risk indicators (KRIs), key performance indicators (KPIs), and key control indicators (KCIs).
Describe and distinguish between the different quantitative approaches and models used to analyze operational risk.
Estimate operational risk exposures based on the fault tree model given probability assumptions.
Describe approaches used to determine the level of operational risk capital for economic capital purposes.
Describe and explain the steps to ensure a strong level of operational resilience and to test it.
Operational Loss Data (OLD) is essential for successful risk management. OLD is the information
that organizations use to identify, measure, monitor, and control operational risks. Collecting and
analyzing incident loss data provides important insights for scenario identification, risk assessment, and risk mitigation.
Organizations must be able to identify control breaches and weaknesses to prevent further losses
and improve business performance, stability, and profitability. T his is why internal incident data
collection is important. Collecting the right kind of data enables organizations to identify the root
causes of incidents that have occurred in the past, as well as potential incidents that could occur in
the future. By understanding the details of each incident (e.g., processes/control breakdowns),
organizations can design effective strategies to address them and reduce their losses over time.
Furthermore, having a good understanding of incident data helps organizations create more accurate
scenarios for risk assessment purposes. T his allows them to better predict how much capital they
need to reserve for operational risks. Moreover, analyzing internal incident data provides valuable
information regarding which areas require additional controls or improvements. T his helps
In addition to collecting internal incident data, it is also beneficial for organizations to analyze
external loss data from other firms. Doing so provides rich insights into the risk exposure for other
companies. Organizations can use these insights to compare their own operations against those of
their peers or competitors—which helps them identify any areas that need improvement—and then
design appropriate strategies accordingly. Such comparisons are especially useful when it comes to
designing capital models. This is because they allow companies to benchmark against industry
standards and ensure that they hold enough capital reserves relative to their peers—without over-
reserving or under-reserving for risks a company faces (in comparison to its peers).
Beyond this, incident data can also play an important role when it comes to Pillar 2 capital requirements. The collection and analysis of loss data provide valuable insight into operational risk levels at a bank, allowing senior management teams to make informed decisions that help reduce their regulatory capital add-ons under Pillar 2. Ultimately, this allows banks to remain compliant with global regulations while ensuring that they are able to allocate capital where it will have the greatest effect on their business operations. As such, comprehensive incident data
collection and analysis play an important role when it comes to navigating modern financial risk
management guidelines.
Regulatory Requirements Regarding Operational Risk Data
Collection
The incident data collection process is a key component of an effective risk management strategy.
By understanding the details of each incident that occurs in your organization, you can better prepare
for future incidents and make adjustments to minimize losses due to future events. Additionally,
collecting incident data can help you identify any potential areas of improvement or opportunities for
optimization in your current procedures. Lastly, having a complete record of past incidents allows
you to review trends over time and understand the cause-and-effect relationship between different
events.
The Basel Committee recommends that “a bank must have documented procedures and processes for the identification, collection, and treatment of internal loss data” (BCBS, point b).
When collecting incident data, there are several pieces of information that should be included in each incident record. If possible, you should also include information about any corrective actions taken or suggested to prevent recurrence.
Operational risk events differ from those caused by market and credit risks due to their mostly
localized origins and ambiguous character. An operational risk event could be a virus, a bug, or an IT
server crash that causes a disruption in operations. However, these different types of risk events
cannot be labeled the same way. Events, such as a data breach and an account takeover, may seem
similar, but they can have distinct impacts and require different processes to identify and mitigate
them.
While market and credit risks usually follow easily identifiable external conditions, operational
events change more subtly, and their effects are harder to predict. For example, imagine a bug in a
digital banking app that results in delays in payment transfers for clients. Such an occurrence could
have wide-reaching repercussions beyond those immediately present at the time of failure. These delayed payments can lead to customer complaints, demands for compensation from the bank, and negative reviews on social media—all of which damage the reputation of the bank’s services and mean extra costs in terms of management attention and IT resources. The identification and quantification of the impacts of such an event is less straightforward than recording a credit loss on a defaulted loan.
It is virtually universal that companies across all sectors utilize the same set of core data fields
whenever operational incidents are reported. This allows for better internal and external
benchmarking and visibility, helping the business to increase its efficiency on many levels. Although
the inclusion of more data fields can add to a comprehensive understanding of any given incident, it
also poses several risks. Too much information can lead to reporting and analysis overload as well as
excessive use of resources. As such, it is best practice to include only the most essential data points
and avoid overcomplicating the intake process. Furthermore, efforts should be made to consolidate
these fields wherever possible in order to reduce any extra strain on staff or systems.
Core incident data fields typically include the following (a minimal record-structure sketch follows the list):
Unique Incident ID
Place of occurrence (business unit/division)
Event type (level 1, level 2)
Event title and description (as standardized as possible)
Cause type (level 1, level 2)
Controls that failed
Dates of occurrence/discovery/reporting/settlement
Expected direct financial loss (may evolve until closure)
Impact type: loss/gain/near miss
Indirect effects (per type): often on a scale
Recovery (insurance and other recoveries)
Net loss (gross loss minus recovery)
Action plans (when appropriate): measures, owner, time schedule
Link with other incidents (if any)—for grouped losses and reporting
Other comments if necessary
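As an illustration, the sketch below shows how these core fields might be captured as a simple record; the field names are assumptions mirroring the list above, and the net-loss property simply applies the gross-loss-minus-recovery definition.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class IncidentRecord:
    incident_id: str
    business_unit: str
    event_type_l1: str        # Basel level 1 event category
    cause_type_l1: str
    date_occurred: date
    date_discovered: date
    date_reported: date
    gross_loss: float         # expected direct financial loss
    recoveries: float = 0.0   # insurance and other recoveries
    impact_type: str = "loss" # loss / gain / near miss

    @property
    def net_loss(self) -> float:
        # Net loss = gross loss minus recoveries, as defined in the field list above.
        return self.gross_loss - self.recoveries
```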
Comprehensive Data
Banks are expected to collect comprehensive internal loss data, capturing all material activities and exposures across all appropriate subsystems and geographic locations. The Committee has set a minimum threshold for loss reporting at €20,000 (about $22,000). Unfortunately, the Committee does not clearly define what constitutes "material" activities or exposures. Because of this, many firms set their own reporting thresholds at or below €20,000 (about $22,000). This has led to the practice of setting reporting thresholds at zero, either to capture every operational loss or to simplify instructions to business units, which then do not need to estimate a loss before deciding whether to report an incident. While this strategy may have an appeal in its simplicity, it is fading away among large banking institutions because of the sheer number of small incidents that must be reported with little information value gained in return. Instead, most banks and insurance companies prefer a threshold slightly below the regulatory limit.
Regulatory guidelines dictate that firms must report any incidents causing them financial losses. But
from a management perspective, it's also good practice to record the non-financial impacts
associated with any material operational risk events. This includes reputational damage, customer
detriment, disruption of service, and use of management time and attention. While these events may
not seem to have an immediate financial cost associated with them, they can often lead to costly
consequences down the line. Regulatory scrutiny or customer dissatisfaction can both put major
strains on a firm's resources, while remediation plans or increased management attention can be
expensive investments in the future. Ultimately, when these non-financial impacts are ignored and not properly taken into account alongside financial costs, the true cost of operational risk and poor control performance is underestimated.
It is important to remember that both direct and indirect losses must be reported. Direct losses are
the ones incurred immediately after the event: for example, the cost of remediation, any financial
outcomes due to illegal transactions, or compensation to clients. Indirect losses are much trickier to
identify since they stem from further consequences of an operational risk event. These could
include customer attrition, low employee morale and productivity levels, compliance costs resulting
from regulatory scrutiny, and increased insurance premiums following claims. It is important that
organizations set up a reliable system by which they thoroughly capture indirect losses, often by
assigning them a 1-4 impact rating based on the established Impact Assessment Matrix.
Grouped losses are an important concept in operational risk management, as they provide strong
evidence that one root cause can lead to multiple geopolitical, financial, and legal consequences, each
of which must be understood and addressed. Grouped losses are defined as distinct operational risks
connected to a single core event or cause. For example, if an IT failure occurs, impacting various departments in different ways, this sequence of events would likely constitute one grouped loss. Similarly, if the same wrong advice were provided to numerous customers leading to a series of compensation claims, it would also meet the criteria of a grouped loss. Regulators require actuaries and other financial risk analysts to group losses to ensure the real underlying cause is correctly identified and addressed.
Banks are required to report not only the gross loss amounts associated with each operational loss event but also its key reference dates, such as:
Date of accounting: When did the financial impact enter the general ledger?
The time gap between each pair of dates provides interesting insight. For example, the time gap
between occurrence and discovery is indicative of an organization's visibility into issues on its
operations side. On top of that, the difference between discovery and reporting provides insight into
how diligently incidents are reported to the risk function – an important step for mitigating risk
exposure. Regulations do not require a specific date type for internal risk reporting purposes,
leaving organizations to apply their policies. Generally speaking, organizations should consider a risk-
based approach rather than rushing to log immaterial events, which could end up wasting resources.
It's generally accepted that material incidents should be reported within a few working days. On the
other hand, minor incidents can simply be included in periodic summary reporting.
Lags in event materialization can be quite revealing. Many of the large losses that appear on accounts
can be attributed to events or accumulated risk exposures that occurred several months or even
years earlier. Consequently, management should duly consider the implications of this lag when
modeling future operational events.
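As a simple illustration of how these lags can be monitored, the sketch below computes occurrence-to-discovery and discovery-to-reporting lags from incident dates. The incident tuples are hypothetical values used only for the example.

```python
from datetime import date

# Hypothetical incidents: (date occurred, date discovered, date reported).
incidents = [
    (date(2022, 3, 1), date(2022, 6, 15), date(2022, 6, 20)),
    (date(2022, 9, 5), date(2022, 9, 6), date(2022, 9, 30)),
]

for occurred, discovered, reported in incidents:
    detection_lag = (discovered - occurred).days   # visibility into operational issues
    reporting_lag = (reported - discovered).days   # discipline of reporting to the risk function
    print(f"detection lag: {detection_lag} days, reporting lag: {reporting_lag} days")
```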
Given the variety of operational losses that bankers and financial institutions must address, the Basel
Committee has provided helpful guidance on the best way to classify events, particularly with
respect to boundary event reporting. The boundaries it has established are particularly important for events related to credit risk. Such incidents should only be accounted for in the loss dataset if they are not already recorded in credit risk-weighted assets (RWA). Meanwhile, operational losses related to market risk must always be treated as operational risk when calculating regulatory capital.
Boundary events are occurrences that arise in a different risk category compared to their cause. For
example, a credit loss may be aggravated by the incorrect recording of collateral, while a market loss
could arise from an unintentional mistake when booking a position. International regulators have
adopted the pragmatic view of not separating the exact cause of these boundary events. After all,
credit risk models for capital requirements are based on simply capturing past losses in whatever
way they occurred. So, even if operational risk caused those losses, they are still covered under
credit risk and deemed acceptable from a regulatory viewpoint. This approach has allowed financial firms to avoid spending time dissecting each event and instead focus on addressing potential problems.
According to the Basel Committee on Banking Supervision (BCBS), banks must initiate processes to
independently review their loss data collection to ensure its comprehensiveness and accuracy.
While a good starting point may be to simply track losses associated with daily operations, other
sources should also be taken into account if the bank wishes to assess the depth and range of its data.
Reconciling loss data with information from general ledger accounts, IT logs, and other records can
help to provide insight into any potential material losses that may have been overlooked or under-
reported. Though this process involves some work upfront in developing adequate systems for
capturing and collating reportable losses, it ensures compliance with BCBS regulations and provides
valuable insights for informed decision-making.
Risk and control self-assessment (RCSA) is an ongoing cycle of risk assessment, evaluation, and
management that enables organizations to identify and mitigate risks while controlling costs. It
involves a systematic review of risks in an organization's processes, products, and services, as well
as assurance that its processes are adequately managed while also maintaining compliance with laws
and regulations. The RCSA looks at both inherent risks and the residual risks that remain after applying existing controls, in order to create a more complete picture of the operational risk the organization is facing. Historically, RCSAs have been conducted yearly as part of the organization's standard risk
assessment program. But due to increasing volatility and complexity in the market, some
organizations are now conducting these assessments on a quarterly basis. By evaluating the likelihood
and impact of risks regularly and consistently over time, businesses can better understand their risk
profile and instill confidence that their operations remain safe and secure.
While useful initially, RCSA can fall prey to subjectivity, behavioral biases, limited data inputs, and inconsistent practices. Standardized risk descriptions and precise rating criteria must be used to achieve maximum reliability and minimum inconsistency. This ensures that each risk has undergone a consistent assessment process regardless of where it originated, allowing for accurate comparison across the organization.
Severity assessment is an important tool for assessing the potential impact of events. Through this kind of assessment, it is possible to identify the most severe impacts that a certain event could cause. The four most common scales used to measure impact are financial, regulatory, customer, and reputation impacts. Each of these scales uses a four- or five-point rating system, with the ratings ranging from very low or insignificant to catastrophic or extremely severe impacts. In addition to these traditional scales, there has been an increase in the consideration of the continuity of services as an additional impact dimension.
The table below shows a sample impact scale where financial impacts are expressed in revenues or
operating profit percentages rather than in monetary amounts. Similarly, customer impacts are
expressed as a percentage of the client base. Expressing financial impacts in percentages rather than
dollar amounts is a great way for organizations to make sure that the impact statements are
applicable to departments of all sizes. For instance, while a $100,000 financial impact might not be
crucial for large, successful business units, it could have a much larger and more meaningful effect
on smaller departments or offices. Relative impact definitions such as this provide organizations with an easy way to scale their standards as they expand, allowing them to function efficiently across business units of all sizes.
Sample impact scale (Rating / Financial / Service Delivery / Customer Reputation / Regulatory):

Extreme
Financial: >20% of operating income (OI)
Service Delivery: Critical disruption of service resulting in major impacts to internal or external stakeholders
Customer Reputation: Significant reputational impact, possibly long-lasting, affecting the organization's reputation and the trust of several groups of key stakeholders
Regulatory: Significant compliance breach, resulting in large fines and increased regulatory scrutiny

Major
Financial: >5-20% of OI
Service Delivery: Significant service disruption affecting key stakeholders, requiring the crisis management plan to be activated
Customer Reputation: Reputational impact affecting >5% of customers; lasting impacts needing substantial remediation plans
Regulatory: Compliance breach resulting in regulatory fines leading to lasting remediation programs with reputation damage

Moderate
Financial: >0.5-5% of OI
Service Delivery: Noticeable service disruption with minimum consequences for stakeholders; service recovery on or under the RTO (recovery time objective)
Customer Reputation: Minimum reputation impact affecting only a limited number of external stakeholders; temporary impact is mitigated promptly
Regulatory: Some breaches or delays in regulatory compliance, needing immediate remediation but without a lasting impact

Low
Financial: <0.5% of OI
Service Delivery: No service disruption to external stakeholders
Customer Reputation: No external stakeholder impact
Regulatory: Minor administrative compliance breach, not affecting the organization's reputation from a regulatory perspective
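The financial column of such a scale can be applied mechanically once an impact has been expressed as a percentage of operating income. The sketch below implements that mapping using the thresholds from the sample scale above; it is illustrative only and assumes the impact has already been converted to a percentage.

```python
def financial_impact_rating(loss_pct_of_oi: float) -> str:
    """Map a loss, expressed as a % of operating income, to the sample scale above."""
    if loss_pct_of_oi > 20:
        return "Extreme"
    elif loss_pct_of_oi > 5:
        return "Major"
    elif loss_pct_of_oi > 0.5:
        return "Moderate"
    return "Low"

print(financial_impact_rating(3.2))  # "Moderate"
```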
Risk managers rely on likelihood assessment scales to determine the probability of an event
occurring or how often it might occur within a given time frame. For example, when discussing a
one-in-ten-year event, this description actually refers to an event that has a 10% chance of happening in the next year, not one that happens exactly once every ten years. Risk control self-assessment (RCSA) exercises typically have a time horizon of one year or less, and the horizon may be even shorter depending on the volatility of the activity being assessed.
Likelihood ratings (one-year horizon):
Likely: once per year or more frequently (probability >50%)
Possible: once in 1-5 years (20-50%)
Unlikely: once in 5-20 years (5-20%)
Remote: less than once in 20 years (<5%)
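To illustrate the one-in-ten-year point, the sketch below converts an average return period into an annual probability. The simple reciprocal is the shorthand used in the text; the Poisson-based figure, which is an added assumption rather than something stated above, gives the probability of at least one occurrence in a year.

```python
import math

def annual_probability(return_period_years: float) -> tuple[float, float]:
    """Return (simple, poisson) one-year probabilities for a 1-in-N-year event."""
    simple = 1.0 / return_period_years                     # 1-in-10 years -> 10%
    poisson = 1.0 - math.exp(-1.0 / return_period_years)   # P(at least one event in a year)
    return simple, poisson

print(annual_probability(10))  # (0.1, ~0.095)
```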
When it comes to understanding and managing risk, organizations increasingly rely on the RCSA matrix or "heatmap." This visual tool assigns colors to different combinations of likelihood and impact, painting a picture of which areas pose the highest levels of risk. It encourages strategic discussion of where mitigation effort should be focused.
Green (or its equivalent) typically implies that the current level of risk exposure is within the organization's risk appetite.
Yellow indicates that action or mitigation may be needed because, even though the risk
exposure is still within the appetite, the organization is rapidly moving toward excess
levels.
Amber shows that risk tolerance has been exceeded, necessitating an action plan to bring exposure back within appetite.
Red indicates a level of risk far beyond the organization's risk appetite, and drastic
proactive steps must be taken to protect the organization from harm.
It’s important to note that the colors are used in a nonlinear, non-continuous manner. Therefore, it would be a mistake to multiply likelihood and impact to reduce risks to a single numerical quantity. For example, an event that is frequent but low impact (1x4) cannot be treated the same as an event with a remote probability but an extreme impact (4x1). Caution should therefore be taken both when assigning ratings and when aggregating or comparing them.
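Because the colors are assigned nonlinearly, a heatmap is better coded as an explicit lookup than as a likelihood-times-impact product. The sketch below uses a hypothetical 4x4 color assignment chosen only for illustration; it is not a standard matrix.

```python
# Rows: likelihood 1 (remote) to 4 (likely); columns: impact 1 (low) to 4 (extreme).
HEATMAP = [
    ["green",  "green",  "yellow", "amber"],  # remote
    ["green",  "yellow", "amber",  "red"],    # unlikely
    ["yellow", "amber",  "red",    "red"],    # possible
    ["amber",  "red",    "red",    "red"],    # likely
]

def rating_color(likelihood: int, impact: int) -> str:
    # An explicit lookup: a frequent/low-impact risk need not receive the same color
    # as a remote/extreme one, which a simple likelihood x impact product would imply.
    return HEATMAP[likelihood - 1][impact - 1]

print(rating_color(4, 1))  # frequent but low impact
print(rating_color(1, 4))  # remote but extreme impact
```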
A key risk indicator (KRI) is a metric used in operational risk management to measure an organization's exposure to potential risks. KRIs provide insight into the likelihood and impact of different types of risk as they evolve over time. Two main types of KRIs exist: preventive KRIs and reactive KRIs.
Preventive KRIs are used to measure the increase or decrease in the likelihood or impact of a
potential risk event over time. For example, if a KRI of likelihood shows an increase in the probability that a risk might materialize, then an organization must take preventive measures to mitigate this risk. A KRI of impact, on the other hand, demonstrates how severe a potential risk event could be if it were to occur, allowing organizations to better understand what kind of mitigation measures may be required.
Reactive KRIs are used after an incident has already occurred, since they provide key information about actual and potential losses associated with the incident as well as possible root causes and corrective actions taken afterward. This allows organizations to have a clear picture of what went
wrong and how it can be prevented from happening again in the future. Additionally, these types of
metrics can help organizations identify areas that need improvement when it comes to operational risk management. Examples of KRIs include the following (a simple threshold-monitoring sketch follows the list):
The number of transactions per staff member (could indicate the risk of making errors).
An increase in the level of sales required for sales staff to achieve a performance goal.
An increase in the amount of responsibility held by key employees (could indicate key-person dependency risk).
Increase in sensitivity of data held on a given server (higher impact in case of data
leakage/loss).
Increase in value generated by top-10 clients (higher impact in case of client attrition).
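The sketch below is a minimal illustration of how preventive KRIs such as these might be monitored against thresholds. The indicator names and the amber/red limits are assumptions made for the example, not prescribed values.

```python
# Hypothetical preventive KRIs with (amber, red) thresholds.
kri_thresholds = {
    "transactions_per_staff_member": (400, 500),
    "sales_target_growth_pct": (10, 20),
}

kri_observations = {
    "transactions_per_staff_member": 460,
    "sales_target_growth_pct": 25,
}

for name, value in kri_observations.items():
    amber, red = kri_thresholds[name]
    status = "red" if value >= red else "amber" if value >= amber else "green"
    print(f"{name}: {value} -> {status}")
```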
KPIs measure a company’s performance against predetermined targets. They can be used to evaluate how well a company executes its strategy by comparing current performance against targets or goals set out in the business plan. KPIs are usually expressed in terms of financial measurements such as revenue growth or return on investment, but in the context of operational risk they also include metrics such as maximum downtime.
By using KPIs, organizations can track progress toward their goals and ensure they take effective corrective action when performance falls short.
KCIs measure the effectiveness of internal controls within an organization in order to detect fraud or other irregularities. They provide insight into how well existing processes function and alert
managers when something appears off—allowing them to take corrective action quickly before any
major issues arise. KCIs help to ensure that organizational policies are being observed and that any
potential weak spots are identified early on so that appropriate measures can be taken to address
them.
It is common for metrics to overlap in their use as performance indicators, risk indicators, and
control indicators. Quite often, the same metric can be used for two categories or even all three.
The most simplistic example is when actual performance falls below a pre-defined minimum
standard. Not only does this indicate weak performance, but it also signals a potential source of risk.
An issue with performance can quickly become an issue with risk management if not addressed
appropriately. Moreover, delayed confirmations of financial transactions can reflect poorly
on an organization’s back office efficiency (KPI), create higher possibilities for legal action or fraud
(KRI), and point to an organizational failure in applying controls during transaction processing.
Fault Tree Analysis (FTA) is a type of analysis used to identify the root causes and potential
consequences of operational risks. It is a top-down approach that examines the different ways an
event or failure can occur, allowing organizations to visualize the sequence of events that lead to an
incident. FTA involves creating a fault tree diagram, which is a graphical representation used to
depict all of the possible failures that could contribute to an event. This diagram looks like a tree structure, with root causes at the top and branches for each subsequent cause-effect relationship below them.
An important part of FTA is performing quantification, which allows for further assessment of
operational risks. Quantification involves assigning probabilities to each component of the fault tree
in order to create an overall picture of how likely it is for certain events or failures to occur. This
helps organizations understand how many components need to be taken into account when
implementing preventative measures or contingency plans. Additionally, FTA can help establish
which components are most critical in mitigating risks, which will help prioritize efforts during risk
management planning.
FTA relies on a series of both dependent and independent conditions that are connected in the form
of AND or OR conditions. For independent conditions joined by AND gates, the joint probability of failure is the product of the individual probabilities. For example, if three key independent controls each fail with a 10% likelihood, then their joint probability of failure is 0.1³ = 1/1,000.
By layering multiple independent controls onto the same system, organizations can reduce the
overall risk they face by decreasing the chance that all controls will fail simultaneously. However,
even with many layers of safety measures in place, some events may still occur simultaneously due
to various external factors or chance occurrences. FTA helps organizations determine what these
risks are before they occur and prepare for any necessary contingencies. To achieve this end, FTA
provides organizations with a comprehensive view of how their systems interact and what risks they
face from all possible combinations of events that could happen at any given moment.
FTA can also be used proactively in order to assess potential risks before they actually occur. By
simulating scenarios, organizations can identify new areas where they need additional safeguards and
control measures in place in order to minimize operational risk exposure and improve their
resilience against future incidents or disruptions. Additionally, FTA can also provide insight into
developing trends within operational processes so that organizations can take proactive actions
before any issues occur and ensure sustainable business continuity going forward.
Let’s consider the following conditions that all need to be present for the scenario to materialize:
1. A malicious external actor targets a company with a phishing email laden with malware. P1.
2. The company’s firewalls fail to detect the malicious email. P2.
5. Large amounts of valuable information are stolen from the firm's systems. P5.
If we assume that these conditions are fully independent, the likelihood of this risk materializing is the simple product of the individual probabilities: P1 × P2 × P3 × P4 × P5. This constitutes the minimum theoretical likelihood of the scenario. For example, if the probability of each of these conditions is 10%, the likelihood of the scenario is 0.1⁵, or 1 in 100,000.
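Under the independence assumption above, the scenario probability is simply the product of the conditional probabilities. The sketch below reproduces that arithmetic for the phishing example; the condition labels and the 10% figures are placeholders, since the text does not list all five conditions or their probabilities.

```python
# Placeholder probabilities for the five AND conditions of the phishing scenario.
probabilities = {
    "attacker_sends_phishing_email": 0.10,     # P1
    "firewalls_fail_to_detect": 0.10,          # P2
    "employee_opens_attachment": 0.10,         # P3 (assumed condition)
    "malware_evades_endpoint_controls": 0.10,  # P4 (assumed condition)
    "data_exfiltrated": 0.10,                  # P5
}

scenario_probability = 1.0
for p in probabilities.values():
    scenario_probability *= p  # independent AND conditions multiply

print(f"Scenario probability: {scenario_probability:.6f}")  # 0.000010, i.e., 1 in 100,000
```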
The FAIR Model
The FAIR model is a powerful factor model many financial institutions use to quantify operational risk. The risk factors and their relationships are first identified before measurements and metrics for each factor are calculated. This information is then combined in order to arrive at the overall risk value for the organization. What makes the model particularly advantageous is its ability to provide precise analytical processes that give clarity into the various risks, thereby enabling better-informed mitigation decisions.
A threat community: The originator of the threat or its source.
Business experts must then provide estimates of how frequently losses from these scenarios are likely to occur as well as their magnitude. These estimates should take into account factors such as past loss experience and the strength of the current control environment.
To further refine outcomes, Monte Carlo simulations are run based on these inputs, which provide a
range of potential losses for each scenario. By understanding which assets may be impacted to what
extent and probability, organizations can work to bring about measures to safeguard their interests in
a cost-efficient manner.
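The paragraph above describes combining frequency and magnitude estimates through Monte Carlo simulation. The sketch below is a generic illustration of that idea rather than the FAIR methodology itself: the distribution choices and parameter values are assumptions made for the example.

```python
import random

random.seed(42)

def simulate_annual_loss(n_sims: int = 10_000) -> list[float]:
    """Simulate annual loss = sum of event losses, using assumed distributions."""
    results = []
    for _ in range(n_sims):
        n_events = random.randint(0, 5)  # assumed loss-event frequency for the year
        total = sum(random.lognormvariate(10, 1.5) for _ in range(n_events))  # assumed magnitudes
        results.append(total)
    return results

losses = sorted(simulate_annual_loss())
print("Median annual loss  :", losses[len(losses) // 2])
print("95th percentile loss:", losses[int(0.95 * len(losses))])
```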
T he Swiss Cheese Model is a popular tool for analyzing organizational defenses against failure in
health care, aviation, and other highly regulated industries. It suggests that since complex systems cannot be safeguarded completely, it is instead important to ensure that the defenses are built up one layer after another.
The model was developed by James Reason, a British psychologist, and works by visualizing these
multiple levels of defense as holes in slices of Swiss cheese. By picturing the system as a series of
slices of Swiss cheese laid side by side, the idea behind this model is that operational risks can arise
when the holes in one layer align with those in another, creating a clear path for hazards. Each layer
stands for a different barrier or defensive strategy and, when implemented correctly, creates an all-
encompassing net against unfavorable outcomes. The idea is that when all the slices are arranged correctly, they cover the gaps caused by their individual weaknesses, resulting in an overall robust defense.
Owing to its role in shaping safety practices globally, the Swiss Cheese Model is widely regarded as one of the most significant developments in risk management theory.
Root-cause analysis is an important tool for operational risk managers as it allows them to identify
the causes of operational issues and implement measures that reduce future risks. RCA is a
systematic approach to investigating incidents or near misses by analyzing their root causes, a root cause being defined as the most fundamental factor that led to the occurrence of the incident. By using this
approach, the organization can improve its control processes and respond correctly in similar situations in the future.
In banking, root-cause analysis can be used to investigate various events. For example, if a customer
experiences fraud or theft from their bank account due to inadequate security measures, RCA can
help determine why these security measures were inadequate. The investigation would include
looking at whether there were changes in policy or procedure that caused gaps in security;
examining technology systems and hardware components to see if they are up-to-date with current
standards; evaluating employee training programs to verify they are adequate; and assessing whether
any other external factors such as cybersecurity threats could have contributed to the issue.
Once potential causes have been identified through RCA, corrective action plans can be developed in
order to reduce future risks. For example, if it was determined that inadequate training was
responsible for weak security protocols leading to customer fraud, then steps could be taken to
improve employee training programs so that staff members understand best practices when it comes
to protecting customers’ financial information. Additionally, policies and procedures could be updated
or new ones implemented so that employees are aware of how best to respond when there is an
incident relating to sensitive customer data or funds. Banks may also need to invest in more advanced
technologies, such as multi-factor authentication, for improved security of their systems and
customers’ accounts.
The Bowtie Tool is a visual application of RCA that can help organizations better understand the causes and consequences of a risk event.
On the left side of the bowtie are direct and indirect causes, as well as preventive controls. Direct
causes are the factors that directly lead to an incident, while indirect causes represent preconditions
for an incident. These preconditions may include a lack of adequate training, knowledge, or skills.
In the middle is where the actual risk event is identified. This could be anything from a system error to a human failure or an external event.
On the right side of the bowtie are impacts – the consequences of the risk event. Impacts can range from minor inconveniences all the way up to total catastrophic failure. Also shown are
detective controls and corrective controls. Detective controls allow organizations to discover any
problems quickly so they can take action before too much damage has been done. On the other hand,
corrective controls help organizations respond effectively when something goes wrong.
The Loss Distribution Approach (LDA) is an actuarial technique that has been around for some time. It is used to divide loss data into two components: frequency and severity. Frequency refers to the number of times a loss event occurs, while severity measures how costly it is when it does happen. By decomposing each loss event into its constituent frequency and severity components, the modeler can make better use of limited data.
This was especially beneficial in operational risk modeling since there was a severe lack of modeling data at the beginning stages of this field. Two risk modelers from the French bank Credit Lyonnais, Frachot and Roncalli, wrote a groundbreaking paper on applying LDA to operational risk, which has become commonplace ever since. This method has been more popular in continental Europe
compared to stress testing and scenario-based modeling, which have been more commonly employed
elsewhere.
In LDA, frequency and severity distributions are estimated independently and then convolved into an aggregate (annual) loss distribution.
Frequency Modeling
The Poisson distribution is among the simplest frequency distributions, as it requires only one parameter. The parameter lambda (λ) equals both the mean and the variance of the distribution. When utilized correctly, this model allows for greater insight into an
organization's operational risks and the potential events that can occur within any given year or other chosen time period.
According to a 2009 Basel Committee study, about 90% of firms use a Poisson distribution when modeling loss frequency.
Severity Modeling
Severity is typically modeled using continuous, asymmetric, and heavy-tailed distributions. This is to capture the large number of small events that make up the bulk of operational risk losses, along with a few very large ones. One popular model used to represent severity is the lognormal distribution, which is a logarithmic transformation of the normal (Gaussian) distribution. Moreover, the Weibull and Generalized Pareto distributions (GPD) have been increasingly employed in recent years due to their heavier tails. To elaborate further, lognormal distributions can be used to approximate data when there is underlying variability between different items or events within a dataset. In contrast, the Weibull and GPD are better suited for modeling the tail of the loss distribution, making them especially useful for capturing rare but very large losses.
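To show how a frequency distribution and a severity distribution are combined in an LDA setting, the sketch below convolves a Poisson frequency with a lognormal severity by Monte Carlo simulation. The parameter values and the 99.9% quantile used as a reference point are illustrative assumptions, not calibrated figures.

```python
import numpy as np

rng = np.random.default_rng(0)

lam = 12           # assumed mean number of losses per year (Poisson intensity)
mu, sigma = 9, 2   # assumed lognormal severity parameters

n_years = 20_000
annual_losses = np.empty(n_years)
for i in range(n_years):
    n = rng.poisson(lam)                                   # simulated loss count for the year
    annual_losses[i] = rng.lognormal(mu, sigma, n).sum()   # sum of simulated severities

print("Mean annual loss:", annual_losses.mean())
print("99.9% quantile  :", np.quantile(annual_losses, 0.999))
```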
LDA assumes independence between the claim frequency and severity, an assumption that’s considered too strong given the stark contrast seen in practice between frequent small losses and rare, very large ones.
In addition, the assumption of independence and identical distribution (i.i.d.) within each risk class is
an oversimplification in many cases; operational risk events can be highly heterogeneous, with
significant correlations between loss amounts and various factors such as country, industry
segmentation, legal entity, business unit, or other metrics. Additionally, the use of a single data-
generating mechanism to generate all losses may not accurately represent the complexity of real operational loss processes.
Extreme Value Theory (EVT) is a statistical method of analyzing extreme events in data sets. It can be used to model operational risk capital by examining the distribution of large values, or "outliers", within a data set. This method of risk capital modeling is based on two main components: Block Maxima (Fisher–Tippett), following the Generalized Extreme Value (GEV) distribution, and Peaks-over-Threshold (POT).
Block Maxima examines the behavior of maxima that are equally spaced in time (e.g., the maximum operational loss per period of time and per unit of measure). This allows for the
identification and analysis of maxima patterns that occur over time and their severity level for
determining potential risk levels. In addition, this approach takes into account that some observations
may not follow the same pattern as others and thus can be omitted from the analysis if deemed
necessary.
The second component – Peaks-over-Threshold – works by focusing on observations that lie above a certain high threshold, u, which is set to be sufficiently large. According to the Pickands–Balkema–de Haan theorem, when the threshold is high enough, the excess distribution F_u(y) converges to the Generalized Pareto
Distribution (GPD), providing an approximation of the excess under EVT assumptions.
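As an illustration of the peaks-over-threshold idea, the sketch below fits a Generalized Pareto Distribution to exceedances over a high threshold using scipy. The data are synthetic and the 95th-percentile threshold is an arbitrary choice made for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic heavy-tailed losses standing in for observed operational losses.
losses = rng.lognormal(mean=10, sigma=2, size=5_000)

u = np.quantile(losses, 0.95)       # a sufficiently high threshold
excesses = losses[losses > u] - u   # exceedances over the threshold

# Fit the GPD to the exceedances, fixing the location at zero.
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)
print(f"threshold={u:,.0f}, shape(xi)={shape:.3f}, scale={scale:,.0f}")
```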
Limitations of EVT
EVT is only applicable if a single mechanism is responsible for all the observed losses, and this
mechanism can be assumed to produce any future losses that exceed the current levels seen in the
data. This means that EVT cannot account for fluctuations in different institutions, business lines, or
risk types, as each of these has a unique set of variables and drivers of risk.
In addition, EVT generally relies on large amounts of data in order to estimate reliable quantiles with
enough accuracy and provide meaningful results. If too little data is available, inaccurate estimates may result.
Given the potential for operational risk disasters to have a devastating effect on a firm's revenue and
long-term earnings, as well as its reputation and impact on key stakeholders, it is essential that
organizations take proactive measures to ensure they are adequately prepared and resilient in the
face of such disasters. System failures or disruptions, cyberattacks, physical damage, and compliance
issues can all lead to operational risks, many of which may have never been encountered by an
organization before.
1. Identify Important Business Services
It is essential for a firm to understand which services are critical to its success and which require high availability and resilience. A firm can do this by mapping out the entire service architecture, including all dependencies between systems, applications, and services. The goal should be to gain an understanding of how each component contributes to overall operations and to identify the services that are most critical.
2. Set Impact Tolerances for Important Business Services
Once important business services have been identified, impact tolerances can be set based on the required performance or availability levels for each one. These should reflect the maximum downtime or disruption allowed without causing a significant negative consequence for the firm, its customers, or the wider market.
3. Map Important Business Services End-to-End and Identify Resources Required
A detailed end-to-end map of all important business services should be created showing how each of their components interacts. This will help to identify the resources needed to ensure successful delivery within the impact tolerances.
4. Design Severe but Plausible Scenarios to Test Vulnerabilities in the Delivery of Important Business Services
Using the impact tolerances set in step two as a guide, severe but plausible scenarios should be developed that test the vulnerability of each service when subjected to extreme conditions such as large-scale cyberattacks or natural disasters. This will provide an indication of any areas where
immediate action may need to be taken in order to prevent a major incident from occurring in future
operations.
5. Where Impact Tolerance Is Exceeded, Examine Lessons Learned from Stress Tests
If any stress tests exceed the pre-defined impact tolerance, then it is important that lessons are
learned from them and actions taken accordingly in order to improve operational resilience going
forward. For instance, if an incident was caused by a particular resource, such as hardware or
software, then steps can be taken to replace it with a more resilient alternative. Alternatively, the
attendant risk can be mitigated, if possible, through better configuration or protection mechanisms.
6. Ensure Internal and External Communication Plans Are in Place to Be Followed During an Incident
It is also important that communication plans are in place internally between employees and
departments that might need to coordinate responses during an incident and externally with
customers, suppliers, partners, regulators, etc. who might need notification of what has happened.
Having these plans already drawn up allows both sides more time during critical moments instead of
having to create them on the fly, which could lead to delays in responding appropriately.
Finally, it is advisable for firms to conduct regular self-assessments of their operational resilience. The self-assessment document should include recommendations from previous testing sessions along with proposed solutions that could address potential vulnerabilities identified therein. It should also track progress over time, so senior management can closely monitor any trends toward improving operational resilience.
Scenario analysis is used to model potential operational risk disasters in order to provide expected outcomes that serve both as an input for tail risk distributions and as a benchmark for judging the firm's resilience.
Scenario analysis helps organizations gain a better understanding of how these risks could potentially
materialize and analyze their own resilience if such situations do arise. While not every disaster can
be predicted or anticipated ahead of time, scenario analysis allows organizations to at least attempt to
anticipate various risks they could face while also attempting to come up with strategies to mitigate
them should they occur. By having these plans in place ahead of time, organizations can be better prepared to respond and recover.
In addition, regulations requiring organizations to consider such plausible disasters also require that
firms assess their resilience plans regularly. This is done so that firms can continue improving their ability to withstand any future catastrophes that may arise from system failures or disruptions, cyberattacks, physical damage, or compliance breaches. This requires firms to take into account not just what has happened in the past but also potential future events in order to stay resilient.
Practice Question
Joel and Mark, FRM Part II candidates, are discussing BCBS’ guidelines on the need to report comprehensive data regarding operational risk events. During the discussion, the following statements are made:
A. While the Basel Committee has set a minimum threshold for loss reporting at €20,000, it is best practice for banks to set a reporting threshold of zero to capture every operational loss and to simplify instructions to business units, which then do not need to estimate a loss before deciding to report incidents.
B. Regulatory guidelines dictate that firms must report any incidents causing them financial or non-financial impacts.
D. Grouped losses are distinct operational risk events connected through a common
loss amount.
The correct answer is C.
It is important to remember that both direct and indirect losses must be reported. Direct losses are the ones incurred immediately after an event. Examples of such losses include the cost of remediation, any financial outcomes due to wrongful transactions, or compensation to clients. Indirect losses are much trickier to identify since they stem from further consequences of an operational risk event.
A is incorrect. Even though some banks do set a threshold of zero for operational risk
events, this strategy is fading away among large banking institutions. This trend is
attributed to the sheer number of small incidents that must be reported with little
information value gained in return. Instead, most banks and insurance companies prefer a
threshold slightly lower than the regulatory limit. Thresholds of €20,000, €10,000, or lower are common in practice.
B is incorrect. Banks are only required to report incidents causing them financial losses. From a management perspective, however, it is also good practice to record the non-financial impacts of material operational risk events.
D is incorrect. Grouped losses are defined as distinct operational risks connected to a single core event or cause, not through a common loss amount. For example, if an IT failure occurs, impacting various
departments in different ways, this sequence of events would likely constitute one
grouped loss.
Reading 103: Risk Mitigation
After completing this reading, you should be able to:
Describe and provide examples of different types of internal controls, and explain the process of internal control design and control testing.
Describe methods to improve the quality of an operational process and reduce the potential for human error.
Explain how operational risk can arise with new products, new business initiatives, or mergers and acquisitions.
Identify and describe approaches firms should use to mitigate the impact of operational risk events.
Describe methods for the transfer of operational risks and the management of reputational risk.
According to the international standards of enterprise risk management ISO 31000, there are four
ways to address risks, labeled as "the four Ts": tolerate, treat, transfer, and terminate.
Tolerate
Tolerating risk entails accepting it and taking no proactive steps to reduce or manage it. This strategy
is commonly used when the cost of addressing the risk is greater than the potential losses that would
be incurred if it were to occur. In such cases, companies assess the probability of risk occurrence
and the potential severity of its consequences, then decide whether tolerating it is the most
appropriate strategy for their particular situation. Companies monitor these risks continuously to
ensure they remain within acceptable parameters and can respond quickly if needed.
Treat
Treating a risk involves implementing measures to reduce or otherwise control its impact. This may include better staff training, investing in new technologies or systems, introducing additional procedures and controls, or increasing oversight and governance structures. This approach is often seen as a more proactive way to manage operational risks, since firms can identify and address potential hazards before they materialize.
Transfer
Transferring risk involves shifting responsibility for it from one party to another to limit exposure. This can be done through contractual arrangements such as insurance policies, hedging
activities, outsourcing services, or joint ventures with other firms. Such strategies enable
companies to spread the financial burden associated with possible losses due to operational failures across other parties.
Terminate
Terminating a risk means eliminating it from a firm's operations. This could involve ceasing certain
business activities altogether or divesting from certain areas where specific risks cannot be managed
effectively. Companies may also decide to exit unprofitable markets after assessing all potential risks
associated with continuing operations there. Such strategies are usually taken when other options
have been exhausted, and companies feel that their financial capabilities are not sufficient to absorb the remaining risk.
Among the four ways (discussed above) to address risk, treatment is the most common risk
response, which involves risk mitigation through various control plans. Controls can be of different
classes. In this discussion, however, we will use the classification proposed by the Institute of Internal Auditors (IIA).
a) Preventive Controls
These controls reduce the likelihood of an incident occurring by addressing the causes of potential risk events before they occur. Examples of preventive controls include segregation of duties (different parties perform different functions within a firm), access controls, and levels of authorization.
b) Detective Controls
Detective controls aim to alert the firm if an incident occurs to accelerate its resolution and limit
the impact of the incident on the firm or its stakeholders. Detective controls include smoke alarms,
intrusion detectors, and intrusion detection systems in cybersecurity. Credit card notifications of
potentially fraudulent transactions are an example of detective controls. Once the cause of an event is detected, the firm can act quickly to limit its impact.
c) Corrective Controls
Corrective controls are intended to mitigate the impact of adverse events on an institution.
Corrective controls include IT system redundancies, data backups, continuity plans, and crisis
communication strategies. Corrective controls do not affect the likelihood of a risk occurring, but
they reduce its pain if it does. When you back up your computer's data, you won't prevent the
computer from crashing, but you will reduce the pain if the computer crashes. The use of seatbelts in cars is a similar example: seatbelts do not prevent accidents, but they reduce the harm when one occurs.
d) Directive Controls
Directive controls provide guidance on how employees should handle certain situations that may
arise while they’re performing their duties at work. Directive controls can include written plans
outlining proper security measures to take when using the company’s computers, formal codes of
conduct that employees must adhere to when dealing with customers or suppliers, or protocols for escalating and handling incidents.
A key (primary) control is a control that can sufficiently prevent or mitigate a risk on its own. Most of the examples above are key controls. A key control can be corrective if it neutralizes the impact of adverse events on an institution (as discussed above). A non-key control, on the other hand, cannot sufficiently mitigate risk on its own. Rather, it complements the key controls.
Control Automation
Controls can be either manual or automated in nature. Automation significantly increases the
reliability of any given control, making the mitigation process much more effective. The following points illustrate both the benefits and the limits of automation.
With the advancement of modern technology, banks no longer find it reliable or reasonable to rely solely on manual controls. However, automated controls are still designed and configured by humans, so human errors can be embedded in them and transform into technology and model risk. Examples of issues that may arise in automated systems are:
Type 1 and Type 2 errors, i.e., false positives and false negatives.
Automated data backup processes running on a server that is already full, leading to data overflow and the absence of a usable backup.
Control Testing
Once implemented, risk mitigation measures must be tested (using control testing) to evaluate residual exposures to
operational risk.
Financial risk management is shifting its focus from solely assessing risks to also assessing controls. Controls are difficult to evaluate directly, but observing their effects is easier. Testing controls should evaluate whether controls are well designed and operating effectively.
Control Design
Creating an effective control design is key to reducing risk and defending against threats. Poorly
designed controls, however, are a waste of resources and provide a false sense of security. This can
create openings for potential vulnerabilities in the system or environment, which could be
devastating if exploited. Furthermore, ineffective control designs may result in the implementation
of inadequate countermeasures that fail to properly address risks. Consequently, organizations must
focus on creating meaningful designs that provide reliable protection through adequate roles and
responsibilities, appropriate policies and procedures as well as robust infrastructure and processes.
"Opti mi sti c control s": Optimistic controls are designed without proper consideration of
the risks that they are intended to mitigate. T hey are often too focused on an ideal solution
and fail to take into account more challenging or realistic scenarios. As such, they do not
potential vulnerabilities. Optimistic controls may also be too simple and general to provide
effective protection, as they assume that a single action can protect against a wide range of
threats. Furthermore, these controls generally lack the specificity needed to accurately
assess the risk level posed by particular events or actors. Optimistic controls include
signing off large volumes of documents shortly before a deadline, accepting legal terms and
conditions online, verifying software access using printed lists of coded names without
proper explanation, and generally all signoffs and validations in which the authorizing party
"Col l ecti ve control s": Rather than relying on individual accountability for verification
87
© 2014-2023 AnalystPrep.
and quality control, collective controls seek to distribute responsibility among several
this approach, where two parties each verify the same information in order to confirm its
dilute accountability when multiple people are involved in the review process, making it
more difficult to pinpoint errors and assign blame. Additionally, placing too much trust in
collective controls may result in less stringent attention and focus from individuals, thus
important that the control chosen is tailored specifically for addressing that threat rather
than simply using what has been used in the past. “More of the same” approach typically
fails because it relies on outdated or inadequate methods which will not effectively address
current threats nor provide sufficient protection going forward. T his type of poorly
designed control can result in frequent false positives or false negatives, which will only
serve to create confusion within an organization’s security posture and further increase
The design and implementation of appropriate controls are essential for effective risk reduction. By
carefully assessing the risks a process is exposed to and organizing tasks in such a way as to minimize
their potential impact, one can create a secure system without needing to add additional controls.
However, adding wrongly designed or untested controls can have the opposite effect, increasing the
vulnerability of the process. To ensure that risk-reducing measures are fully effective, it is
important to examine their performance once they have been put into place. Testing gives firms an
opportunity to verify that controls are being implemented correctly and functioning as intended.
Control Effectiveness
There are four primary types of control testing, presented below in increasing order of scrutiny. The greater the inherent risk, the more rigorous the control testing must be.
1. Self-certification or inquiry: Given the lack of evidence provided, it is reasonable to limit this type of assessment to secondary controls or to controls related to environments with low inherent risk.
2. Examination: Written documentation of the process, as well as written evidence of the control's execution, is examined to determine whether the control is effective. This method is more suitable for automated checks and for sampling of manual checks, since it provides moderate assurance.
3. Observation: This involves observing the execution of the control process in real time so that its design and effectiveness can be judged. This type of testing is suitable for key controls.
4. Reperformance: The most rigorous form of testing, in which the tester reproduces the control process on a sample of transactions and compares the results with those previously obtained by the process. Examples in this category include "mystery shopping" to evaluate the quality of customer service.
The reliability of control testing depends on several factors:
The independence of the testing party: In order to avoid conflicts of interest and bias, the testing party should be independent of the owner of the control process (except in the case of self-certification).
Frequency of testing: Testing should be more frequent for higher risks or unstable risk environments, in proportion to the severity of the risk.
Scope and sample: The results of a test depend on the scope of testing and the size of the sample tested. To adequately represent the population, the sample should be large enough.
This method, also called safety by design, involves applying, at the design stage, the methods and structures that will reduce risk events. Prevention through design includes risk mitigation measures embedded in processes and systems from the outset.
Identifying slips and mistakes is the first step in categorizing human error. The following are the main categories:
Slips: These are unintentional errors, often caused by distraction or fatigue. There are many ways to respond to slips, including improving the working environment.
Rule-based mistakes: These result from voluntary actions in which a flawed rule or procedure is applied; mis-selling encouraged by poorly designed incentives is an example of such mistakes. Regulators are particularly concerned with conflicts of interest resulting from incentive and remuneration structures that can contribute to poor conduct.
Knowledge-based mistakes: They are the wrong choices made when someone faces a
new situation due to a lack of familiarity with a process or a lack of training and guidance.
Violation: This is another action that may lead to operational risk. A violation is an intentional act in which a person decides to act against the rules. Violations can be mitigated through the use of either human or automated controls. An improved risk and compliance culture that rewards adherence to rules and processes also helps to reduce violations.
Lean Six Sigma is a methodology that seeks to improve operational performance in businesses,
organizations, and other areas. It combines two popular methodologies: Lean and Six Sigma.
Lean is a management philosophy based on eliminating waste and maximizing efficiency. The goal of Lean is to eliminate non-value-adding activities, maximize flow, and reduce the time between customer order and delivery. The focus is on reducing lead times, improving cycle times, and shortening throughput times. Lean techniques traditionally focus on eliminating eight kinds of "waste." Waste refers to various process inefficiencies associated with the underutilization of resources, time lost, or unnecessary tasks. The different types of waste are captured in the mnemonic "DOWNTIME," which stands for defects, over-production, waiting, non-used talent, transportation, inventory, motion, and extra-processing.
Six Sigma focuses on customer satisfaction by achieving near-perfect quality output at every step of a process. It relies on the implementation of five phases: Define, Measure, Analyze, Improve, and Control (DMAIC). This approach leads to an improved product or service offering that meets customer needs with few defects or issues.
By combining these two methodologies into one system—Lean Six Sigma—businesses can achieve
maximum efficiency with minimal defects throughout their operations. Lean Six Sigma helps
companies identify waste in their systems by mapping out processes from start to finish and looking
for opportunities to streamline them. It also uses data-driven decision-making to identify process
issues so that they can be addressed quickly and effectively. Finally, it encourages continuous
improvement by focusing on identifying small improvements over time rather than large changes all
at once.
Quality Improvement
Quality improvement follows the plan, do, study, act (PDSA) cycle, also known as the Deming cycle.
Plan is about setting goals, determining expectations, and deciding what, where, when, and how things will be done.
Do refers to executing the plan and collecting the resulting data.
Study refers to analyzing the collected data, comparing it against the set targets, and evaluating the results.
Act is about understanding lessons learned and adjusting expectations for the coming cycle.
Businesses face significant operational risks when they embark on new projects, products, and
initiatives that are unfamiliar to them. New Product Approval Process (NPAP) and New Initiative
Risk Assessment Process (NIRAP) are two common risk-mitigation methods. Any plan or process
that modifies or affects current business practices to achieve a business objective or solve a
problem is considered a new initiative. New initiatives might include the following:
Offering new financial products, services, or activities to customers per NPAP.
Introducing new outsourcing arrangements or updating existing ones, which will also be assessed under NIRAP.
Developing new projects and reorganizing activities, which also fall within the scope of project risk management.
As a best practice, the owner of each new initiative should present a business case to justify the allocation of resources. A good business case covers at least five topics, starting with the objective of the initiative.
The degree of the operational risk function's involvement depends on the level of risk and the mitigation required.
The project team itself manages typical project risks of time, budget, and delivery quality without involving the risk function. A standard report on project risk and execution risk is then produced and shared with management.
More mature firms maintain a database of post-project assessments, debriefings, and lessons learned, either to benefit from past experience or to avoid repeating past mistakes. The risk function should ensure the effective use of past data and initiate the collection of lessons learned.
In addition to the traditional risks relating to time, budget, and scope, new initiatives can modify existing risks or create new ones by disrupting the state of business as usual. The ORM function should identify, assess, and mitigate all direct and indirect risks to support these new initiatives.
When firms merge or make acquisitions, the acquiring firm inherits the risks of the acquired assets. When a firm acquires assets, a portfolio, or an entire entity, it inherits all risks associated with those assets, including their operational risks.
Credit risk can be assessed easily, provided data on collateral, obligors, and terms and conditions are available. Operational risk, on the other hand, is much harder to assess since it relates to the performance of people, systems, and processes over time. Therefore, it may take time before the inherited operational risk is discovered. Banks should therefore pay close attention to operational risks, especially when acquiring new assets. The ORM function can support these new initiatives by creating a risk profile that familiarizes management with the potential operational risks of the target.
If an entire firm is acquired, its integration brings its own set of additional operational risks. The acquired firm brings its own customer and account platforms, payroll and management systems, and communication channels with other companies, all of which must be integrated. The ORM function can help the firm identify these risks through risk identification workshops and work with the integrating teams to set mitigation measures that address them.
This section reviews key operational risk impact reduction measures, including contingency planning, business continuity management, risk transfer, and reputation management.
Contingency Planning
A contingency plan is simply a "Plan B," or an alternative course of action if a future event does not go as expected. Contingency planning is part of business continuity management (BCM), disaster recovery plans (DRP), and corrective risk management. Contingency planning should clearly state who does what and when in case of an event. In broader terms, contingency planning involves preparing alternative courses of action before adverse events occur.
BCM and DRP are particularly relevant when considering operational resilience and the capacity to
recover and adapt to incidents. BCM and DRP have been in place for decades.
Business continuity management is an ongoing process designed to keep the business running in the
event of a crisis. It gives insight into where one's business is vulnerable to disaster effects. In this
case, it's about identifying the critical areas and planning to maintain the business in the event of an
incident.
The business continuity plan (BCP) is the practical manifestation of BCM, and the firm must ensure that the plan always works: it should be tested regularly for practicality and speed of implementation in case of an emergency. BCM governance is crucial, and as such, there should be a clearly designated owner responsible for designing actions and their execution, including communication with other parties.
The first step in BCM is to ensure senior-level commitment. The next step is to initiate the management process. After this, threats and risks should be identified and linked to the firm's key operational risks. Once these risks have been identified, actions should be taken to manage them as part of risk management. A business impact analysis is carried out to determine the terms of risk mitigation. Strategies and plans for mitigating these risks are then developed and implemented accordingly.
The business continuity plan (BCP) will be activated in the event of disruptions. A firm should ensure the following when responding:
Speed: A crisis can spread very fast (e.g., cyberattacks). It is, therefore, crucial to react quickly.
Competence: In a crisis, a suitable specialist should handle each recovery job. External experts should be contracted in case such skills are not found within the firm.
Transparency: The trust of key stakeholders should be maintained by always telling the truth and being open and honest, even in the face of a large operational loss.
A crisis response typically involves two teams:
The technical team assesses the risk event and restores normal processes as soon as possible.
A communications team (internal or external) handles media and stakeholder groups.
An incident typically unfolds in four phases:
Crisis: After an incident, the type and scale of the problem become apparent. Examples include cyberattacks, system outages, and natural disasters.
Emergency response: This can last for a few minutes, several hours, or even days. Experts must assess the situation and quickly decide how to proceed.
Recovery: If all goes as planned, essential operations will resume in recovery mode within the expected time frame. There are two traditional recovery measures:
i. A Recovery Point Objective (RPO) indicates how much data will be lost or have to be re-entered following an interruption; the frequency of data backups determines RPOs.
ii. A Recovery Time Objective (RTO) measures how much downtime a business can tolerate before essential operations must be restored.
Restoration: This is simply bringing things back to normal. Generally, this process begins within a few hours or days after the incident but may take longer, depending on the level of disruption.
Risk Transfer
i) External Insurance
Generally, external insurance reduces profit and loss volatility. The firm pays a regular premium in
exchange for compensation in case of a risk event. External insurance policies work best for operational risks with two characteristics:
They are fairly predictable, allowing for proper underwriting and pricing by the insurer, and
It is easy to transfer both the risk exposure and its consequences, so risk mitigation is effective.
There is a trade-off between the insurance premium and the volatility absorbed. Many firms tend to self-insure small losses, absorbing the associated volatility, and only seek external insurance to cover losses from extreme operational events. Any large potential operational risk event is therefore a natural candidate for insurance coverage.
In external insurance, the risk is not necessarily fully transferred, as the amount of compensation depends on the premiums paid. In some cases, the firm may experience delays from the insurer in settling claims.
ii) Outsourcing
Outsourcing involves transferring the execution of a process to a third party. By doing so, some of the risks associated with that process are transferred as well. For example, FinTech banks usually manage their own technology but outsource credit risk management, whereas traditional banks handle credit decisions in-house but outsource some of their ICT operations.
However, outsourcing creates third-party risk since the firm is exposed to the risk of failure of third-party controls. Furthermore, not all risks are transferable. The risk of accountability, for example, is not transferred through this process. Increasingly, outsourcing is perceived as a risk-sharing rather than a risk-transfer method. Reputational damage is another risk that cannot be outsourced.
Reputation Management
Operational risk controls and mitigation strategies can be implemented to protect a company's reputation. Recall that reputational risk has been left out of the definition of operational risk. This is because reputational risk is not necessarily caused by operational risk; both internal and external events can damage a firm's reputation.
One way to prevent reputational risk is to build and maintain customer confidence. The use of detective controls to identify operational failures and reduce their reputational effects is among the methods used to protect against reputational damage. Detective controls include monitoring customer complaints and social media mentions.
Rewarding good behavior and strong performance ratings can also help reduce potential financial and reputational losses. Firms should be careful when contracting third parties to avoid the wrong type of association.
Good reputational management comprises detective, preventive, and corrective measures. In case of an incident, a firm should communicate the following to its stakeholders:
Reason: Explaining how and why the incident occurred and transparently identifying the firm's responsibilities.
Remedy: Describing the actions taken to correct the issue and provide a satisfactory solution.
Identifying and prioritizing stakeholders is also part of the reputational management process. An organization's stakeholders are not all equally important or equally influential, so communication efforts should be prioritized accordingly.
An interesting relationship exists between resilience and reputation: Stakeholder engagement and
dialogue contribute to building the organization's reputation capital. In times of crisis, this capital can
serve as a cushion of goodwill to help reinforce the organization's resilience to unanticipated shocks.
Robust crisis management and resilience will likely improve a firm's reputation and vice versa.
Practice Question
considering various internal controls to enhance its risk mitigation efforts. The risk manager must decide which controls should be integrated into the system. Which of the following scenarios best exemplifies the use of preventive internal controls?
The correct answer is C.
Preventive internal controls are designed to prevent errors or fraud from occurring, while detective controls are designed to identify errors, irregularities, or fraud that have already occurred. Measures applied before errors or fraud can take place, such as the screening and training of employees, are preventive controls, as they aim to prevent potential fraud or errors from occurring.
D is incorrect. While encrypting sensitive data is a preventive control, cybersecurity monitoring activities are detective controls.
Reading 104: Risk Reporting
After completing this reading, you should be able to:
Identify roles and responsibilities of different organizational committees and explain how they use operational risk information.
Describe components of operational risk reports and explain best practices in operational
risk reporting.
Explain best practices for reporting risk exposures to regulators and external
stakeholders.
Monitoring operational risk gives decision-makers visibility into a firm's operational risk profile. It helps them assess the efficiency of the risk management framework and offers reassurance that the company operates within the parameters of its risk appetite. Decision-makers are alerted through risk reporting if these boundaries are crossed or if losses and incidents are bigger or more frequent than anticipated.
Both external and internal stakeholders use operational risk reporting. Internal audiences include the company's executive committee, the risk committee of the board of directors, the core operational risk function, and the pertinent business lines. The public, the regulatory bodies, and outside parties such as external auditors make up the external audience.
An organization's operational risk profile contains details about the type and degree of operational
risk exposure, prior risk incidents, risk appetite and risk indicators, risk-mitigation strategy, and
resilience measures.
The Risk Committee of the Board of Directors
The board risk committee plays a critical role in the management of operational risk within firms. The committee is responsible for setting a firm's risk appetite and overseeing the firm's operations to ensure that it operates within the limits set by this appetite. In order to do so, the board risk committee must be provided with key risk indicators associated with the firm's risk appetite, as well as information on the frequency and severity of risk events occurring in operations. The board risk committee can then use this information to make decisions on how to manage operational risks and determine whether adjustments to the firm's risk profile are needed.
The executive committee will be in charge of carrying out any improvements to better manage the firm's risk profile in accordance with instructions from the board risk committee.
The audit committee provides an additional layer of oversight and assurance in order to protect the organization from any potential losses that may occur as a result of operational risk. It is responsible for ensuring that internal audit activities are performed correctly, as well as ensuring that any weaknesses or vulnerabilities uncovered through these audits are communicated to senior management and the board.
In order to properly manage operational risk, the audit committee is tasked with providing input into
policies and procedure implementation and monitoring compliance with relevant regulations. In
addition, its other responsibilities are overseeing internal control systems, regularly evaluating
financial internal controls, and ensuring they meet industry standards. The audit committee should
also conduct regular meetings with management to review ORM issues, trends, performance, and
results.
Furthermore, the audit committee should review all reports from internal audits on a regular basis to
determine if there is any evidence of fraud or non-compliance with regulations. Moreover, it should
proactively monitor emerging operational risk trends. It does this by staying informed about changes
in regulatory requirements for operational risk management and any developments in technology or
general business practices which could lead to increased operational risk exposure.
Board members may hold dual roles while concurrently serving on the risk and audit committees.
The executive committee (ExCo) is a board subcommittee comprising top executives and elected board members. It serves as the entire board's steering committee. Its mandate includes determining the most important issues that should be addressed by the full board and enforcing board policies.
With regard to risk management, the ExCo oversees how the operational risk management (ORM) framework is employed. All information relevant to how this framework is used, or not used, is reported to the executive committee. This can include data such as risk events, the remediation status of conduct, culture, and control issues, and the action plans implemented in response to these risks.
The central operational risk function gathers all pertinent operational risk data from the business lines, including information on risk events, risk exposures, controls, indicators, the status of action plans, and changes to the risk profile brought on by new initiatives or developing trends. It summarizes this data to create aggregated, synthesized reports for the operational risk committee and to give the business lines feedback. Information should be evaluated, condensed, and presented in a way that helps different audiences and stakeholder groups make decisions.
The central ORM function's responsibility is to give an aggregated view of the many risks and event
types and their interactions, reflecting the organization's operational risk profile as comprehensively
as possible.
Coordination of the various reporting procedures by risk type and business line to produce a non-
duplicative yet comprehensive report is a crucial ORM function in operational risk reporting.
Information about operational risk is gathered at the business operations levels. It is more carefully
monitored by business line managers who keep an eye on the status of their action plans as well as
the type and severity of operational risk incidents that their business lines encounter. Additionally,
this data is communicated to the central ORM function so that it may be included in centralized reporting.
Many businesses struggle to strike the appropriate balance between too much and too little information in their risk reports. When risk reports contain too much information, the risk of missing or obscuring critical information increases, while too little information leaves decision-makers without an adequate picture of the risk profile.
Choosing which information to report to whom and in what format is one of the considerations in the
design of risk reporting. In most cases, high risks, near misses, and flawed critical controls will be
escalated to the next decision level without change, while other information will be summed up in
aggregated reports.
All reporting is done with the aim of identifying actionable risk mitigation solutions as well as ensuring alignment with the firm's risk appetite. Reporting offers a window into the business operations to make sure that objectives for risk acceptance, appetite, and mitigation are consistently met. In contrast to other risk categories, operational risk assessment is particularly difficult because much of the underlying information is qualitative rather than purely financial.
The Components of Operational Risk Reports and the Best
Practices in Operational Risk Reporting
Operational risk reporting is not standardized across organizations. Some businesses are more forward-looking and focus on risk outlook, key risk indicators, and action plans, while others place much of their emphasis on historical risk events, such as the frequency and severity of financial losses. Generally, an organization's reporting tends to be more forward-looking the more mature its risk management function is.
A comprehensive internal ORM report has at least seven components. Depending on the audience, the level of detail and aggregation of each component will vary.
Top 10 Risks
The top-10 list of operational risks is a commonly used reporting tool for company management. It prioritizes risks based on management's strategic objectives. This prioritization is based on the intensity of the risk within the business environment and any residual exposure that may exist due to an insufficiently effective control environment. For example, during the 2020 coronavirus pandemic, organizations typically ranked risks such as employee well-being and the additional hazards associated with remote working near the top of their lists.
In general, companies can use this report to keep track of potential threats associated with specific
types of operational risks. This list provides company leadership with valuable insight into situations that might require close monitoring or more extensive preventive measures. Furthermore, it can help identify areas where measures could be changed or improved in order to reduce overall risk exposure. Additionally, it serves as a useful reference point for comparison between different time periods.
Creating and regularly updating this top-10 risk document allows for better visibility into potential
sources of harm and improved prioritization of resources towards mitigating them effectively. It also
encourages staff to discuss existing issues and develop effective solutions to reduce any potential
disruption in day-to-day operations. Finally, it allows senior management to make sound data-driven
decisions when evaluating possible solutions while remaining conscious of organizational goals and
objectives.
Heatmap and Risk Register
When assessing operational risks, it is important to have a comprehensive risk register that lists the
different types of risks and their likelihoods and impacts. In order to visualize these elements in a
more efficient way, organizations often use heatmaps and risk registers.
Heatmaps are graphical representations of the identified risks and their potential associated losses that help an organization better assess, analyze, and manage its operational risks. Heatmaps also help illustrate the overall risk profile of an organization by showing how likely each type of risk is to occur and how severe its impact could be.
A risk register is an organized listing of all the different types of risks that an organization has
identified, along with their descriptions and associated likelihoods, impacts before and after controls
are applied, as well as any controls or mitigation measures implemented or planned to address each
risk. The typical format of a risk register often includes columns such as “Risk Description”, “Likelihood”, “Impact”, “Controls”, and “Risk Owner”.
As part of their RCSA process, organizations create these documents to allow for better assessment and prioritization of operational risks.
Using a heatmap and a risk register together allows organizations to gain greater insight into their overall operational risk profile.
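To make the structure concrete, the sketch below shows one possible, purely hypothetical way to represent risk register entries and derive heatmap buckets in Python; the field names, the 1-5 scales, and the red/amber/green thresholds are illustrative assumptions, not a prescribed format.

# Hypothetical risk register entries and heatmap bucketing (illustrative only).
from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str
    likelihood: int      # 1 (rare) to 5 (almost certain) - assumed scale
    impact: int          # 1 (minor) to 5 (severe) - assumed scale
    controls: str
    owner: str

def heatmap_zone(entry: RiskEntry) -> str:
    """Map a likelihood x impact score to a simple red/amber/green zone."""
    score = entry.likelihood * entry.impact
    if score >= 15:
        return "red"
    if score >= 8:
        return "amber"
    return "green"

register = [
    RiskEntry("Payment system outage", likelihood=2, impact=5,
              controls="Failover data centre, RTO monitoring", owner="IT Ops"),
    RiskEntry("Data entry errors in client onboarding", likelihood=4, impact=2,
              controls="Four-eyes check, input validation", owner="Operations"),
]

for entry in register:
    print(f"{entry.description}: {heatmap_zone(entry)}")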
Risk Appetite Metrics
A crucial component of operational risk reporting is the tracking of risk appetite and the monitoring metrics that go along with it. It enables the board to assess whether the company is functioning within its stated risk appetite.
Risk appetite measures, often known as "risk appetite KRIs," are measurements that show how well a company complies with the risk limits set in its operational risk appetite. Risk appetite measurements are usually provided as a single list when submitted to the board and senior management, without necessarily being broken down by business line or risk type.
KRIs can offer a detailed analysis of risk exposure in various activities or a thorough examination of
the contributing elements to a particular risk. When it comes to data collection, KRI selection, and
reporting, many businesses have discovered that repurposing what is already known and collected in
the organization into a comprehensive set of indicators is more effective and requires less time and effort than building new indicators from scratch.
Issues are another term for operational or control system problems that may or may not result in
incidents. Flags indicating lax controls, delayed action, or delays in a process are a few examples.
Issues should be classified by business line or as part of an identified emergent risk in order to make
reporting actionable.
One of the most important components of ORM reporting is reporting risk occurrences, losses, and
near misses. Many companies begin their reporting by outlining what incidents involving operational
risk occurred and how much each incident cost the company. Reports on operational risk events should include the size and frequency of incidents; frequency and severity per period, event type, and business line; a trend analysis; and, for larger incidents that exceed a particular threshold, a supplementary report.
Near-miss occurrences are included in reporting incidents in organizations with strong risk cultures
that emphasize ORM. Organizations assess the significance of close calls based on the potential impact they could have had.
Action plans are risk-reduction strategies created to strengthen the control environment. Corrective action plans are developed following an operational loss event. Operations also incorporate detective controls to anticipate potential issues. Preventive action plans are developed to prevent specific operational risk events above a firm's risk appetite.
The business-line owner is in charge of implementing controls, tracking action plans, and reporting on their progress.
The process of "horizon scanning" has been adopted by businesses to find new trends and potential risks. The board risk committee receives monthly or quarterly reports on these risks. Most of the emerging risks identified relate to changes in the business and technological environment as well as regulatory risks. In accordance with best practice, horizon scanning ought to consider factors that can alter emerging risks and draw attention to changes in volatility.
Operational loss data are heavily skewed away from the average: a relatively small number of high-severity loss events frequently account for the majority of operational losses. The largest losses may occur only 0.5% of the time yet account for 80% of all operational losses. Risk management resources should therefore focus on preventing and resolving major loss events.
Large risk occurrences and notable near misses that exceed the organization's risk threshold are
typically discovered very fast by management and the parties involved, and they must be escalated
right away to senior management for evaluation and action. Large losses must be identified and
published separately to prevent distorting the summary data concerning minor losses.
The majority of reported operational risk incidents involve small, frequent losses. A larger number of operational events are recorded when the incident reporting threshold is lower. Small and frequent losses should be identified and regularly analyzed to spot any patterns that would indicate a control breach. The average cost of these losses may be added to the price of services and products.
Reporting operational risk losses in relation to a benchmark can help management's decision-making
process become more targeted. Comparing similar entities across business units is easier when
operational losses are reported as a percentage of gross income, total costs, or total budget.
Averages are far less useful and can be deceptive when used in asymmetric distributions, such as
operational risk events. The median and the first and third quartiles of the distribution are preferred
to averages, and they also have the benefit of being simple to show and comprehend.
A small number of outliers frequently contribute to the bias in operational loss averages. It is
preferable in this situation to eliminate these huge tail losses from the dataset used to calculate the
average and instead report each of the large tail losses separately if the calculation of an average is
required.
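As a rough numerical illustration of why medians and quartiles are preferred for skewed loss data, and of reporting large tail losses separately, the sketch below uses made-up loss figures (in thousands); nothing here is taken from actual loss data.

# Illustrative only: a skewed set of operational losses (in $ thousands).
import statistics

losses = [3, 5, 8, 10, 12, 15, 20, 25, 40, 5000]      # one large tail loss

print("mean   :", statistics.mean(losses))             # dominated by the outlier
print("median :", statistics.median(losses))           # robust central measure
q1, q2, q3 = statistics.quantiles(losses, n=4)         # quartile cut points
print("Q1, Q3 :", q1, q3)

# Practice suggested in the text: exclude large tail losses from the average
# and report each of them separately.
body = [x for x in losses if x < 1000]
tail = [x for x in losses if x >= 1000]
print("mean excluding tail:", statistics.mean(body), "| tail losses reported separately:", tail)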
Information with regard to risk reporting is valuable when it deviates from the norm. Information's
value is derived from data patterns, concentrated regions of distributions, and separations between
observations. Establishing a baseline of "normality" against which deviations can be more easily and
accurately discovered can be done via trend analysis over two or more reporting cycles, often on a quarterly or annual basis.
Investigating the reality underlying the numbers and identifying what is going right and potentially
wrong in the business are two things that may be done through risk reporting. Various low, medium,
and high-risk scenarios that could change the loss profile are taken into account in a robust analysis of the risk outlook.
Unlike financial risk reporting, operational risk reporting encounters the extra difficulty of
combining qualitative data. Risk scores, color ratings, and other indications are discrete, qualitative,
and totally unsuitable for arithmetic treatment. Two risks rated "3" (moderate) are not always
equivalent to one risk rated "5" (severe) and one risk rated "1" (low). Risk ratings stated as numbers
are no more quantitative or additive than those expressed as colors despite the fact that they reveal
information regarding ordinal ranking. There are three possibilities to think about when combining
qualitative data:
Conversion and addition: Convert qualitative indicators into a single monetary unit that is additive, linear, and can subsequently be arithmetically aggregated. This strategy adds the non-financial effects of operational risks to their financial impacts by translating them into financial terms. The monetization of operational risk's non-financial effects also has the added benefit of increasing awareness of the significance of this risk type in comparison to financial risks.
Categorization: Report risk indicators and scores by category, classifying them according to color or score. This gives a fair representation of the risk profile and maintains the clarity of the reporting. The red scores are arranged as a "candle" at the top of the graphic to represent the idea that the longer the flame, the greater the danger.
Worst-case reporting: A data set's worst score, such as the worst of a combination of major risk indicators, is given as the total value, i.e., if one thing is red, everything is red. It can be suitable if the risk tolerance is low and the data acquired is trustworthy. This strategy has the drawback of potentially overstating the overall risk level.
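A minimal sketch of the last two aggregation options (categorization counts and worst-case roll-up), assuming a simple green/amber/red scale; the scale, ordering, and sample ratings are illustrative assumptions only.

# Illustrative aggregation of qualitative risk ratings (assumed 3-color scale).
from collections import Counter

ORDER = {"green": 0, "amber": 1, "red": 2}
ratings = ["green", "amber", "green", "red", "amber", "green"]

# Categorization: report how many indicators fall in each category.
print(Counter(ratings))                # e.g., Counter({'green': 3, 'amber': 2, 'red': 1})

# Worst-case reporting: the aggregate takes the worst individual rating.
worst = max(ratings, key=lambda r: ORDER[r])
print("aggregate rating:", worst)      # 'red' as soon as any component is red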
Combined Assurance
The goal of combined assurance is to coordinate the assurance methods used by internal audit and
outside assurance providers so that senior management and the audit committee receive accurate
information on governance, risk, and control management.
The second line-of-defense functions, which include the ORM function, among others, are the internal assurance providers. Depending on the organization, the legal, risk management, compliance, and information security functions may also act as internal assurance providers.
The following roles should be split among the three lines of defense for combined assurance:
First line: Assessment of risks and controls, testing of controls, and certification that controls are in place and operating effectively.
Second line: Supervision of the risk management tasks carried out in the first line of defense.
Third line: Periodic assurance activities that include internal audits in accordance with the audit cycle.
Building a practical ORM framework requires the use of quantitative methods, including modeling.
Many businesses use scenario analysis to supplement existing operational risk techniques because of
the uncertainty of operational risk and the lack of extensive historical data patterns.
Pillar 3 of the Basel regulatory framework addresses the public disclosure of risk and financial
information.
Basel mandates that banks compute their operational risk capital using operational risk-related data and disclose both qualitative and quantitative information.
I. Qualitative Information on Operational Risk Management
Presenting the governance and risk management structures that the entity has established to manage,
mitigate, or transfer its operational risk is the goal of this section of the reporting. Companies must
disclose the structure and organization of their ORM and control function, as well as the policies,
procedures, and standards for the management of operational risk. Firms must also describe the risk measurement and reporting systems they use.
II. Historical Operational Losses
Regulated entities must provide appropriate details on the total operational losses accumulated during the previous ten years. Each national supervisor has a set and specified reporting structure that offers additional direction on the disclosure. While providing the information in aggregate and excluding any sensitive or proprietary information, banks should also provide any additional relevant information that helps explain their loss experience.
III. The Business Indicator
This entails disclosing the business indicator and its necessary components, which serve as the basis for the computation of operational risk capital. Regulated firms are also expected to explain any important changes that have occurred during the reporting period and the primary causes of those changes.
What is not supported by evidence is regarded as nonexistent by regulators. A risk manager's verbal
declaration won't be accepted as confirmation by a regulator; there must be proof. Risk reporting
and documentation are crucial in demonstrating to regulators and the market that risk controls and
adequate risk governance mechanisms exist. You can accomplish this by compiling the minutes from
the meetings of the governance committees, such as the board, board risk committee, and executive
committee, and by capturing the issues, conversations, and decisions in the minutes approved for the
meetings.
Financial institutions are mandated in the majority of jurisdictions to alert their regulators of any substantial operational risk events or any conduct violations. This is in addition to alerting law enforcement of any fraud, wrongdoing, or terrorist activity both inside and outside the institution. The criteria for notifying regulators typically include:
Reputation criteria: Anything that negatively impacts the company's reputation.
Resilience criteria: Any circumstance that might compromise the business's ability to continue delivering its critical services.
Stability criteria: Anything that might have a negative impact on the financial system.
Regulators need openness and honesty from regulated institutions regarding the status of their
operational risks.
Businesses must also comment on how they manage and disclose their risks in their annual reports. Operational risk is becoming increasingly important in the annual reports of financial services firms. Businesses must appear open, aware of, and honest about their risk exposure while avoiding excessively alarming stakeholders about potential issues. This requires them to strike a careful balance in their disclosures.
To meet the demands of the market and some regulators, operational resilience reporting will soon
be added to reporting on operational risk. Some regulators will soon demand that businesses conduct
testing to ensure that they stay within the established impact tolerances for every significant
business service, make the necessary investments to allow these services to operate continuously
within the established impact tolerances, and report to the regulator on these elements.
Practice Question
Which of the following statements correctly describes the primary roles and responsibilities of
the Board Risk Committee, Audit Committee, and Executive Committee at GlobalBank?
A. The Board Risk Committee sets risk appetite, the Audit Committee monitors
sets risk limits, and the Executive Committee oversees risk management.
statements.
The correct answer is A.
The Board Risk Committee is responsible for overseeing the risk management framework, setting risk appetite, and ensuring that the bank operates within established risk limits. The Audit Committee is responsible for monitoring the integrity of financial reporting, ensuring the effectiveness of internal controls, and overseeing the internal and external audit functions. The Executive Committee is responsible for making high-level decisions, approving strategic plans, and ensuring the overall management of the bank in line with the board's direction.
Reading 105: Integrated Risk Management
After completing this reading, you should be able to:
Describe the role of risk governance, risk appetite, and risk culture in the context of an enterprise risk management (ERM) framework.
Summarize the role of Basel regulatory capital and the process of determining internal economic capital.
Describe elements of a stress-testing framework for financial institutions and explain best practices for stress testing.
Explain challenges and considerations when developing and implementing models used in stress testing operational risk.
In chapters 3 to 5, we looked at the four stages of the risk management cycle: risk identification, risk
assessment, risk mitigation, and risk monitoring. In chapter 4, we looked at the different quantitative
approaches and models used to analyze operational risk, approaches used to determine the level of
operational risk capital for economic capital purposes, and the practices for assessing operational
risk and resilience. In Chapter 3, we looked at risk governance, risk culture, and risk appetite in the
context of ORM. However, in this chapter, we look at these three elements in an Enterprise Risk Management (ERM) context. This chapter is not a repetition of what has already been covered in the previous chapters; rather, it presents a wider view of risk assessment frameworks and capital assessment in the financial sector. This chapter is divided into three major sections:
1. An overview of ERM: The risk management structure, risk governance, risk culture, and risk appetite elements applied to all risks across the enterprise and how they relate to one another.
2. Capital assessment: Regulatory capital, economic capital, RAROC, and the aggregation of capital across risk types.
3. Stress testing: The elements of a stress-testing framework for financial institutions and its application to operational risk.
The Role of Risk Governance, Risk Appetite, and Risk Culture
in the Context of an Enterprise Risk Management (ERM)
Framework
Enterprise risk management is a holistic approach to risk management where all risks are viewed together within a coordinated and strategic framework. Enterprise risk management (ERM) organizes and coordinates a firm's integrated risk management framework. It establishes policies and directives for managing risks across business units, provides senior management with overall control and monitoring of the organization's exposure to significant risks, and incorporates these risks into strategic decisions. ERM provides a broader and more consistent enterprise view of risk. Therefore, it pinpoints the significant threats to a firm's survival and its core operations.
Risk governance, risk culture, and risk appetite guide ERM. Risk governance defines the roles and
responsibilities of people in the three lines of defense and organizes decision-making and reporting,
usually through committees. Risk culture is all about the values and behaviors of people within an
organization. Risk appetite is about how much risk a firm is willing to take.
Risk Governance
The three lines of defense define the roles and responsibilities for the overall risk management of a firm.
The first line of defense comprises the staff and management of the business lines. It is responsible for identifying and managing the risks inherent in its day-to-day activities.
A risk owner is responsible for identifying, measuring, mitigating, and reporting risk. Risk owners are
responsible for making decisions to ensure an appropriate balance between risk and reward for the
firm. Risk owners have the authority to expose the firm to risk within the firm's risk appetite limits.
The second line of defense is responsible for the framework and overseeing the risk management
activities in the first line. The second line is responsible for establishing risk management methods, tools, models, and measurement methods, training the first line of defense, raising risk awareness, developing risk management policies, and ensuring effective risk management is implemented in the organization's activities and decision-making. The second line of defense is also responsible for challenging the first line's risk assessments and reporting.
In particular, the second line of defense comprises banks' credit risk management, market risk
management, and operational risk management departments. Also included are other oversight
functions, such as compliance or information security, and parts of hybrid functions, such as legal and human resources.
The third line of defense oversees the risk management activities in the first and second lines. Third-line reviews are usually conducted by the firm's internal and/or external audit teams and may also involve independent third parties. The third line of defense reports independently to the board of
directors.
The board risk committee is responsible for overseeing all risks across a firm. This committee operates independently of management and recommends risk-based decisions, risk exposure levels, and risk management actions to the full board. Terms of reference or a committee charter govern the operations of this committee.
Risk Culture
As we mentioned in Chapter 2, risk culture is inseparable from corporate culture and goes beyond
the culture of alertness and reporting of operational risk incidents, as well as the sharing of lessons
learned. From an enterprise-wide perspective, corporate culture is "what happens when no one is
looking." Corporate culture includes the values, beliefs, and behaviors that all employees adhere to
under senior managers' guidance and examples. A firm's corporate culture directly influences its
attitude and preferences when managing risks, from prudent to daring, from compliant to challenging.
Post-financial crisis reports emphasized that a lack of risk culture led to risk management failure in
large financial institutions. According to a seminal paper published in the Journal of Finance in 2013,
bank holding companies with a higher lagged risk management index have lower tail risk and higher
return on assets. T his aligns with the hypothesis that a robust and independent risk management
function can reduce tail risk exposures at banks. Other signs of a lack of risk culture include money
laundering and embargo breaches. The absence of a risk culture leads to dire consequences,
emphasizing the need for firms to establish and maintain a risk culture.
Risk culture influences the effectiveness of an ERM framework. It should be noted that the firm's
risk culture and governance arrangements reflect its risk appetite and tolerance.
Chapter 2 discussed the structure and best practices for determining a risk appetite definition and limits for operational risk and resilience. In the next section, however, we generalize risk appetite to all risk types across the enterprise.
Risk Appetite
Risk appetite is defined as the risks a firm is willing to take to meet its objectives. In the financial industry, banks willingly take financial risks such as credit, market, and liquidity risk. While pursuing their financial objectives, firms are also exposed to non-financial risks such as operational risk. The financial risks carry visible return premiums, i.e., credit, market, and liquidity premiums. However, risk-taking is limited even for these visible returns.
The creation and implementation of a robust risk appetite framework is a crucial part of any risk management practice. To define a company's risk appetite, one needs to produce a document called a "statement of risk appetite." This document outlines and brings together the needs of all stakeholders by acting both as a governor of risk and a driver of current and future business activity. The statement of risk appetite covers all risks in both qualitative and quantitative terms. A risk appetite framework is, therefore, a structure that is put in place to outline a firm's approach to setting, communicating, and monitoring its risk appetite.
In addition to managing risks, another fundamental role of ERM in financial services is to ensure the solvency and sustainability of an institution through appropriate capital funding that covers unexpected losses relating to any of the main risk classes. An enterprise risk management framework therefore considers two measures of capital:
Regulatory capital.
Economic capital.
The Basel Committee on Banking Supervision (BCBS) was formed by the central bank governors of the Group of Ten countries, with representatives from each member country's central bank and supervisory authority. As part of its responsibilities, BCBS sets guidelines for regulating and supervising banks in the G-10 countries and even in non-G-10 countries. BCBS's prudential regulation of the financial industry pursues several objectives, including the solvency and soundness of financial intermediaries and the sound governance of regulated firms.
BCBS has set regulatory capital requirements to ensure the solvency and soundness of all financial intermediaries. To support sound governance, banks must also meet requirements regarding their senior management's competence and experience. In addition, banks should monitor and report on their risk exposures.
In July 1988, Basel I recommended a minimum level of capital equivalent to 8% of the risk-weighted
assets (RWA) to cover unexpected credit losses. In 1996, due to the evolution of financial market
activities, Basel I extended regulatory capital to market risk using a Value at Risk (VAR) approach. In
2002, "Basel II" added regulatory capital for operational risk and reformed credit capital calculation
to use counterparty credit ratings. Basel regulations bear no legal ground. Instead, countries choose
to include the Basel standard through domestic laws and regulations. T he Basel II reforms introduced
Pi l l ar 1 – Regul atory Capi tal : Mandatory minimum level of capital banks require to
cover credit, market, and operational risks and the minimum liquidity ratio.
For operational risk, "Sound Principles for the Management of Operational Risk," last
updated in 2021.
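As a simple illustration of the original Pillar 1 minimum capital requirement of 8% of risk-weighted assets (the RWA figure below is assumed purely for illustration):
\[
\text{Minimum capital} = 8\% \times \text{RWA} = 0.08 \times \$500\text{ million} = \$40\text{ million}
\]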
The latest reform, "Basel III," incorporated the lessons learned from the 2007-2009 financial crisis and introduced minimum regulatory ratios for liquidity risk. In addition to the minimum capital requirements, Basel III introduced capital buffers and a leverage ratio.
In addition to meeting regulatory capital requirements, financial intermediaries must calculate their own level of capital, one that reflects both their risk profile and their potential needs to cover unexpected losses. Despite the efforts of regulators, the regulatory capital requirement may not fully reflect the firm's risk profile, so it may not serve as a reliable measure of risk. This is more evident when standardized approaches are used.
Economic capital is the amount of own funds (including equity and subordinated debt) a firm
estimates will be sufficient to cover unexpected losses arising from one or more risks.
Capital requirements for banks are largely determined by their credit ratings, which influence their
borrowing costs. In general, the higher the capital, the larger the buffer against losses, the better the
creditworthiness of the firm, and the lower its borrowing costs. The firm's economic capital is
calculated in the same way as a VaR based on its revenue distribution, taking into account the confidence level associated with its target credit rating.
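A minimal sketch of a VaR-style economic capital calculation of the kind described, assuming a simulated annual loss distribution and a confidence level tied to a target rating; the 99.95% level and the lognormal parameters are illustrative assumptions, not a prescribed methodology.

# Illustrative economic capital: a high quantile of the simulated loss
# distribution minus the expected loss, at a rating-linked confidence level.
import numpy as np

rng = np.random.default_rng(7)
annual_losses = rng.lognormal(mean=3.0, sigma=0.8, size=500_000)   # assumed, $m

confidence = 99.95                                  # assumed target-rating level
quantile = np.percentile(annual_losses, confidence)
expected = annual_losses.mean()
economic_capital = quantile - expected

print(f"Expected loss        : {expected:,.1f}")
print(f"{confidence}% quantile : {quantile:,.1f}")
print(f"Economic capital     : {economic_capital:,.1f}")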
A financial firm must allocate economic capital for the risks it generates for each activity it
undertakes. Capital is an expensive source of funding. In order to determine the risk-return trade-off
of their products and services, large banks calculate their RAROC, which will be discussed in the
next section.
Risk-Adjusted Return on Capital (RAROC)
RAROC is mostly used in credit risk. This section looks at it from an ERM perspective. Firms measure their profitability in the form of return on equity (ROE) or return on capital (ROC). ROC is net income divided by invested capital, which is similar to ROE except that debt is included in the denominator. RAROC is a risk-adjusted version of ROE that banks use to compare lending activities with different risk levels.
In contrast to ROC, RAROC adjusts net income for the expected losses (EL) generated by risk, and the capital amount used in the denominator is economic capital rather than total invested capital.
RAROC is more straightforward for credit activities, where EL can be estimated using historical data. In contrast, market risk EL is less straightforward and is often set to 0. Operational risk is generally not measured with RAROC since it is difficult to attribute explicit revenues to operational risk and to allocate operational risk capital to individual activities.
Different levels of granularity can be used to estimate RAROC, depending on the scope of the analysis. Revenues attributable to a transaction, a product, or a business line can be defined as RAROC revenues. For expected losses (EL), these can be the credit ELs on a loan portfolio, for example. A quantitative estimate of the bank's funding costs is also needed for each transaction, product, and type of client.
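A commonly used form of the RAROC ratio, consistent with the description above (the exact definitions of revenues, costs, and capital vary by institution), is:
\[
\text{RAROC} = \frac{\text{Revenues} - \text{Costs} - \text{Expected losses}}{\text{Economic capital}}
\]
For example, with assumed figures of \$50 million in revenues, \$30 million in costs, \$5 million of expected losses, and \$100 million of allocated economic capital, RAROC = (50 - 30 - 5)/100 = 15%.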
Once capital for each risk type has been identified, what follows is to assess aggregate capital needs.
Since not all risks will materialize simultaneously, diversification is allowed across various risk
classes: market risk, credit risk, and operational risk. Diversification can be of two types:
Inter-risk diversification: Diversification that involves different risk classes.
Intra-risk diversification: Diversification within a single risk class.
To determine the risk capital for a particular business unit within a larger firm, each unit is typically viewed on a stand-alone basis. The assumption that each risk category follows different dynamics can result in an aggregated capital level that is lower than the sum of the stand-alone capital amounts for each risk category. The difference between the two makes up the diversification benefit. That is because the correlation between returns is likely to be less than +1. As such, the risk capital for the firm should be significantly less than the sum of the stand-alone risk capital amounts for the individual risks.
Operational risk, in particular, can add diversification benefits to aggregate capital because of its low
correlation with other risk classes. It can be observed that credit and market risk correlations tend
to increase during a crisis; operational risk, on the other hand, moves independently. This implies
that we can have large diversification benefits when operational risk is aggregated with other risks.
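A minimal sketch of correlation-based capital aggregation and the resulting diversification benefit; the stand-alone capital amounts and the correlation matrix are assumed for illustration, and a variance-covariance style "square-root" aggregation is only one of several possible methods.

# Illustrative aggregation of stand-alone capital amounts across risk classes.
import numpy as np

standalone = np.array([60.0, 30.0, 20.0])   # credit, market, operational ($m), assumed
corr = np.array([                            # assumed inter-risk correlations
    [1.0, 0.6, 0.2],
    [0.6, 1.0, 0.2],
    [0.2, 0.2, 1.0],
])

aggregated = np.sqrt(standalone @ corr @ standalone)   # square-root aggregation
benefit = standalone.sum() - aggregated

print(f"Sum of stand-alone capital : {standalone.sum():.1f}")
print(f"Aggregated capital         : {aggregated:.1f}")
print(f"Diversification benefit    : {benefit:.1f}")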
Stress testing requires firms to estimate expected losses under extreme economic conditions while also considering idiosyncratic scenarios. In the US, the regulatory focus has shifted from the estimation of capital to the stress testing of operational risk alongside other risks. While both economic capital and regulatory capital are through-the-cycle concepts, stress testing is a point-in-time process. In the next section, we discuss the basics of stress testing in the financial industry for operational risk and other risk types.
Elements of a Stress Testing Framework for Financial
Institutions and Best Practices for Stress Testing
Stress testing is simply a type of testing used to determine a system's or an entity's stability. In practice, it involves stressing that system or entity beyond its normal operational capacity, usually to observe how it behaves under severe but plausible conditions.
Stress tests took center stage following the 2007-2008 financial crisis. They developed as a means of assessing the ability of financial institutions to withstand adverse events. The idea was to identify and report on a bank's capital sufficiency so as to avert failures. Stress tests have since become entrenched tools to gauge the banking sector's resilience. The emphasis on stress tests to assess and replenish bank solvency reflects the fact that capital determines a bank's ability to weather losses and continue to lend. Until the global financial crisis, stress testing was largely limited to banks following the internal ratings-based (IRB) approach to credit risk capital requirements under Basel II. They were required to stress test their internal rating models under different scenarios, including market risk and liquidity shocks.
BCBS released a publication in May 2009 describing why stress testing failed during the great financial crisis. Four weaknesses were identified:
1. Scenario selection: Insufficient severity and missing correlations between scenarios affected results, which could not comprehensively represent the aggregate risks across the bank. Scenarios were undertaken at a business level and were unrelated to capital adequacy and liquidity.
2. Risk coverage: Certain products and exposures, such as complex hedging strategies, were not covered under credit risk, liquidity, and contingent risk. Furthermore, funding and reputational constraints were not tested.
3. Stress-testing methodologies: Several risk management tools relied on historical statistical relationships to assess risks. Similarly, the banking sector lacked a firm-wide approach and focused heavily on models calibrated to historical data. This reliance on historical information meant the methods did not capture forward-looking risk exposures.
4. Use of stress testing and integration in risk governance: Stress tests were not included in a global risk framework, as other businesses doubted the credibility of the analysis. Senior management was not involved enough, implying the absence of a firm-wide stress-testing culture.
In response to the identified stress testing weaknesses, BCBS published stress testing principles
which include:
1. Stress-testing frameworks should have clearly articulated and formally adopted objectives.
2. Stress-testing frameworks should include an effective governance structure.
3. Stress testing should be used as a risk management tool and to inform business decisions.
4. Stress-testing frameworks should capture material and relevant risks and apply sufficiently
severe stresses.
5. Resources and organizational structures should be adequate to meet the objectives of the
stress-testing framework.
6. Stress tests should be supported by accurate and sufficiently granular data and robust IT
systems.
7. Models and methodologies to assess the impacts of scenarios and sensitivities should fit the
purpose.
8. Stress-testing models, results, and frameworks should be subject to challenge and regular
review.
9. Stress-testing practices and findings should be communicated within and across jurisdictions.
Stress-Testing Taxonomy
A stress-testing taxonomy helps in understanding the evolution of stress testing and the range of stress-testing practices. It can also help banks choose appropriate strategies for stress-test planning and execution. Two dimensions are commonly used:
1. Quantitative–Qualitative Approach Dimension: Encompasses methodologies that range from highly quantitative to highly qualitative. Quantitative approaches relate to the use of models, in which a parameter or input is shocked to see how the model reacts. Qualitative approaches include scenario analysis, like macro stress testing, as well as non-model-based evaluations, like reverse stress testing.
2. Measurable–Immeasurable Risk Dimension: Encompasses both fact-based, measurable risks and judgment-based, immeasurable risks. On the measurable end, we have approaches that assign probabilities to well-defined outcomes; on the immeasurable end, we rely on expert judgment.
Types of Stress-testing
In this chapter, we will discuss three types of stress testing, i.e., parameter, macroeconomic, and reverse stress testing.
Parameter/model stress testing involves testing the robustness of a model by changing the value of
its parameters. It applies quantitative methods to analyze measurable risks. A model parameter is
stressed to see how a model, bank, or portfolio fares under stressed conditions.
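A minimal sketch of a parameter stress test, assuming a lognormal severity model whose volatility parameter is shocked; the parameter values and the 20% shock are illustrative assumptions, not calibrated figures.

# Illustrative parameter stress: shock a lognormal severity parameter and
# compare a high percentile of the single-loss distribution.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 10.0, 1.2                         # assumed baseline lognormal parameters

def p99_severity(mu, sigma, n=200_000):
    losses = rng.lognormal(mean=mu, sigma=sigma, size=n)
    return np.percentile(losses, 99)

base = p99_severity(mu, sigma)
stressed = p99_severity(mu, sigma * 1.2)      # 20% shock to the volatility parameter

print(f"99th percentile loss, base     : {base:,.0f}")
print(f"99th percentile loss, stressed : {stressed:,.0f}")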
To test the financial resilience of the largest banks, macroeconomic scenarios are stressed, including variables such as GDP growth, unemployment, and interest rates.
Both measurable and immeasurable risks and the dependency structure are stressed in macro stress
testing. It applies both quantitative and qualitative methods. This test aims at understanding how banks will fare in adverse macroeconomic conditions. This test assumes that models produce
accurate projections, and its focus is on how changes in macroeconomic factors affect their output.
Unlike parameter/model testing, whose quantitative analysis focuses on statistical scenarios such as
a "standard deviation event," macro stress testing seeks to estimate the outcome based on a set of
macroeconomic scenarios.
Reverse stress testing usually applies qualitative methods and seeks to analyze immeasurable risks.
Recall that stress testing involves generating scenarios and then analyzing their effects. Reverse
stress testing starts from the opposite end and tries to identify circumstances that might cause a firm
to fail.
By using historical scenarios, a bank identifies past extreme conditions. Then, the bank determines how much worse than the historical observation the scenario would have to be to cause the bank to fail. For instance, a bank might conclude that a shock twice the size of the 2005-2006 US housing bubble collapse would make the bank fail.
A reverse stress test primarily aims to assess operational resilience instead of determining the
financial resources required to weather extreme conditions. Reverse stress testing also helps banks
determine what mitigation actions and controls they need to implement and whether they need to set
up triggers for future actions if the economy or the firm itself begins to follow the path of the
scenarios explored.
Financial institutions have largely been practicing macro stress testing since the great financial crisis. However, events such as the 2020 coronavirus pandemic produced shocks that far surpassed any regulatory macroeconomic stress tests. Nowadays, operational risk stress testing involves macro testing and parameter testing and extends beyond operational risk quantification. Stress testing aims to understand how risk changes over time and with changing macroeconomic conditions. Through this understanding, banks and regulators can project losses under adverse conditions.
Developing these stressed operational risk estimates requires banks to establish comprehensive operational risk stress-testing frameworks that make it possible for them to forecast losses under different macroeconomic scenarios.
An operational risk stress-testing framework should apply appropriate approaches, including regression analysis, loss distribution approach (LDA) forecasting, and scenario analysis, depending on the assumptions that best fit the bank's loss data and risk profile.
The Fed initiated the Comprehensive Capital Analysis and Review (CCAR) for the largest banks.
CCAR's primary objective is to ensure that a repeat of the 2007-2009 financial crisis is avoided by
regularly giving regulators better visibility into stress testing results of bank balance sheets.
Under CCAR, banks produce an operational risk loss forecast based on quantitative and qualitative techniques. Its elements include:
Expected non-legal loss forecast module: This module consists of a quantitative model that projects a loss forecast for each risk category, refined by expert judgment.
Legal loss module: This module forecasts immaterial "bulk" litigation losses, conditional on the macroeconomic scenario, as well as material litigation exposures.
Scenario-based elements capture the bank's idiosyncratic operational risk profile and bank-specific risk exposures derived from storylines.
When developing the methodology for the model component of the expected non-legal loss forecast
module, banks have the challenge of determining whether their operational risk losses are affected
by macroeconomic factors. This debate is yet to be settled. Some argue that operational risk is
largely independent of the macroeconomic cycle, while others hold that it is not.
In spite of this challenge, banks should develop a well-structured approach to linking macroeconomic
conditions with operational risk losses. Banks are unlikely to find a direct correlation between all
operational risk loss types and macroeconomic variables.
Banks can develop macroeconomic-based stress-testing models that model total operational risk
losses or the frequency and severity of operational risk losses. In general, banks prefer modeling the
frequency and severity of operational risk losses using two methodologies:
Regression models: These models project losses as a function of
macroeconomic conditions. Here, frequency and severity are modeled separately and then combined into a loss estimate.
Loss distribution approach (LDA): Some LDA models, e.g., frequency and severity
models, project losses based on Monte Carlo simulations. These models are applied under
the AMA approach for modeling regulatory and economic capital. However, they are not, by themselves, sensitive to macroeconomic conditions.
LDAs lack risk drivers; thus, they assume that a firm's risk exposure remains the same over time.
For this reason, traditional LDAs are preferred when regression models have failed to produce any
results.
The above assumption of the LDA does not align with the stress testing objective, which is to
understand how an organization's risk exposure changes over time to reflect the changing
macroeconomic environment and the broader operating environment. The conditional LDA is a trade-
off between the simple LDA and a full-blown regression-based stress test. Regression is used to
model frequency, which is more sensitive to macroeconomic conditions, and its modeling is easier.
On the other hand, the severity distribution is assumed to remain constant. To stress severity, a
higher percentile of the distribution reflecting the firm's expectations for average losses per event
under stressed conditions is selected based on expert judgment. The selected losses are then combined with the stressed frequency to project total losses.
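The sketch below illustrates the conditional LDA idea just described: loss frequency is driven by a macroeconomic variable through a regression-style relationship, while severity is held at a stressed percentile chosen by expert judgment. All coefficients, the unemployment driver, and the lognormal severity parameters are hypothetical assumptions for illustration, not the methodology of any particular bank.

```python
# Minimal conditional LDA sketch: Poisson frequency with a macro-driven mean,
# lognormal severity fixed at an expert-chosen stressed percentile.
import numpy as np

rng = np.random.default_rng(42)

def stressed_annual_losses(unemployment_rate, n_sims=100_000):
    # Frequency regression: log of expected event count rises with unemployment (assumed).
    lam = np.exp(1.2 + 0.15 * unemployment_rate)
    counts = rng.poisson(lam, size=n_sims)

    # Severity: lognormal distribution held constant, but the loss per event is set
    # at a stressed percentile (here the 75th) chosen by expert judgment.
    mu, sigma = 11.0, 1.8                                  # hypothetical parameters
    stressed_loss_per_event = np.exp(mu + sigma * 0.674)   # ~75th percentile of lognormal

    return counts * stressed_loss_per_event

baseline = stressed_annual_losses(unemployment_rate=4.0)
stressed = stressed_annual_losses(unemployment_rate=10.0)
print(f"Mean annual loss, baseline: {baseline.mean():,.0f}")
print(f"Mean annual loss, stressed: {stressed.mean():,.0f}")
```

Because severity is fixed, only the macro-sensitive frequency drives the difference between the baseline and stressed projections, which is the trade-off the conditional LDA accepts.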
Expert judgment and data can also be combined with the conditional LDA. However, it is challenging for
the conditional LDA to justify the severity percentile choice. The 99.9th percentile used for regulatory
capital purposes is inappropriate for stress-testing purposes. A stress test aims to determine
losses under severe but plausible conditions rather than in the extreme tail of the distribution.
Consequently, when the severity of losses is set at the same percentile level as capital, a firm is
likely to overstate its stressed losses.
Regulators have addressed this issue by removing percentile requirements on stress testing. Among the
stress testing principles, principle 4 addresses this issue – Stress-testing frameworks should capture
material and relevant risks and apply sufficiently severe stresses.
Modeling operational risk severity proves more challenging than modeling frequency. When modeling
frequency, loss counts are relatively stable and less sensitive to extreme observations.
On the contrary, the severity of losses is highly affected by tail events, and therefore, modeling the
distribution of severity losses can be more complex. The mean of severity is thus not a
comprehensive estimator, which limits banks' ability to rely on it. Instead,
banks can choose to use the median severity or any other appropriate approach.
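The short simulation below illustrates why the sample mean of severity is an unstable estimator under heavy tails while the median is more robust. The lognormal parameters and sample size are hypothetical and chosen only to make the instability visible.

```python
# Hypothetical heavy-tailed severity samples: the mean swings across trials, the median does not.
import numpy as np

rng = np.random.default_rng(0)
for trial in range(3):
    severities = rng.lognormal(mean=10.0, sigma=2.5, size=2_000)  # assumed parameters
    print(f"Trial {trial}: mean = {severities.mean():,.0f}, median = {np.median(severities):,.0f}")
```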
As with frequency, regression analysis of average loss severity is used by some banks to estimate
models incorporating macroeconomic variables in order to account for adverse economic conditions.
Experts should refine the estimates of stressed losses using scenario analysis to ensure the model
adequately covers all material risks. This is very useful, especially when dealing with operational
risks with little historical data or with changing, unpredictable risks. To refine a model, experts and risk
owners should review and challenge it, including the macro drivers embedded in the frequency regression.
Experts should identify and discuss any changes that might invalidate the historical loss experience as a guide to future losses.
Bank holding companies (BHCs) should estimate legal costs likely to occur under baseline and
stressed conditions. Even though legal losses are considered part of operational losses, they should
be modeled separately.
There is a challenge associated with legal risk. Legal risk is characterized by the delay between
adverse macroeconomic conditions and legal losses suffered by banks. It may take years for business
practices that result in litigation to materialize in actual settlement losses. Consequently, forecasts
developed under this module must take into account lags between factors leading to the estimate and
actual losses.
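A hedged sketch of the lag issue just described: legal losses are regressed on a macroeconomic driver from several years earlier. The data-generating process, the three-year lag, and the statsmodels-based OLS fit are illustrative assumptions, not the CCAR legal loss methodology.

```python
# Illustrative distributed-lag style regression: legal losses respond to unemployment
# observed three years earlier (all data and coefficients are simulated).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = 30
unemployment = 5 + rng.normal(0, 1.5, size=years)

lag = 3
lagged_unemployment = np.roll(unemployment, lag)
legal_losses = 100 + 40 * lagged_unemployment + rng.normal(0, 20, size=years)

# Drop the first `lag` observations, which have no valid lagged driver.
X = sm.add_constant(lagged_unemployment[lag:])
y = legal_losses[lag:]
model = sm.OLS(y, X).fit()
print(model.params)  # intercept and sensitivity to the 3-year-lagged unemployment rate
```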
risk profile and bank-specific risk exposures derived from storylines. The module should address identified bank-specific vulnerabilities.
Practice Question
Which of the following statements accurately reflects the primary objective of the Basel
Committee on Banking Supervision (BCBS) in its prudential regulation of the financial
industry?
system.
D. To establish guidance for the regulation and supervision of banks in the G-10
The correct answer is C.
There are three objectives of BCBS prudential regulation: 1) ensuring the solvency and liquidity of financial
institutions, 2) protecting consumers, and 3) safeguarding the stability of the financial system. Among these objectives, the primary focus of the BCBS is the solvency and
soundness of the banking system.
Option D is incorrect because, while the BCBS was established by the central bank
Governors of the G-10 countries, its guidance applies to the regulation and supervision of banks well beyond the G-10.
Reading 106: Cyber-resilience: Range of Practices
After completing this reading, you should be able to:
Define cyber-resilience and compare recent regulatory initiatives in the area of cyber-
resilience.
Describe current practices by banks and supervisors in the governance of a cyber risk management framework.
Explain and assess current practices for sharing cybersecurity information between
banks, regulators, and third-party service providers.
With the increase in the frequency, severity, and complexity of cyber incidents, many legislative,
regulatory, and supervisory initiatives have been launched. For instance, the G7 came up with the Fundamental
Elements of Cybersecurity for the financial sector in October 2016. In the European Union (EU),
the European Commission (EC) developed the Fintech Action Plan, which championed improvements in the cyber resilience of the EU financial sector.
The Basel Committee on Banking Supervision (BCBS) established the Operational Resilience Working
Group (ORG) to address cyber risk in coordination with other international bodies. The Committee
mandated the ORG to assess observed cyber-resilience practices at supervisory authorities and regulated
firms.
The primary objective of this chapter is to identify, describe, and compare different types of
regulatory and supervisory cyber-resilience practices across different jurisdictions based on the
input of the Operational Resilience Working Group (ORG) to the FSB survey in April 2017. This
report was publicly issued in October 2017. The report covered cybersecurity regulations, guidance, and supervisory practices.
Definition of Cyber Resilience
The Basel Committee on Banking Supervision (BCBS) uses the definition of cyber resilience from the
FSB Cyber Lexicon as "the ability of an entity to continue to execute its purpose by anticipating and adapting
to cyber threats and other appropriate variations in the environment and enduring, containing, and
rapidly recovering from cyber incidents."
Cyber resilience expectations in many jurisdictions are based on existing IT and IT risk guidelines. These
guidelines are outlined in various regulatory standards that communicate a jurisdiction's expectations
and promote good practice. The guidelines touch on governance, IT recovery and management, and related areas.
Appropriate cyber risk management guidelines are grounded in information security. A sizeable number of
jurisdictions have issued guidance concerning information security.
In areas where specific cybersecurity regulations are absent, supervisors encourage
regulated organizations to implement international standards and to follow prescribed guidance and
supervisory practices aligned with national cyber agencies' initiatives.
Some jurisdictions, however, develop standards that the financial sector must comply with. For instance,
the Australian Prudential Regulation Authority (APRA) has issued a prudential standard to ensure that
APRA-regulated organizations take measures to be cyber-resilient. Such organizations must maintain an information security capability commensurate with their threat exposure.
Cyber-governance
Most regulators have instituted guidance or regulations with different levels of maturity. These
regulations generally touch on enterprise IT risk management. Nevertheless, the regulations do not
always address cyber-governance specifically. To address this challenge, supervisory expectations and practices were identified and analyzed in each of the
following areas:
1. Cyber-security strategy.
2. Management roles and responsibilities.
3. Cyber risk awareness culture.
4. Cyber-security architecture and standards.
5. Cyber-security workforce.
1. Cyber-security Strategy
Most regulators do not require organizations to develop a stand-alone cyber-security strategy. However,
organizations are expected to have a board-approved information security strategy, policy, and
procedures as part of effective oversight of technology. For instance, most European
jurisdictions require that the cyber risk strategy be addressed by the organization-wide risk
management framework and information security setting, which is monitored and reviewed by senior
executives.
Supervisory approaches to cyber-security strategy fall into the following regulatory types:
Issuing requirements, either sector-specific or applicable across multiple industries, with which financial institutions must comply. This
method is appropriate in emerging market economies with significant homogeneity in their
banking systems.
3. Examining whether financial entities possess an IT strategy and the accompanying security policies.
A sizeable number of jurisdictions have issued guidance and requirements on the roles and responsibilities of the board of
directors (BoD) and senior management. Some prioritize the BoD and senior
management in overseeing business technology risks. However, other jurisdictions regard cyber-
governance as a risk that must be addressed in the existing risk management structures.
However, a significant number of jurisdictions recognize the importance of the roles and
responsibilities of the BoD and senior management in cyber governance and controls. For instance,
in the US, EU, and Japan, some guidelines encourage G-SIBs and D-SIBs to enforce a well-defined and
risk-sensitive management framework based on the initiatives of the BoD.
Most regulators have adopted the 3LD (three lines of defense) risk management model to monitor
cyber-security risks and controls. Banks that are not required to implement the 3LD model must still define
responsibilities without leaving any gaps.
The degree of 3LD implementation varies significantly. In almost all jurisdictions, the first and second lines of
defense are emphasized more than the third line, which limits the effectiveness of the overall control framework.
3. Cyber Risk Awareness Culture
Organizations are expected to ensure that staff at all levels are aware of cyber risk and of the existing risk culture. Most regulators in different jurisdictions have
laid down the importance of risk awareness and risk culture for staff and management hierarchies.
Regulatory requirements include increasing cybersecurity awareness and other staff-related issues
in regulated entities. In other jurisdictions, regulators require incorporation of cyber training in all
phases of employment, from recruitment to the termination. Employers may require non-disclosure
clauses within the staff agreements during training sessions. Moreover, some jurisdictions may
require employees to verify their credentials at regular intervals to avoid insider threats.
In some jurisdictions, regulators determine whether banks have effective processes and controls
that ensure that employees, contractors, and third-party dealers understand their roles and
responsibilities to reduce the risk of theft, fraud, or misuse of the institution’s facilities.
Most regulators advocate for the establishment of a common risk culture to ensure effective
cyber-risk management.
A small number of jurisdictions highlight controls and supervisory guidance on cyber-security
architecture. For instance, in Saudi Arabia, cybersecurity architecture is assessed solely through periodic
self-assessment.
Workforce characteristics, such as skills and competencies, the regulatory framework, and other practices,
differ across jurisdictions. Some jurisdictions have unique IT standards that cover the IT
workforce's responsibilities and the information security functions, specifically the
cybersecurity workforce and its training. The standards touch on the assessment of team division,
staff expertise, training procedures, funding, and resource allocation to a firm's cybersecurity.
Many regulators check the cybersecurity workforce through on-site inspections, where they assess the sufficiency and qualifications of staff.
Generally, there exists a wide range of practices and regulatory expectations surrounding the
cybersecurity workforce. Some jurisdictions have not formulated any regulatory
expectations. In other jurisdictions, the regulatory requirements are limited to supervisory goals.
Therefore, there may be no workforce skill and training assessment by the cyber-security
supervisors. Nevertheless, countries such as Singapore and the UK have issued designated guidance on the cyber-security workforce.
The approaches used to assess cyber-resilience vary across jurisdictions. However, most of the
assessment focuses on cyber risk in the context of scale, complexity, business model, and
previous findings. Afterward, the organizations are categorized depending on the supervisory
initiatives. A supervision program is then chosen, concentrating on financial and operational matters.
Some jurisdictions, such as the EU, have specific guidance on when a cyber-security review is
necessary. Triggers include an organization's risk assessment and results from on-site inspections or off-site reviews.
Many jurisdictions conduct both on-site and off-site reviews and inspections of regulated
organizations' information security controls. These reviews assess organizations' compliance with the
regulatory standards. These assessments are done either as part of general technology or risk reviews or as dedicated cybersecurity reviews covering
frameworks, controls, third-party arrangements, training, monitoring and detection, and response and recovery.
Industry engagement aims to influence industry behavior or to get feedback and views on
regulatory work. Industry engagement can be done using conferences and other methods, ensuring
the outreach of regulated entities and industry participants. Some jurisdictions incorporate third-
party service providers in the engagement through events with regulators, supervisors, industry, and
third-party services.
Most jurisdictions acknowledge the significance of mapping and classifying business services and supporting assets
in line with requirements and regulations, together with an evaluation of whether appropriate controls have been instituted effectively.
Penetration Testing
Cyber-security controls are executed via risk-based decisions against a regulated institution’s risk
appetite. Conventionally, the regulated entities test information security controls applied to
hardware, software, and data to prevent, detect, respond, and recover from cyber-attacks.
On the other hand, supervisors review and challenge the regulated organizations' methods for
testing the controls and remediating the issues identified. This includes reviewing survey
responses, threat and vulnerability analyses, risk analyses and audit reports, and control testing reports.
Some jurisdictions that have developed standardized penetration tests are the ECB, the Netherlands,
and the UK. The tests are voluntary and funded by regulated organizations and are mostly aimed at
larger and more systemic institutions. Most of these tests target a regulated
organization's protective and detective cyber-resilience, while others focus on the response and
recovery abilities.
reviewing these controls. Some jurisdictions utilize taxonomies of controls to determine
whether there are gaps in their supervisory approach coverage. However, the taxonomies differ in scope, covering areas such as
frameworks, business continuity management strategies, IT disaster recovery arrangements, and data protection.
Many jurisdictions require institutions to develop a framework or policy covering prevention, detection,
response, recovery arrangements, and the reporting of threats. For instance, there is an incident
management guideline in the US. It entails identification of the source of the compromise, analysis of its impact, containment, and recovery.
The analysis of a regulated organization's incident response and recovery plans concentrates on the
plans that were initiated, the implementation of those plans, and the preservation of data and of specific actions for crucial
technology.
Some jurisdictions, such as Australia and Belgium, conduct a post-incident study by discussing the incident and the lessons learned with the affected institution.
Apart from testing, most supervisors and banks conduct training exercises and practices to prepare
for responding to an incident. After the joint exercise, a summary is published to enable others to
learn.
incidents, surveys, penetration tests, and on-site inspections. These metrics are not comparable to
standardized quantitative metrics for financial risk and resilience. However, they act as indicators
that provide information on the regulated entities' approach to establishing and ensuring cyber-resilience.
Moreover, the supervisory authorities can depend on the regulated entities' management information and self-assessments.
Conventionally, the regulators and the regulated institutions in different jurisdictions use retrospective (backward-looking) indicators.
These indicators are usually presented to the Board of Directors and executives as part of regular risk reporting.
The use of retrospective indicators is suitable for entities operating in a relatively stable risk
environment over time and significantly independent from external impacts. However, due to the
dynamism of cyber risk, an entity's response and protective measures change over time. Some jurisdictions therefore also use forward-looking
indicators as direct and indirect metrics of resilience. The forward-looking indicators show whether an entity is prepared for emerging threats.
Many jurisdictions promote the sharing of cyber-security information among banks, regulators, and security agencies, as
described below.
There are five types of information sharing: sharing among banks, sharing between banks and
regulators, sharing among regulators, sharing from regulators to banks, and sharing with security
agencies.
Sharing among regulators is least observed because of the less formalized nature of regulators' communication channels.
The information regulators and banks share may include information on cyber threats, cyber-
security incidents, regulatory and supervisory responses in case of cyber-security incidents, and
cyber threat identification. Among the information shared, information on cybersecurity incidents is
most broadly observed in sharing between banks, regulators, and security agencies.
Some jurisdictions have established guidelines on sharing cybersecurity information for more
effective sharing by banks and regulators. However, in jurisdictions with established information
sharing among banks, less information sharing from banks to regulators is observed,
because the existing bank-to-bank sharing model reduces the need for it.
Simultaneously, in jurisdictions with an effective mechanism of information sharing from the banks
to regulators, there is less information sharing with the security agencies due to the assignment of
responsibilities for cybersecurity information processing among regulators and security agencies in
a given jurisdiction.
Banks share information such as cybersecurity threats with peer banks through approved channels
so that peer banks can respond in time to a similar threat. The regulators are not directly
involved in bank-to-bank information sharing. However, they have a role in establishing voluntary
sharing mechanisms for cyber vulnerability, threat, and incident information and may facilitate such sharing.
A proportion of the jurisdictions have developed a public sector platform for information sharing,
while others encourage the private sector establishment of information-sharing organizations. For
instance, Brazil, Japan, and Saudi Arabia require banks to share information among the banks through
regulations and mandates. Moreover, some jurisdictions have established public or private forums or information-sharing platforms.
The extent of information sharing and collaboration among banks depends on the structure and maturity of the financial sector in each jurisdiction.
Different authorities develop reporting requirements for different reasons depending on their
mandate, such as consumer protection. In almost all jurisdictions, reporting cyber incidents to
regulators is mandatory, with different levels of requirements and applications. For instance, all the
European Union's regulated entities must report the cyber incident to the competent authorities.
The scope and perimeter of reporting depend on the type of authority (such as national security)
and its mandate (such as banking supervision), the sectors involved, and the geographical range
(such as the national level). While some supervisors concentrate on incidents that have already occurred,
others require continuous monitoring and tracking of potential cyber threats, because many
institutions might delay reporting incidents to protect their reputation.
The reporting frameworks differ, ranging from formal to informal communications, such as verbal notifications. Reporting requirements also differ in the following respects:
Taxonomy of reporting.
Timeframe of reporting.
Templates of reporting.
The factors above reflect the differences between banks in different jurisdictions or under different supervisors.
That is, banks are required to fill in various types of templates with different taxonomies, timeframes, and formats.
In this form of information sharing, the direction of the flow is from the banks to the
regulators. However, this can be reversed when the regulators want to warn entities against
incoming threats.
Regulators' information sharing is least observed across jurisdictions (except some ad hoc
communication channels). However, information sharing among regulators is highly encouraged because it can
facilitate timely guidance to protect banks from emerging fraud schemes.
Information flows from the regulators to the banks through appropriate channels, depending on the
regulator's information from banks and other sources. Some jurisdictions, such as China and T urkey,
have developed defined standards and practices that govern information sharing between regulators
and banks. Information flows from banks to the regulators in these jurisdictions. The regulators
then analyze the risks to the financial industry, after which they share the information with the banks as
required based on the risk analysis. However, when the information contains customer-specific data, additional confidentiality safeguards apply.
The regulators with an established regulator-bank mechanism publicly share the information through
informal channels such as sharing platforms and meetings. However, when the regulator has non-
public information, the information is shared only with appropriate participants through informal
means. The confidentiality and anonymity of the affected organizations are maintained. Hence, the trust required for effective sharing is preserved.
Some jurisdictions (such as China) have made it mandatory for regulators to share information with
banks. However, others such as Singapore support voluntary information sharing by regulators.
Information sharing with security agencies involves sharing information between banks or regulators
with security agencies in a particular jurisdiction. Information sharing with the security agencies is
crucial in creating awareness of cyber threats in a timely way and improving the defense measures
against attackers.
In jurisdictions with established security agencies, the said agencies serve as the cyber threat
notification focal points. Therefore, such jurisdictions have established standards and practices for
crucial entities and regulators to share cyber-security information with national security agencies.
Some jurisdictions (such as the UK) support voluntary reporting, while others (such as Canada and
France) require mandatory information sharing.
There is no full assurance that an entity's cyber resilience will serve its purpose when the entity depends on third parties. The
regulators experience this drawback concerning financial institutions, and financial institutions' reliance on third parties makes it a
challenge for both jurisdictions and regulated entities to have a clear view of the established controls.
Third parties are considered as follows to establish a clear understanding of the practices associated with
cyber-resilience:
1. Outsourced services and other outsourcing arrangements.
2. Standardized and non-standardized services and products (not considering outsourcing), such as hardware, software, and hosting services.
The link between cyber resilience and third parties is discussed in the following lines:
There exist regulations in different jurisdictions that mandate institutions to come up with
management- and board-approved outsourcing (or organizational) frameworks that outline the
following:
Applicable roles and responsibilities.
Regulators may also require the institutions to enforce a contractual framework, where they should
define the generic rights, obligations, roles, and responsibilities of the institution and the service
provider.
1. Intrusive on-site inspections concerning cyber risk arising from outsourcing. During inspections,
the outsourcing framework, applicable processes, and the specific risk assessments and controls are reviewed.
A portion of international standards recognizes that institutions may depend materially on third-
party interconnections other than outsourcing third parties. For instance, the ISO 27031 standard
states the requirements for hardware, software, telecoms, applications, and third-party hosting services.
Some jurisdictions require financial institutions to sign a prior contract with their clients when they
deliver financial services through the internet. Among others, internet financial services may include online banking and payment services.
Most jurisdictions require prior notification or approval of cloud outsourcing activities through
questionnaires or templates. These documents might not be identical across jurisdictions, but they
provide a basis for internal risk analysis.
The regulations and practices can be made future-proof by focusing on the products and services and on
new expectations for secure development and procurement. Notably, specific requirements state
that systems be designed based on security principles, bearing in mind that devices, applications, and networks can all be points of vulnerability.
The supervision of third parties varies across jurisdictions, but supervisors use conventional tools. Compliance is
checked using on-site reviews and inspections based on formal requirements, authority, or
cooperation from service providers. In some cases, the supervisors can work directly with the
cloud providers – both formally and informally – to incorporate the right to audit in the contracts for
the financial industry or participate in the regulatory conferences organized by large cloud service
providers.
A supervisory college model can also be established to supervise and share information concerning
huge and globally active service providers such as cloud providers. Such models assist in addressing
the issues that might arise due to mandate limitations and regulatory fragmentation.
For financial institutions to protect the availability and continuity of crucial business activities during
cyber-attacks, regulators require financial institutions to analyze such occurrences.
Essentially, this helps them design and implement appropriate plans, procedures, and technical
solutions. In addition, situational analysis helps initiate adequate mitigation procedures. Moreover, for
a business that depends on third-party interconnections, the regulations require that financial
institutions align the business continuity plans of crucial suppliers with their own needs and policies, based on their risk assessments.
It is a widespread practice that the regulator requires entities to define recovery and
resumption objectives. The targeted activities and services usually include cloud outsourcing and settlement and payment services.
Expectations for plans and procedures address the tasks and responsibilities of incident management,
response, and recovery in case of threats. There is a need for information and communication flow
between internal and external stakeholders. Such communication should address the needed
resources, including planned redundancy, to promote the quick transfer of outsourced activities to a
different provider if it is likely that the service provider's continuity or quality will be impacted.
Many regulators and global standards require that financial institutions frequently test protective
measures to determine if they are effective and efficient and make appropriate adjustments. Highly
established regulators expect tests for crucial activities to be based on realistic and probable threats.
Such tests should be conducted annually. Besides, service providers and essential counterparties
should be included via collaborative and structured resilience testing. Audits and monitoring activities should complement such testing.
The similarities in supervisory expectations and practices in terms of business continuity and
availability are commonly seen in entities' standalone business continuity. These similarities
could give an environment in which to extensively test continuity and resilience in a collaborative and structured manner.
The supervisory requirements on the internal/external audit of third parties are categorized into
two:
1. A necessity for regulated entities to ensure the right to inspect and audit their service
providers. Some jurisdictions require that this right be directed to the essential or critical service providers.
2. In some jurisdictions, the audit opinion on the outsourcing structure may be developed based
on the external auditor's report for the service provider. Some providers require that these reports be shared under specific conditions.
That said, the current regulations mostly address conventional outsourcing and, to some extent, cloud computing
providers. The scope of requirements for rights to inspect and audit is mainly focused on the
banking sector. Shared and independent audit reporting on crucial interconnections with third parties remains less developed.
Regarding the security expectations for outsourcing and cloud computing providers, entities must
monitor if their providers are compliant. However, most regulations do not give a method to test or
verify the extent of compliance by the providers. One viable method might be bank-led or pooled industry audits.
T he confidentiality and the integrity of the information are usually stated under general data
protection requirements. To achieve this end, it is a requirement that contractual terms incorporate
confidentiality agreement and security requirements for protecting the information of a bank and its
clients. Additionally, banks are required to maintain cyber-resilience as per the CPMI-IOSCO
guidance. A financial market infrastructure must design and test its systems and processes to
resume critical operations within two hours of an attack and complete settlement by the end of
the day.
An increasing proportion of jurisdictions require that the protection of information transferred to the
cloud be governed by contractual clauses. Further, various cloud-specific requirements have been issued.
In some other jurisdictions, regulations require that outsourcing structures comply with the legal
and regulatory provisions on protecting personal data, confidentiality, and intellectual property.
According to the Basel Committee's Sound Practices (2018 publication), banks may need
specialist competencies to determine whether their risk functions can maintain sufficient authority over outsourced activities.
In the context of outsourcing and the management process, the expectation is that the appropriate
personnel should have the required expertise, competencies, and qualifications to monitor the
outsourced services/functions effectively and should be able to manage the associated risks beyond
compliance.
Regulators require institutions to recruit sufficient and qualified personnel to ensure
continuity in the management and monitoring of outsourced services or functions even after a
significant person exits the entity or is otherwise absent. If an entity lacks sufficient internal
resources in know-how or number, the general requirements are that external technical resources
(such as consultants and specialists) should be hired to complement or supplement the in-house
personnel.
Similar to regulatory expectations, the supervisory practices also have commonalities in that the
human resources and qualifications for managing third-party connections and relationships are
assessed in on-site inspections. In jurisdictions where the financial supervisors can directly assess
third parties, they analyze the staff's sufficiency and qualifications and require third parties to remediate identified shortfalls.
Lastly, certifications such as Certified Information Systems Security Professional (CISSP), or compliance
with the ISO 9001 Quality Management System, provide extra assurance of staff qualifications to manage
third-party connections.
Many jurisdictions require that the supervisory authority be informed of material
outsourcing contracts made by regulated organizations. Further, such jurisdictions impose
conditions such as a minimum level of visibility into the functions regulated institutions outsource.
Apart from the notifications and the authorization, the regulated institutions are usually required to
preserve an inventory of outsourced functions (for example, IT assets such as computer hardware
and software) and periodic reports from service providers, majorly concerning the measurement of
service level agreements and the relevant performance of controls. In some jurisdictions, sub-
outsourcing is required to be visible for the regulated institutions to manage the associated risk.
Requirements concerning visibility into third-party connections vary across jurisdictions. For instance, US authorities require the identification of suppliers.
Practice Question
improve their cyber resilience, they have recently implemented several measures,
including employee training, network segmentation, and regular security audits. T hey
also have a well-defined incident response plan and strong collaboration between the IT
and security teams. Despite these measures, an advanced persistent threat (APT) group infiltrated their systems and exfiltrated sensitive customer data. The
attack remained undetected for several months, resulting in reputational damage and
regulatory scrutiny.
Which of the following actions would be the most effective for Bank X to improve its cyber resilience?
The correct answer is B.
Bank X had already implemented employee training, network segmentation, and regular
security audits, but still experienced a significant cyber attack. This indicates that these
measures alone were not sufficient to detect and prevent the advanced persistent threat.
Implementing continuous monitoring and analysis of network traffic (Option B) would allow Bank X to identify and respond to potential threats more
quickly and effectively.
Option A isn't the most effective. Increasing the frequency of employee training
sessions may raise awareness of cyber threats among staff members, but it is unlikely to
stop a sophisticated APT on its own.
Option C isn't the most effective. Focusing on strengthening network segmentation
is important to limit the impact of a successful attack but may not prevent such
incidents. Option D isn't the most effective either. Increasing collaboration with law
enforcement agencies may help prevent future attacks, but it does not address the need
for improved threat detection and response capabilities within the organization.
Reading 107: Case Study: Cyberthreats and Information Security Risks
After completing this reading, you should be able to:
Provide examples of cyber threats and information security risks and describe frameworks and best practices for managing them.
Cyber, technological, data protection, and information security risks are routinely ranked among the top operational risks facing financial institutions.
The term "information security" goes beyond just cyber dangers. Information may be misplaced,
stolen, or accidentally made public, and it may also be lost through the theft or loss of paper records and other
non-digital data. These dangers have many root causes and distinct mitigation strategies.
Data theft or corruption (including willful data corruption) versus data loss or involuntary disclosure is summarized in the table below.
Data Incidents
Third parties and external causes
  Theft or Corruption: Physical theft; digital hacking, cyberattacks, and phishing.
  Loss or Involuntary Disclosure: System failures and third-party loss; database and backup loss.
Internal causes
  Theft or Corruption: Theft or loss of information, both digital and physical.
  Loss or Involuntary Disclosure: Loss of company devices by employees; errors when sending documents; loss of printed documents; accidental disclosure of information by employees to outsiders.
Although the financial sector is particularly vulnerable to cyber risk due to the high value of the data it holds, no industry is immune.
Example 1: The Paradise Papers
One of the biggest data hacks in history was the Paradise Papers. Private information was taken in
November 2017 from the Bermuda-based offshore legal firm Appleby and supplied to a German
publication, which then shared the information with the International Consortium of Investigative Journalists.
The revelation of prominent individuals' and nations' offshore interests exposed them to reputational harm and public outcry.
Example 2: Equifax
One of the biggest credit-scoring companies in the world, Equifax, was the target of a cyberattack in
2017 that made the data of 147 million people public. An outside hack on Equifax servers led to the
breach. Following the release of this news, Equifax's market capitalization decreased by nearly $5
billion.
Information security still applies to data leaks caused by dissatisfied or dishonest employees. Such
occurrences are more comparable to internal fraud situations than external cyberattacks.
Example 1: Data Leak at an Insurance Company
A UK insurance provider experienced a data breach that affected 500,000 clients. An employee
fraudulently copied names, dates of birth, and some contact information and offered them for sale on
the dark web. Even though the offending employee was fired, the company still faced repercussions.
In November 2021, a developer's private keys were stolen in a phishing attack against bZx, a US-
based blockchain platform for lending and trading, resulting in a $55 million loss.
A number of market standards and advice materials are released and updated on a regular basis for
two reasons. To begin with, these market standards and advice materials assist businesses in
developing cybersecurity protection. Besides, they offer high-quality benchmarks useful for
mitigating and measuring cyber fraud and technology risks. Businesses that seek to adhere to recognized standards can also demonstrate good practice to clients and regulators.
The NIST Cybersecurity Framework
The framework, which is optional, provides organizations with a summary of the best practices to manage cybersecurity risk.
T he framework offers guidelines on how to analyze threats and vulnerabilities, weigh their
consequences, and reduce the risks with specific solutions in order to help enterprises understand
their cybersecurity risks. In addition to giving direction on how to respond to and recover from
cybersecurity occurrences, the framework also encourages the use of root-cause analysis and use
of lessons learned.
The framework's main component is a set of cybersecurity tasks that adhere to the five fundamental
processes of cyber defense: identify, protect, detect, respond, and recover. The following sections summarize each function.
Identify
Make a list of every piece of hardware, software, and information you use, such as computers, smartphones, and other devices, as well as the data they hold.
Create and distribute a company cybersecurity policy that details roles and duties for personnel and
anyone else with access to sensitive information, as well as precautions to take to repulse attacks
and minimize damage in the event that one does take place.
Protect
Control who accesses your network and uses your computers, other devices, and security software
to protect your data. You should also frequently back up your data, update your security software,
and have formal procedures for properly getting rid of electronic waste and devices.
Detect
Keep an eye on software, hardware (such as USB drives), and illegal employee access to your
systems. Look for any unusual behavior by your personnel or on your network.
Respond
Make and test a strategy for notifying clients, staff members, and anyone else whose data may be in
danger, maintaining the smooth operation of the business, notifying law enforcement and other
authorities of the attack, analyzing and preventing an attack, and preparing for unplanned
events that may put data at risk.
Recover
Repair and restore damaged equipment and network components after an attack, and inform staff and customers of the response and recovery activities.
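As a rough illustration of how the five functions can be used to organize a control inventory, the sketch below maps hypothetical controls to each function and checks for functions with no controls assigned. The control names and the simple gap check are assumptions for illustration, not part of the NIST framework itself.

```python
# Hypothetical mapping of controls to the five NIST CSF functions, with a simple
# check for functions that have no controls assigned (a coverage gap).
controls_by_function = {
    "Identify": ["asset inventory", "cybersecurity policy"],
    "Protect":  ["access control", "encrypted backups", "patching"],
    "Detect":   ["network monitoring", "anomalous-login alerts"],
    "Respond":  ["incident response plan", "notification procedures"],
    "Recover":  [],  # no controls assigned yet
}

gaps = [fn for fn, controls in controls_by_function.items() if not controls]
print("Functions without assigned controls:", gaps or "none")
```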
Prioritized CIS measures are used to reduce the most common cyberattacks against systems and networks.
The CIS recommendations are useful for businesses setting up or reviewing their cybersecurity
procedures, and they provide an additional framework that can coexist with other industry-specific compliance
requirements.
T he International Standard ISO/IEC 27001 gives businesses general guidance on how to set up risk
management processes for information security, as well as for its governance, policies, support, and
communication. It offers guidance on operational planning and control, risk assessment for
information security, and risk management. According to the standard, management reviews and internal audits should be conducted regularly.
T he framework stipulates that an enterprise implementing ISO 27001 must have an information
security management system that systematically controls its information security risks by locating
threats and weaknesses in order to be eligible for certification. Organizations must also develop and
implement information security policies, use a continuous risk management procedure, and always seek to improve the system.
Technical safety precautions combined with suitable human actions result in effective risk
minimization. Confidentiality, Integrity, and Availability (CIA) are the three aspects of information
protection. Two main categories can be used to classify information controls: behavioral controls and technical controls.
Behavioral Controls
They relate to how people behave when managing and safeguarding information, and they are
applicable to all kinds of information security concerns. They include awareness-raising initiatives, training, and clear policies on information handling.
Technical Controls
T his is related to detection and prevention. Preventative controls are aimed at external risks and
pertain to system architecture, access, firewalls, encryption, passwords, and patching. Data
breaches can be detected early using detective measures, whether they are internal or external.
Since information security measures are costly, the advantages of risk reduction must be weighed against their cost.
KRIs for Information Security
Risk monitoring examines how well controls are working as well as any unanticipated departures
from the usual, such as changes in exposure, traffic, or employee conduct. The IT department is
the first line of defense, where most day-to-day monitoring takes place. The second line of defense is the
information security division, which may be separate from IT. A set of behavioral and
technical controls should be created, maintained, and monitored by the information security function.
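The sketch below illustrates the kind of simple KRI monitoring this describes; the specific indicators, thresholds, and values are hypothetical and would in practice be tailored to the firm's risk appetite.

```python
# Hypothetical information-security KRIs compared against illustrative thresholds.
kris = {
    "systems_unpatched_pct":      {"value": 7.5,  "threshold": 5.0},
    "failed_logins_per_1k_users": {"value": 42.0, "threshold": 60.0},
    "phishing_click_rate_pct":    {"value": 3.1,  "threshold": 2.0},
}

for name, kri in kris.items():
    status = "BREACH" if kri["value"] > kri["threshold"] else "ok"
    print(f"{name}: {kri['value']} (threshold {kri['threshold']}) -> {status}")
```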
In the United States, Equifax is one of the biggest credit reporting companies. It has access to credit
data for millions of people and companies. Hackers broke into Equifax's networks in 2017 by taking
advantage of a flaw in one of the systems. T he attackers took credit card accounts, names, addresses,
dates of birth, and other personally identifiable information from Equifax's data bank.
T he company's cybersecurity procedures, guidelines, and resources were old and insufficiently
managed. At the time of the attack, an audit had detected weaknesses in the patch management
process. Equifax's website had already been breached a year before the attack, exposing 430,000
names, addresses, social security numbers, and other pieces of sensitive data. T hree days prior to
the incident, an alert was sent to Equifax and communicated to 400 workers about the vulnerability
that was the basis of the hack. However, not all relevant employees were in the email list. T he
National Institute of Standards and Technology (NIST ), using the Common Vulnerability Scoring
System, gave the discovered flaws in the patch management process the highest criticality score.
Equifax paid up to $700 million in fines and restitution, of which $300 million was earmarked for compensating affected consumers.
Inconsistent staff communication regarding the fix of security flaws.
Events with such high operational risk impact do not have a single root cause. They appear in weak
operating environments that are marked by numerous governance and operational flaws.
Practice Question
Which of the following five guidelines of the National Institute of Standards and Technology (NIST) framework
A. Recover.
B. Protect.
C. Identify.
D. Detect.
The correct answer is C.
The identify guideline makes a list of all equipment, software, and data a company uses. In
addition, it creates and shares a company cybersecurity policy that covers employee
roles and responsibilities.
The recover guideline involves the repair and restoration of the equipment and parts of the network that were
compromised. It also keeps employees and customers informed of the firm's response and recovery activities.
The protect guideline controls who logs into networks. Besides, it ensures encryption of sensitive data, regular security
updates, and the formulation of formal policies for safely disposing of electronic files.
Reading 108: Sound Management of Risks related to Money
Laundering and Financing of Terrorism
After completing this reading, the candidate should be able to:
Explain the best practices the Basel Committee recommends for the assessment, management, and mitigation of money laundering and financing of terrorism
(ML/FT) risks.
Describe recommended practices for the identification, verification, and ongoing monitoring of
customers at a bank.
Explain practices for managing ML/FT risks in a groupwide and cross-border context.
In recent years, banks have taken center stage in the management of increasingly destructive
criminal activities, particularly money laundering and the financing of terrorism. Multiple banks have been
fined for their failure to identify or report suspicious transactions. The Basel Committee has issued guidance aimed at:
Preventing and deterring the use of banks to launder illicit proceeds or to raise and
transfer funds in support of terrorism. This has helped protect the reputation of banks and national banking systems.
T he Core Principles for Effective Banking Supervision (2012) requires banks to:
"have adequate policies and processes, including strict customer due diligence (CDD) rules to
promote high ethical and professional standards in the banking sector and prevent the bank from
Assessment and Understanding of Risks
It is the responsibility of every bank to identify and evaluate the money laundering (ML) and financing of
terrorism (FT) risks it faces and subsequently develop effective defense policies. The assessment
should sweep across all levels and business lines. At the core of this endeavor lies customer due
diligence (CDD) – a comprehensive guide on how a bank should interact with and treat its customers
to ensure that all transactions meet the required level of integrity. A bank should design policies for
customer acceptance, due diligence, and continuous monitoring of all transactions processed through its systems.
T he board of directors plays an integral role in the identification and management of various risks,
including ML and FT. As such, the board should have a clear understanding of these risks so as to be
in a position to make informed decisions. In this regard, the board should regularly be furnished with reports on the bank's ML/FT risk exposure.
It’s also the board's responsibility to delegate roles and responsibilities in the most efficient and
practical manner. In addition, the board should appoint a well-qualified chief AML/CFT (anti-money
laundering and countering the financing of terrorism) officer to oversee the entire AML/CFT
function.
To properly manage the AML/CFT function, there should be three lines of defense:
Business units should be charged with identifying, assessing, and controlling the ML/FT risks
inherent in their business. All the relevant personnel in direct contact with clients should be
furnished with clear policies and procedures that outline their obligations and instructions in various
situations.
Also, the staff recruitment process is part of the first line of defense. All incoming staff should be
screened and vetted accordingly.
The chief AML/CFT officer should be in charge of the continuous monitoring of the fulfillment of all AML/CFT
obligations. They should be the face of all AML/CFT operations and the individual who interacts with all relevant authorities.
The office of internal audit should regularly perform an independent assessment of the AML/CFT
policies and procedures and seek to find out whether such policies are being followed to the letter.
Every bank should have a monitoring system that tracks the activity of each and every account
opened at the bank. The system should be designed such that it can detect changes in customer behavior and unusual or suspicious transaction patterns.
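A hedged sketch of the kind of rule-based monitoring logic described above: flag transactions that are unusually large relative to the customer's historical profile or that exceed a fixed threshold. The thresholds, rules, and sample data are hypothetical illustrations, not regulatory requirements.

```python
# Illustrative account monitoring rules: fixed-threshold flag plus a deviation check
# against the customer's own transaction history.
from statistics import mean, stdev

REPORTING_THRESHOLD = 10_000   # hypothetical fixed threshold
Z_SCORE_LIMIT = 4.0            # hypothetical deviation limit vs. customer history

def flag_transactions(history, new_transactions):
    """Return transactions that breach the fixed threshold or deviate from the profile."""
    mu, sigma = mean(history), stdev(history)
    flags = []
    for amount in new_transactions:
        if amount >= REPORTING_THRESHOLD:
            flags.append((amount, "above fixed threshold"))
        elif sigma > 0 and (amount - mu) / sigma > Z_SCORE_LIMIT:
            flags.append((amount, "unusual vs. customer profile"))
    return flags

history = [120, 80, 250, 90, 300, 150, 200, 110]   # hypothetical past activity
print(flag_transactions(history, [180, 2_500, 15_000]))
```

Real monitoring systems combine many such rules with behavioral analytics, but the basic pattern of comparing activity to a customer profile is the same.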
Customer Acceptance Policy refers to the general guidelines banks follow in allowing customers to open and operate accounts.
Every bank should establish Know Your Customer (KYC) policies and procedures to help
establish customers' profiles and identify those that are likely to pose a higher risk (a simple risk-scoring sketch follows this list).
Some of the facts that should be established at the point of contact with a customer include the customer's identity, occupation, and the source and intended use of funds.
No accounts should be opened under anonymous or pseudo names or when the customer's
identity matches that of any person with known links to criminal activities.
Customer acceptance should not be so restrictive that it denies the general public access
to banking products.
Account monitoring should be commensurate with the level of risk. For example, a bank
should adopt enhanced due diligence when dealing with politically exposed persons or other high-risk customers.
Due diligence should apply to customers as well as appointed representatives, proxies, and
beneficial owners.
The best documents for verification of customer identity are those most difficult to
counterfeit or obtain fraudulently, and only such documents should be used. A bank should keep copies of all the documents used in the verification process.
From the onset, it is important to establish a customer’s profile and behavior from the
moment they open the account. T hat way, any suspicious activity can be easier to detect.
Once a customer or suspicious activity has been flagged, a bank should take additional steps
to mitigate the risk of the account being used for criminal activity. That may include freezing the
account, a review of the customer's identity and overall activity profile, and cooperation with the relevant authorities.
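The sketch referenced above shows one way a customer risk profile could feed a risk-based level of due diligence. The risk factors, weights, and cut-offs are hypothetical assumptions for illustration only, not a prescribed scoring model.

```python
# Hypothetical KYC risk-scoring sketch: weighted risk factors determine the
# level of due diligence applied to a customer.
RISK_WEIGHTS = {
    "politically_exposed_person": 40,
    "high_risk_jurisdiction": 25,
    "cash_intensive_business": 20,
    "non_face_to_face_onboarding": 10,
}

def risk_score(customer):
    """Sum the weights of the risk factors present for this customer."""
    return sum(weight for factor, weight in RISK_WEIGHTS.items() if customer.get(factor))

def due_diligence_level(score):
    if score >= 40:
        return "enhanced due diligence"
    if score >= 20:
        return "standard due diligence with closer monitoring"
    return "standard due diligence"

customer = {"politically_exposed_person": True, "non_face_to_face_onboarding": True}
score = risk_score(customer)
print(score, due_diligence_level(score))   # 50 -> enhanced due diligence
```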
AML/CFT in a Group-wide Context
Banks should ensure that group-wide AML/CFT requirements are met. Group-wide policies should be observed at the branch and subsidiary levels while still paying
attention to local requirements. In case of conflict between the group's requirements and local/host requirements, the
latter takes precedence. It's the group's responsibility to ensure that local policies do not undermine the group's standards.
T here should be constant sharing of information among subsidiaries and the head office.
Where the minimum regulatory or legal requirements of the home and host countries
differ, offices in host jurisdictions should apply the higher standard of the two.
A bank should keep group-wide customer profiles and transaction histories, and all customer
records should be accessible across the group.
A bank's compliance department and the chief AML/CFT officer should ensure that the
group's policies and procedures are applied across the board. They should also ensure that any deficiencies are promptly addressed.
When liaising with other banks or groups on business matters, the group should ensure
that it adheres to its own standards, particularly when the standards of the business partner are lower.
The Committee expects supervisors to apply the Core Principles for Effective Banking
Supervision to banks' ML/FT risk management in a manner consistent with and supportive of the objectives of AML/CFT supervision.
To do that successfully, they should have a deep understanding of all the risks in their
jurisdiction and their potential impact.
For higher-risk business lines, supervisors should apply specialized expertise and additional
procedures to ensure effective review. They should come up with a supervisory schedule proportionate to the risks identified.
Supervisors have the mandate to ensure that banks in their charge maintain sound ML/FT
risk management to protect the integrity of both the banks and the financial system as a
whole.
When monitoring groups, the supervisor should ensure compliance across all branches and
subsidiaries. They should also ensure that all subsidiaries comply with both group and
jurisdictional laws and that where there’s a conflict between the two, stricter law applies.
In certain situations, banks may be allowed to rely on third parties with regard to customer due
diligence (CDD). In these circumstances, the third party will most likely have an already established
business relationship with the customer. A bank can rely on a third party for the following aspects:
However, it is important to note that not all third parties are eligible for such reliance. In some
jurisdictions, banks can only rely on CDD from fellow banks and financial institutions. In certain
scenarios, the magnitude and size of transactions built upon third-party CDD may be limited.
The third party should be subject to the same level of supervision and regulation as the
bank.
There should be a written document acknowledging the bank's reliance on the third party's CDD process.
A bank should document its reliance and establish a review process for such a relationship.
A bank could request the third party to demonstrate that its AML/CFT program is as strict as the bank's own.
A bank must give due consideration to adverse public information questioning the third party's integrity or reliability.
A bank should conduct periodic checks to ensure that the third party's CDD process is as rigorous as expected.
The bank should reserve the right to terminate a CDD reliance arrangement with a third party if the above conditions are no longer met.
Practice Question
with the increasing volume of cases to review. Despite having policies and procedures in
place for identifying, investigating, and reporting suspicious transactions, Bank Z has
struggled to keep up with the workload and maintain compliance with anti-money
process.
Which of the following measures should Bank Z prioritize to enhance its reporting of
suspicious transactions?
C. Amend the bank's policies and procedures to lower the reporting threshold for suspicious transactions.
The correct answer is A.
Ongoing monitoring and review of accounts and transactions enable banks to identify
suspicious activity, eliminate false positives, and report genuine suspicious transactions promptly. By prioritizing this, Bank Z can
improve the efficiency and effectiveness of its suspicious transaction reporting process,
ensuring compliance with AML/CFT regulations.
Option B, reallocating resources from other departments, may provide temporary relief
but does not address the root cause of the problem, which is the need for an efficient and
effective review process. Option C, lowering the reporting threshold, would
increase the workload and exacerbate the issue by generating more potential cases to
review, and it could compromise the quality of reports and lead to a high volume of false positives being reported.
Reading 109: Case Study: Financial Crime and Fraud
After completing this reading, you should be able to:
Describe elements of a control framework to manage financial fraud and money laundering
risk.
Summarize the regulatory findings and describe the lessons learned from the USAA case
study.
This chapter discusses fraud and financial crime risk management in different forms: fraud, money laundering, and terrorism financing.
Internal and external fraud are common types of operational risk banks managed long before the
introduction of ORM. Non-financial risk management comprises anti-money laundering (AML) and
counter-terrorism financing (CTF). These two are responsible for effective controls against the risk of financial crime.
According to the Financial Conduct Authority's (FCA) Handbook of the UK, financial crime refers to
" any kind of criminal conduct relating to money or to financial services or markets, including any
offense involving: fraud or dishonesty; or misconduct in, or misuse of information relating to, a financial market; or handling the proceeds of crime."
Financial crime comprises internal and external fraud, money laundering, and terrorism financing.
Internal Fraud: According to BCBS, internal fraud refers to "losses due to acts of a type intended
to defraud, misappropriate property or circumvent regulations, the law or company policy, excluding diversity or discrimination events, which involves at least one internal party."
Internal fraud can be of two types: "unauthorized activities" and "theft and fraud." "Unauthorized
activities" may lead to loss of money in an organization. Indeed, it includes any intentional violation of
the law or internal policies perpetrated by a firm's employees. Examples of unauthorized activities
under the Basel event type classification include intentional non-reporting of transactions,
mismarking trading positions, or the execution of unauthorized transaction types. Passwords, for instance, may be misused to circumvent access controls and carry out such activities.
On the other hand, "theft and fraud" involves the misappropriation of assets, such as extortion, embezzlement, forgery, and bribery.
External Fraud: According to BCBS, external fraud refers to losses due to acts of a type intended to defraud, misappropriate property, or circumvent the law, committed by a third party.
The subcategories of external fraud are "theft and fraud" and "systems security," which involves
hacking damage and theft of information. "Systems security" is becoming particularly prominent as a
result of the increasing digitalization of financial services. Over roughly the past decade, cyber and
information risk management has also evolved into a specialized branch of operational risk management.
Recent studies show that the COVID-19 pandemic more than doubled banks'
exposure to internal and external fraud. The shift to working from home, in particular, led to an increase in fraud exposure.
Different countries may have different laws against money laundering and terrorism financing. In this
section, however, we use the definition of the European Union. On May 20, 2015, the European
Parliament and Council issued a directive to prevent the use of the financial system for money
laundering or terrorist financing. According to Article 1 of this directive, money laundering involves the following acts, when committed intentionally:
i. The conversion or transfer of property, knowing that it is derived from criminal activity, for the purpose of concealing or disguising the illicit origin of the property, or to assist someone involved in such an activity to evade the legal consequences of their actions.
ii. The concealment or disguise of the true nature, source, location, disposition, movement, or ownership rights with respect to property, knowing that it is derived from criminal activity.
iii. Acquiring, possessing, or using property, knowing, at the time of receipt, that the property was derived from criminal activity.
iv. Associating with, participating in, committing, attempting to commit, as well as aiding,
assisting, facilitating, and counseling the commission of any of the actions listed in points (i), (ii), and (iii).
The IMF defines terrorism financing as the provision or collection of funds to be used, partly or in full, to carry out terrorist acts.
This section reviews the prevention and mitigation of internal fraud and anti-money laundering
practices.
Historically, the internal audit department was responsible for managing internal and external fraud
for banks. Some banks used to have "inspections," which were orchestrated by a subdivision of the internal audit function.
In their risk appetite framework, most firms state that they have zero tolerance for internal fraud.
The following framework of controls and measures helps mitigate internal fraud risks:
1. Selection: Involves screening of employees and associated third parties. The organization's
culture is also considered in this step. When firms employ people who share the same values
and ethical standards, it is easier for the firms to manage such employees. Selection therefore acts as an important first line of defense against internal fraud.
2. Prevention: The key controls for fraud prevention are found in this step. The rights,
authority, and access of each function must be clearly defined in order to manage fraud risk
effectively.
3. Detection: Time to detection is critical in limiting the effects of an operational risk event.
Detective controls are essential in internal fraud management and act as a deterrent as well:
fraud is less likely to happen if perpetrators expect to be caught and the consequences are severe. Effective supervision and
monitoring are key detective controls.
4. Deterrents: These are sanctions and actions announced following any act of fraud.
Deterrents also disincentivize employees from committing fraud, thus promoting the risk-reward
balance.
External fraud management shares many of the aspects of internal fraud management. The main point of
difference is that the perpetrator is outside the organization.
Bank robbery, check kiting, fraudulent wire transfers, credit card fraud, and identity theft are
examples of external fraud. Misrepresentations of income, assets, and collateral values in loan
applications are further examples. Many firms subdivide external fraud into first-party and third-party fraud. This helps distinguish fraud that
customers or business partners commit for their own benefit from fraud committed
by an external actor, which may affect both the bank and the customer.
It is, therefore, necessary for special teams to manage the different types and actors of external
fraud. For example, physical security teams protect the buildings and assets of the financial
institution against robbery. Banks also work with local authorities to handle such issues whenever
they occur.
It is common for criminals to disguise the proceeds of their criminal activities as funds from legitimate
sources in two or three phases. The following are the three phases of money laundering:
1. Placement: Involves all methods intended to disguise the origins of the funds: cash transfers
to businesses, false invoicing, use of trusts and offshore companies, "smurfing" (keeping a bank
account or credit card under the AML reporting threshold by making a series of small
transactions rather than a single large transaction; a simple detection sketch follows this list), using foreign bank accounts, etc.
2. Layering: Involves different placement and extraction strategies to make tracking the funds more difficult.
3. Integration or extraction: Involves getting the money out to use while evading taxes and
law enforcement, through activities such as fake payments to employees, fake loans, or
dividends to accomplices.
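To make the "smurfing" pattern concrete, the following is a minimal, hypothetical sketch (in Python) of how a monitoring system might flag accounts with several deposits just under a reporting threshold within a short window. The threshold, window, and minimum count are illustrative assumptions, not regulatory values.

```python
# Illustrative sketch: flag possible structuring ("smurfing") -- several deposits
# just under a reporting threshold within a short window. Parameters are assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

REPORTING_THRESHOLD = 10_000   # assumed reporting threshold
NEAR_FRACTION = 0.9            # "just under" = within 90-100% of the threshold
WINDOW = timedelta(days=7)
MIN_COUNT = 3                  # flag after this many near-threshold deposits

def flag_structuring(transactions):
    """transactions: list of (account_id, datetime, amount) tuples."""
    near_threshold = defaultdict(list)
    for account, when, amount in transactions:
        if NEAR_FRACTION * REPORTING_THRESHOLD <= amount < REPORTING_THRESHOLD:
            near_threshold[account].append(when)

    flagged = set()
    for account, times in near_threshold.items():
        times.sort()
        for i in range(len(times) - MIN_COUNT + 1):
            # MIN_COUNT near-threshold deposits inside the rolling window
            if times[i + MIN_COUNT - 1] - times[i] <= WINDOW:
                flagged.add(account)
                break
    return flagged

# Example usage with made-up transactions
txns = [
    ("A1", datetime(2023, 3, 1), 9_500),
    ("A1", datetime(2023, 3, 2), 9_800),
    ("A1", datetime(2023, 3, 4), 9_200),
    ("B2", datetime(2023, 3, 1), 4_000),
]
print(flag_structuring(txns))  # {'A1'}
```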
Most importantly, customers should be selected appropriately and their documents properly
scrutinized and verified (KYC, or know your customer). Banks should also verify the origins of
funds before embarking on any business transactions to ensure that these funds are not linked to any
fraudulent activities.
Regulators recommend a risk-based approach to AML risk management. That is, the higher the risks,
the tighter the controls, and vice versa. Customers are categorized as low, medium, or high risk, and the intensity of due diligence and monitoring is scaled accordingly.
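The following is a minimal sketch of what such a risk-based customer rating might look like in code. The risk factors, weights, and cut-offs are illustrative assumptions; real scoring models are firm-specific and subject to regulatory expectations.

```python
# Minimal sketch of a risk-based customer rating. The factors, weights, and
# cut-offs below are illustrative assumptions, not regulatory values.
def customer_risk_tier(is_pep: bool, high_risk_country: bool,
                       cash_intensive_business: bool,
                       expected_monthly_volume: float) -> str:
    if is_pep:
        return "high"  # politically exposed persons usually get enhanced due diligence by default
    score = (2 if high_risk_country else 0) \
          + (2 if cash_intensive_business else 0) \
          + (1 if expected_monthly_volume > 100_000 else 0)
    if score >= 3:
        return "high"    # enhanced due diligence, frequent reviews
    if score >= 1:
        return "medium"  # standard due diligence, closer monitoring
    return "low"         # simplified due diligence

# Example: a cash-intensive business in a high-risk country with large volumes
print(customer_risk_tier(False, True, True, 150_000))  # high
```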
Firms should have robust governance and a designated money laundering reporting officer (MLRO)
responsible for the management of AML. In addition, establishing written policies, training
employees, and thorough reviews can also contribute to effective AML risk management.
In its 2022 report, the FCA examines financial crime controls at challenger banks, which are fully
digital and offer customers the ability to open accounts very quickly. According to the FCA, there is a
risk that the speed of onboarding comes at the expense of robust financial crime controls.
The reviews revealed some evidence of good practice, e.g., the application of technology
to identify and verify customers at speed. However, a number of weaknesses were found, which increased the risk of financial
crime during the customer onboarding process and during the customer's tenure with the
bank. In order to address the weaknesses highlighted, challenger banks should adjust their
continuous monitoring of controls to ensure they are fit for purpose in light of their
rapidly growing customer bases and evolving business models.
Weaknesses were also found in both customer due diligence (CDD) and the consistent
application of EDD (enhanced due diligence) in some banks, for example, in the case of
higher-risk customers. According to the UK regulator, challenger banks should adjust their oversight and control frameworks accordingly.
According to the banking and compliance press, the Financial Crimes Enforcement Network
(FinCEN) and the Office of the Comptroller of the Currency (OCC) fined USAA Federal Savings Bank
(FSB) $140 million for failing to implement and maintain a BSA/AML compliance program.
Deficiencies pointed out include inadequate internal controls; inadequate detection, evaluation, and reporting of
suspicious activity; insufficient staffing, training, and third-party risk management; as well as significantly
understaffed BSA/AML compliance departments.
To fill staffing gaps, the bank relied heavily on third-party contractors. This is a common practice in banking, especially when heavy workloads are coupled with tight
deadlines. However, USAA failed to train or ensure contractors had the necessary qualifications, which exacerbated the existing deficiencies.
It has also been reported that the new transaction monitoring system implemented by USAA FSB was itself inadequate for the bank's needs.
An important lesson from this case is that heavy regulatory fines do not occur by accident: They
result from accumulating failures and procrastinating about implementing the necessary changes to
meet regulatory requirements. Due to the difficulty and discomfort associated with transformations,
most firms delay implementing changes in response to regulatory findings until the last minute. This delay only compounds the problem and increases the eventual cost of remediation.
A weak control environment can attract fines by regulators anywhere in the world, as has happened
in the US, UK, and Asia. In Asia, for example, regulators have charged banks fines totaling $5.1 billion for money laundering and related financial crime failures.
Banks are required to review, verify, and report suspicious activity in response to regulatory findings
and sanctions. An AML risk management framework should incorporate technology and automation
for detection and alerts as well as proper recording of false positives and false negatives. Moreover,
the COVID-19 pandemic changed customer and business behavior, particularly with the rise of
remote transactions, which makes it more difficult for financial institutions to detect anomalies.
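As a rough illustration of "proper recording of false positives and false negatives," the sketch below logs the disposition of each alert and computes simple quality metrics. The structure and field names are assumptions for illustration only.

```python
# Hedged sketch: record AML alert outcomes so false-positive and false-negative
# rates can be tracked over time. Field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AlertLog:
    records: List[Tuple[bool, bool]] = field(default_factory=list)  # (alert_raised, truly_suspicious)

    def record(self, alert_raised: bool, truly_suspicious: bool):
        self.records.append((alert_raised, truly_suspicious))

    def summary(self):
        tp = sum(1 for a, s in self.records if a and s)
        fp = sum(1 for a, s in self.records if a and not s)
        fn = sum(1 for a, s in self.records if not a and s)
        precision = tp / (tp + fp) if (tp + fp) else None
        return {"true_positives": tp, "false_positives": fp,
                "false_negatives": fn, "alert_precision": precision}

log = AlertLog()
log.record(True, True)    # alert raised, confirmed suspicious
log.record(True, False)   # alert raised, closed as false positive
log.record(False, True)   # missed case identified later (false negative)
print(log.summary())      # alert_precision = 0.5 in this toy example
```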
Fraud risk management and AML are constantly changing as new fraud opportunities present themselves in shifting economic and business environments.
Practice Question
A junior analyst is studying the USAA FSB case, in which FinCEN and the OCC fined USAA
FSB $140 million for various deficiencies in their anti-money laundering (AML) and fraud
risk management practices. The analyst notes the following points about the case:
I. USAA FSB was found to have deficiencies in areas including internal controls,
suspicious activity detection and reporting, staffing, training, and third-party risk
management.
II. USAA FSB failed to provide adequate training or ensure that its contractors had
the necessary qualifications, exacerbating the deficiencies in its AML and fraud
risk management.
III. The landscape of fraud risk management and AML is dynamic, with constant
changes driven by the evolution of fraud schemes and the emergence of new
fraud opportunities.
A. I and II.
B. II only.
C. III only.
D. I, II, and III.
The correct answer is D.
Statement I is correct. USAA FSB was found to have deficiencies in various areas,
including internal controls, suspicious activity detection and reporting, staffing, training,
and third-party risk management. The bank also had a significantly
understaffed BSA/AML compliance department.
Statement II is correct. As is common in the
banking industry, especially when faced with heavy workloads and tight deadlines, USAA
FSB failed to provide adequate training or ensure that its contractors possessed the
necessary qualifications, further exacerbating the situation.
Statement III is correct. Fraud risk management and AML are constantly changing
fields, as new fraud opportunities arise in response to shifting economic and business
environments.
Reading 110: Guidance on Managing Outsourcing Risk
After completing this reading, you should be able to:
Explain how risks can arise through outsourcing activities to third-party service providers
Explain how financial institutions should perform due diligence on third-party service
providers.
Describe topics and provisions that should be addressed in a contract with a third-party
service provider.
What is Outsourcing?
Outsourcing is the practice where an institution hires a third party to offer services and/or create
goods that would otherwise be taken care of in-house by the institution's own employees and staff.
Commonly outsourced activities include:
Recruitment of employees.
Security operations.
Why Outsource?
Organizations may choose to outsource in order to:
Free up and channel in-house resources toward the core aspects of the business,
spinning off operations not considered critical to outside organizations.
Handle increased demand for services during peak business periods (scalability).
On the downside, the use of service providers to perform operational functions comes with a range
of risks. Some of these risks are inherent to the outsourced activity itself, but others come up due
to the involvement of a service provider. T he use of service providers may expose financial
institutions to risks that can result in regulatory action, financial loss, litigation, and loss of
reputation. As such, it is imperative that all outsourced services are closely managed and monitored.
Risks Considered before Entering and while Managing Outsourcing
Arrangements
Compliance risks: Arise when a service provider's products or activities fail to comply with applicable laws and regulations.
Reputational risks: Arise when a service provider conducts themselves in a manner that causes
the public to form a negative opinion about a financial institution.
Country risks: Arise from possible economic, social, and political conditions and events in the country where the provider is
located.
Operational risks: A service provider may expose an institution to losses due to inadequate or failed internal processes, people, and systems, or due to external events.
Legal risks: Arise when a service provider exposes a financial institution to legal expenses and
possible lawsuits.
Case Studies
Raphaels Bank
In 2019, the UK's Raphaels Bank was fined £1.89 million for outsourcing failures that rendered
customer accounts inaccessible over an eight-hour period on Dec 24, 2015. Consequently, this made
3,367 customers unable to use their prepaid cards and charge cards.
The incident happened after Raphaels Bank's card processor suffered a technology failure.
The Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA) accused the bank
of "failing to have adequate processes to enable it to understand and assess the business continuity
and disaster recovery arrangements of its outsourced service providers – particularly, how they
would support the continued operation of its card programs during a disruptive event."
A similar incident happened in 2012 at the Royal Bank of Scotland. Unbeknown to the public, the
bank had outsourced support for its system software to an IT vendor. The vendor failed to follow through on a
planned system update in a timely manner. This left millions of clients without access to their money.
The bank itself was helpless because it had not put a business continuity plan in place for its outsourcing arrangements.
Following such incidents, the Federal Reserve Board issued guidance on managing outsourcing risk.
The guidance requires a risk management program for all banks that outsource services.
Any financial institution engaged in outsourcing should have a risk management program that is risk-
focused and provides oversight and controls that are commensurate with the level of risk the
outsourcing arrangement presents. Special attention should be given to activities that may have a
substantial impact on an institution's financial condition and those that pose material compliance risk.
The complexity of risk management depends on the number (and types) of outsourced activities.
An effective service provider risk management program is built around six core elements:
A. Risk assessments: Before outsourcing an activity, it is important to carry out a risk assessment of the activity. While at it, a financial institution
should weigh the implications of performing the activity in-house against having it performed by a third
party, and consider whether outsourcing is consistent with its overall
business strategy. Risks should also be reassessed throughout the life of the outsourcing
relationship.
Risk assessment should be a regular activity consistent with a financial institution's service
provider risk management program, and the institution should scale up (or scale down) its risk mitigation plans, if appropriate.
B. Due diligence and selection of service providers: Before engaging a service provider,
it is important to exercise due diligence and objectively evaluate the provider. The extent of
evaluation varies depending on the scope, complexity, and strategic importance of the outsourced activity.
C. Contract provisions and considerations: Financial institutions should explore the service contract and all legal issues
before signing any agreements.
D. Incentive compensation review: Financial institutions should ensure that there's an effective process to review and
approve any incentive compensation arrangements embedded in service provider contracts.
E. Oversight and monitoring of service providers: Financial institutions should monitor providers' performance against contractual requirements and agreed
performance levels.
F. Business continuity and contingency considerations:
A financial institution should ensure that there are contingency and business
continuity plans for critical outsourced services.
As hinted earlier on, the due diligence process is built around three key elements.
As a first step, the financial institution should seek answers to a set of screening
questions, for example, whether the provider has an adequate risk management and compliance
program. The institution should proceed to engage the provider if and only if it obtains satisfactory answers.
Ideally, the service provider should be experienced and well-qualified to deliver the service.
It is also important to scrutinize the service provider's business model, including its business
strategy and mission, service philosophy, quality initiatives, and organizational policies. The
provider's business model should be resilient and adaptable to a range of potential business
conditions. Other checks include the following:
Review the provider's business background, reputation, and track record.
Verify whether there are ongoing legal matters involving the service provider or
its principals.
It's imperative to scrutinize the financial condition of the service provider and its closely
related affiliates. The assessment may consider:
Sustainability, by looking at the length of time that the service provider has been in
business and its market position.
How the proposed business relationship will impact the service provider's
financial condition.
The provider's reliance on and exposure to subcontractors.
The provider's other current issues that may materially affect future financial
performance and/or existence.
The third element covers operations and internal controls. The institution should confirm that
service providers comply with applicable laws and regulations. Such services should also be
subject to a review of, among other things:
Internal controls.
Facilities management.
Financial institutions should explore the service contract and all legal issues before signing
any agreements.
Contracts should define each party's obligations in a clear and
unambiguous manner.
Scope
The rights and responsibilities of each party should clearly be spelled out. That includes:
Terms governing the use of the institution's property, equipment, and staff.
Contract timeframes.
Cost and Compensation
The contract should specify the party responsible for the settlement of various costs that
may emanate from the contractual arrangement, including legal, audit, and supervisory
examination costs.
The contract should also specify the party that's responsible for the purchase and
maintenance of any equipment, hardware, and software required for the service.
The financial institution should ensure that the compensation structure does not
create incentives for imprudent risk-taking by the service provider.
Right to Audit
The agreement may give the financial institution the right to audit the service provider or
give the institution (or its proxies) access to the provider's financial statements.
Performance Standards
The agreement should specify measurable performance standards for the service or product.
Confidentiality and Security of Information
The contract must contain extensive provisions that address the confidentiality and
security of the institution's information.
In order to keep crucial information confidential, the contract may specify the type of
information the service provider is permitted to access.
The contract should also require the service provider to disclose any data breaches in a
timely manner.
Ownership and License
The contract should specify instances when the service provider is allowed to use the
institution's data.
In addition, there should be clarity on the ownership of data produced by the service
provider.
If there's any software purchase from service providers, the financial institution should
ensure that there are escrow agreements that allow it to access the source code and
related documentation under specified conditions, such as the provider's insolvency.
Indemnification
Agreements should be set out in a way that allows the financial institutions to seek
indemnification from the service provider if the latter’s negligence leads to claims against
the institution.
Default and Termination
Events that constitute default should clearly be spelled out, including a list of acceptable
remedies and opportunities for curing defaults.
Dispute Resolution
The contract should set out dispute resolution and escalation mechanisms to ensure that
disputes are resolved expeditiously and that services continue while a dispute is being resolved.
Insurance
The financial institution should seek proof of insurance and seek notification whenever
the provider's coverage changes materially or is terminated.
Customer Complaints
Agreements should give details regarding the responsibilities of both parties in case there’s
a customer complaint.
Business Resumption and Contingency Plans
Agreements should address the steps that service providers should take to restore
services in the event of a disruption. The financial institution should require the service provider to back up data and maintain
disaster recovery and contingency plans.
Foreign-based Service Providers
A contract should specify the choice of law and jurisdiction that will apply in dispute resolution.
Subcontracting
If the agreement allows for subcontracting, the subcontractor should be subject to the
same contractual provisions and oversight as the primary service provider.
Practice Question
Bank X plans to outsource some of its operations and wants to manage the risks
associated with outsourcing. They have sought the assistance of the bank's legal counsel
to draft a contract with the service provider.
Which of the following elements should Bank X ensure is included in the contract with
the service provider?
A. The contract should include a clause stating that the service provider is not
liable for any losses incurred by the bank.
B. The contract should describe the relationship only in general terms for each
party, leaving the details to be agreed upon verbally between the parties.
C. The contract should clearly define the rights and responsibilities of each party,
including the scope of services and compliance with applicable laws and regulations.
D. The contract should focus primarily on minimizing
costs, with less emphasis on ensuring the quality of the outsourced services.
The correct answer is C.
A well-defined contract should clearly define the rights and responsibilities of each party,
including the scope of services, compliance with applicable laws and regulations, and
whether (and how) subcontracting is allowed, if at all.
Option B is not advisable, as a well-defined contract should clearly outline the rights and
responsibilities of each party in writing, rather than relying on verbal agreements.
Option D is not in line with the best practices, as a well-defined contract should focus on
defining the scope and quality of services, rather than merely minimizing costs.
Reading 111: Case Study: Third-Party Risk Management
After completing this reading, you should be able to:
Explain how risks related to the use of third parties can arise and describe characteristics
of an effective third-party risk management process.
Describe the lessons learned from the case studies involving a data breach at a third-party
service provider and failures in vendor risk management.
Third-party risk management (TPRM) is the identification, assessment, mitigation, and monitoring of
the risks associated with the usage of third parties, such as contractors, suppliers, service providers, and vendors.
To cut costs, financial firms frequently use third parties to provide services. They contract out a
wide range of functions to specialized businesses with a competitive edge in certain systems,
procedures, and expertise. Increased third-party risk exposure does, however, present new
challenges.
Risks posed by fourth parties can also be managed as part of third-party risk management. These are the third parties' own suppliers and subcontractors.
Service interruptions, poor service quality, fraud, unintentional or intentional information leaks,
compliance violations, espionage, IP theft, and reputational harm are common third-party risks. Over
time, the risk of "third party failure" has become more significant, constituting a level 1 category of
operational risk for many institutions and the focus of specialized risk management techniques.
Financial companies are increasingly outsourcing core operations. These businesses usually have
personnel who reside in countries other than the one where the contracting bank is situated. Along
with country risk, outsourcing to a business in another legal system also carries compliance and legal
risk.
Information security management, data privacy risks, and business continuity management all have a
lot in common with TPRM. However, TPRM continues to be challenging for many firms.
In the past, regulators have established guidelines to assist banks in managing third-party risk.
More recently, the growing interconnectedness between corporations has resulted in a flurry of newly suggested and proposed laws and guidance.
Shared Assessments, a US-based certification body that specializes in TPRM and provides third-party
risk management specialists with professional certification, has proposed the TPRM life cycle outlined below.
I. Business Model Decisions
Important strategic choices that relate to a firm's risk appetite include the choice of whether to
outsource or keep some tasks in-house and the choice of the quality and cost of the supplier.
II. Evaluation, Risk Rating, and Due Diligence
A crucial element in TPRM is third-party evaluation and due diligence. Using a proportional approach
to risk management is a good strategy. Due diligence and verifications necessary for an IT cloud
provider responsible for hosting sensitive data are different from those required for an external
provider of a non-critical service.
More businesses are using standard assessment questionnaires to streamline and standardize due diligence.
III. Contracts and Contract Management
In third-party management, contracts, service level agreements (SLAs), and contract management are critical.
The contract's terms should specify where, when, and what can and cannot be done, and
should clarify what each party can and cannot do. Contracts and service level
agreements (SLAs), which determine the quality and timing of the services supplied, outline the
obligations and expectations of both parties. One of the common causes of third-party relationship
failure and increased operational risk is a lack of awareness or ambiguity in either party's
expectations. For this reason, it is advisable to clearly express these expectations in the contract
and an SLA, which should, wherever possible, include precise and quantifiable quality measures.
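As an illustration of "precise and quantifiable quality measures," the sketch below compares measured service metrics with assumed SLA targets and reports breaches. The metric names and thresholds are hypothetical.

```python
# Illustrative sketch: check measured service metrics against quantified SLA targets.
# The metrics and thresholds below are assumptions for illustration only.
SLA_TARGETS = {
    "availability_pct":        {"min": 99.9},
    "avg_response_time_ms":    {"max": 300},
    "incident_resolution_hrs": {"max": 8},
}

def sla_breaches(measured: dict) -> list:
    """Return a list of human-readable SLA breach descriptions."""
    breaches = []
    for metric, bounds in SLA_TARGETS.items():
        value = measured.get(metric)
        if value is None:
            breaches.append(f"{metric}: not reported")
        elif "min" in bounds and value < bounds["min"]:
            breaches.append(f"{metric}: {value} below minimum {bounds['min']}")
        elif "max" in bounds and value > bounds["max"]:
            breaches.append(f"{metric}: {value} above maximum {bounds['max']}")
    return breaches

print(sla_breaches({"availability_pct": 99.5, "avg_response_time_ms": 250}))
# ['availability_pct: 99.5 below minimum 99.9', 'incident_resolution_hrs: not reported']
```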
Foreign-based service providers must also be aware of and abide by any local legislation that may be
in force.
Contracts are only enforceable if both parties involved have signed them and only after all
outstanding issues have been evaluated and resolved. Provisions for a periodic
contract review to ensure compliance, and a remediation process to address contract shortfalls, are both
recommended.
A firm can manage third-party risk through contract terms. It's best practice for a contract to set
out the requirements the third party must meet; these may largely be copies of the guidelines a company uses for its own vendors. It is essential for
companies to have the ability to audit their vendors. This is part of continuous monitoring.
IV. Continuous Monitoring
Third-party relationship management includes monitoring service provision, quality SLAs,
and compliance with the law and contract terms. The relationship agreement needs to be reviewed
for continuation or modification. The shorter the reassessment and review cycle, the more
frequently the first four steps of the TPRM life cycle are revisited. Defining reassessment
trigger events, rather than relying solely on end-of-contract or scheduled periodic
assessments, is a good strategy. Reassessment triggers can be particular occurrences such as data breaches, changes in
business circumstances such as mergers, breaches of the contract, acts of God, or changes in the
regulatory environment.
It is advantageous for businesses to have an exit strategy in case things do not go as planned.
V. Remediation or Termination
Third-party partnerships normally end when the services are no longer needed; this is the usual reason for
termination. Every contractual partnership should, however, have a grievance mechanism as well as
an exit strategy or termination clause in case the relationship worsens beyond repair.
A third-party relationship might cease for a variety of reasons. It is noteworthy that in a third-party
relationship, fairness is important. For this reason, both parties should prepare for a wind-down
process that includes the transfer of intellectual property (IP), a plan to move to in-house services
or another service provider, and evidence that any sensitive data the third-party vendor held has been securely returned or destroyed.
A US bank called Capital One, which offers credit cards, auto loans, checking accounts, and savings
accounts, disclosed that information on roughly 100 million Americans had been accessed unlawfully
in 2019.
A former employee of Amazon Web Services, a provider of cloud computing services, was later
arrested by federal authorities and charged with hacking into the bank's system and obtaining roughly 140,000
Social Security numbers and about 80,000 linked bank account numbers.
The attacker successfully accessed the files kept in an Amazon Web Services (AWS) database by
taking advantage of a poorly configured web application firewall. Encrypting the data to render it
unusable for attackers would have been an appropriate remedy for this vulnerability. However, the
compromised data had not been fully protected in this way.
In order to ensure that there were no additional instances of inappropriate setups in its environment,
the bank fixed this issue. Even though the largest cloud service providers integrate strong security
into their solutions, risk management, monitoring, backups, and maintenance are an organization's
responsibility.
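The following is a hedged sketch of the kind of remedy described above: encrypting sensitive records before they are written to cloud storage so that a leaked file is unreadable without the key. It uses the third-party Python cryptography package and is purely illustrative; it does not describe Capital One's or AWS's actual controls.

```python
# Illustrative sketch of encrypting sensitive data before storing it in the cloud.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, keep keys in a key management service
cipher = Fernet(key)

record = b"ssn=123-45-6789;account=0001"   # made-up sensitive record
token = cipher.encrypt(record)       # ciphertext is safe to store in a cloud bucket
print(token != record)               # True: the stored value is unreadable without the key

restored = cipher.decrypt(token)     # decryption requires access to the key
print(restored == record)            # True
```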
A US banking regulator penalized Capital One $80 million after finding that the bank had failed to set
up efficient risk assessment procedures before moving important IT operations to the public cloud.
The bank had a history of failing to recognize or correct its weak control environment.
US banking regulators fined Morgan Stanley $60 million for negligence in risk management and
control of vendors during the decommissioning of two wealth management data centers, as well as other retired hardware.
According to the regulator, the bank failed to sufficiently assess the risks of engaging third-party
vendors, ineffectively assessed the risk of decommissioning its hardware, and failed to keep an
adequate inventory of the customer data stored on the decommissioned devices.
The regulator claims that the bank neither chose the third-party contractors carefully nor
appropriately assessed their performance. Upon discovering that some of the personal data from
computers the bank had disposed of still remained on them, the regulator directed the bank to inform
potentially impacted clients that their data might have been compromised. The bank offered the
potentially affected clients credit monitoring services.
This instance demonstrates that while a firm may transfer some operational risks associated with an
activity through outsourcing, it cannot transfer the responsibility for those risks.
Practice Question
Which of the following steps of the third-party risk management life cycle involves
keeping track of service provision, quality SLAs, and compliance with regulations?
A. Remediation or termination.
B. Business model decisions.
C. Evaluation, risk rating, and due diligence.
D. Continuous monitoring.
The correct answer is D.
Continuous monitoring involves keeping track of service provision, quality SLAs, and
compliance with regulations as well as with the terms of the contract. It also involves
identifying issues that require remediation or repair.
B is incorrect. Business model decisions involve the choice of whether to outsource or keep tasks in-house, as well as
consideration of a service provider's quality and price. These are important strategic
decisions that relate to a firm's risk appetite.
C is incorrect. Evaluation, risk rating, and due diligence involve sound due diligence and a proportionate risk
management practice, where there is a need for more extensive due diligence for third parties
that will have access to sensitive information compared to those that will not.
Reading 112: Case Study: Investor Protection and Compliance Risks in
Investment Activities
After completing this reading, you should be able to:
Describe and provide lessons learned from the case studies involving violations of investor
protection and compliance rules.
The phrase "investor protection" here refers to laws and rules designed to make sure that investors
are treated fairly and are protected from misconduct.
Both the EU and the US have regulations and laws pertaining to investor protection. Violations of
investor rights have led some banks and investment firms to pay substantial fines and penalties.
Clients, Products, and Business Practices (CPBP), the fourth event category of the Basel taxonomy
for operational risk, is typically where compliance with financial rules and regulations lies.
Compliance with investor protection laws typically falls under subcategory 4.2: "Improper Business
or Market Practices."
MIFID
The Markets in Financial Instruments Directive (MIFID) is a 2004 EU directive that has been in
effect throughout the EU since November 2007. It aims to provide investors in financial instruments
with a high level of protection.
MIFID specifies organizational and business practices for investment firms as well as authorization
requirements for regulated markets. In order to prevent market abuse, it establishes
conduct-of-business rules and transparency requirements for trading.
The European Commission updated the MIFID framework in 2014, creating "MIFID II" and "MIFIR"
(Markets in Financial Instruments Regulation). MIFID II added new requirements for the public
disclosure of trading activity data as well as for the disclosure of transaction data to supervisors and
regulators. The MIFIR regulation addresses incentive systems as well as other facets of financial
markets regulation.
Dodd-Frank
The Dodd-Frank Wall Street Reform and Consumer Protection Act in the United States was
implemented as a regulatory response to the 2007-2009 financial crisis in an effort to avert its
recurrence. Among other things, the Act:
Formed a committee to engage with the Securities and Exchange Commission (SEC)
regarding regulatory priorities surrounding new financial products, fee structures, and
trading methods.
Established the Financial Stability Oversight Council (FSOC) to monitor and respond to emerging threats to financial
stability.
Established the Volcker Rule that sought to stop commercial banks from profit-driven
speculation and proprietary trading and limited banks' investments in hedge funds and private equity funds.
Unintentional compliance breaches are frequently brought on by poor policies or human error.
Many of the measures taken to stop internal fraud also work to prevent errors. The intention of the
actor is what distinguishes fraud from an honest error.
The same factors that contribute to internal fraud also contribute to market abuse risk. These
include:
The asymmetry in information between buyers and sellers. Compared to banks and asset
managers, retail investors typically have far less information.
Traders' conflicts of interest when they trade both for the company and for their clients'
books.
Economic factors: spikes in market volatility boost the volume of transactions, which
increases the opportunities for abuse.
Effective measures for ensuring that investment activities are carried out properly include:
Employee education.
A culture of ethics that is robust and is maintained by regular onboarding and training.
According to US fine statistics for investor protection violations, 6,612 penalty records totaling
close to $82 billion in fines and penalties were recorded between 2000 and 2022.
As part of the settlement of a lawsuit in which UBS was accused of "misrepresenting auction rate
securities to investors as safe, cash-equivalent products when in reality they faced increasing
liquidity risk," the harshest individual penalty was imposed on UBS. The firm was directed to buy
back the auction rate securities from affected investors.
JP Morgan currently holds the record for paying the biggest fine for spoofing. In this context,
spoofing is the practice of quickly submitting and canceling orders to create the appearance of
market demand and so manipulating prices to one's benefit. JP Morgan was fined $920 million by the
CFTC.
These case studies demonstrate how perceptions and laws have evolved over time. Prior to the
creation of investor protection laws, techniques such as spoofing were commonplace.
The Financial Industry Regulatory Authority (FINRA) operates under the SEC's supervision and regulates brokerage firms. It is committed to maintaining market
integrity and protecting investors.
FINRA Rule 5310 imposes a duty on firms to seek the most advantageous terms reasonably available for a
customer's orders (best execution). Between 2014 and 2018, FINRA discovered an anomaly in
Deutsche Bank Securities' transactions. The bank's smart order router routed customer market orders first
to its own trading venue before routing any part of the order to an exchange, unless customers opted out of this
routing preference. This was known as the "SuperX ping," and it delayed the execution of customers' market
orders and lowered fill rates. According to the firm's filings, it's
possible that this behavior led to trade rebates being obtained by the company.
Furthermore, Deutsche Bank Securities failed to disclose this practice to the markets to which it routed
orders. Though it neither disputed nor accepted FINRA's allegations, Deutsche Bank Securities
agreed to settle the matter and pay a fine.
Regulators typically impose harsh fines in cases of a breach where businesses continue non-
compliant practices for profit, offsetting any benefits accrued. The penalties are intended to serve
as a deterrent against additional deviations and to encourage other organizations to modify their own practices.
Practice Question
Which of the following measures was introduced by the Dodd-Frank Act?
A. Employee education.
B. Best deal execution for clients.
C. The Volcker Rule.
The correct answer is C.
Dodd-Frank established the Volcker Rule, which seeks to stop commercial banks from
profit-driven speculation and proprietary trading, aside from limiting banks' investments in
hedge funds and private equity funds.
B is incorrect. Best deal execution for clients is one of the issues that the Markets
in Financial Instruments Directive (MIFID) addresses, not the Dodd-Frank Act.
Reading 113: Supervisory Guidance on Model Risk Management
After completing this reading, you should be able to:
Describe model risk and explain how model risk can arise in the implementation of a model.
Explain the best practices for the development and implementation of a model.
Describe elements of a strong model validation process and challenges to an effective validation process.
A model is a quantitative method, system, or approach that applies statistical, economic, financial, or
mathematical techniques and assumptions to process input data into quantitative estimates. A model has three components:
i. A data input component: This delivers assumptions and data to the model.
ii. A processing component: This transforms the inputs into estimates.
iii. A reporting component: This translates the estimates into useful business information.
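A toy sketch of the three components is shown below, using an assumed expected-loss calculation purely for illustration.

```python
# Minimal sketch of the three model components (input, processing, reporting).
# The expected-loss calculation and numbers are illustrative assumptions.
def input_component():
    """Delivers assumptions and data to the model."""
    return {"exposure": 1_000_000, "pd": 0.02, "lgd": 0.45}

def processing_component(data):
    """Transforms the inputs into a quantitative estimate."""
    return data["exposure"] * data["pd"] * data["lgd"]   # expected loss

def reporting_component(estimate):
    """Translates the estimate into useful business information."""
    return f"Expected loss for the portfolio: {estimate:,.0f}"

print(reporting_component(processing_component(input_component())))
# Expected loss for the portfolio: 9,000
```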
Model Risk
Model risk is the likelihood of unfavorable consequences arising from decisions based on incorrect
or misused model outputs and reports. Any use of models inevitably carries some of this risk.
Model risk can lead to financial losses, poor business and strategic decisions, or even damage to a
bank's reputation. The following are some of the model-related risk types:
i. Data: A model requires data at various stages. Data can be used in the model development
stage or even in the usage stage. The data used may, however, be incomplete, corrupt, or
erroneous.
ii. Implementation: There may be model errors as a result of incorrect or incomplete
implementation.
iii. Statistical: There are various uncertainties as a result of the chosen methodology. An
inappropriate choice of statistical technique can produce misleading estimates.
iv. Parameters: Risk may arise as a result of limitations or uncertainties due to invalid
parameter estimates or assumptions.
v. Calibration: There may be errors resulting from the incorrect or inaccurate fitting of
the model to the data.
viii. Inventory: Risk may arise due to incomplete or inaccurate model inventories or the use of
models outside their approved scope.
Model risk may arise during the implementation of a model. In other words, the implemented model
may stray from its design, making it different from what the designers and users believe it to be. This
may happen in several ways:
i. Wrong or different inputs: This may result from obtaining data for a key factor from a
source other than the one intended.
ii. Wrong or different mathematical formulas: This results from simple typing errors or
from a coding change that unintentionally alters the formula.
iii. Misrepresentation of outputs: This may result from the juxtaposition of, say, two
output fields in a report, causing results to be read incorrectly.
Implementation error may result in a good model being implemented as a bad model. If the
implemented model deviates from its design, then it is a different model from what the designer and
users believe it to be. T his may result in unpredictable outputs. Implementation error is generally
occasioned by human error. With extensive software projects, some coding or logic errors are
inevitable.
The board of directors and senior management approve model risk governance at the highest level.
They do this by establishing a bank-wide approach to model risk management. A bank's board and
senior management need to build a strong model risk management framework that fits into the
organization's broader risk management. That framework should include model development,
implementation, use, and validation standards. A model risk function that reports to the Chief Risk
Officer (CRO) should be established. Besides, the model risk function should be responsible for the
model risk management (MRM) framework and governance. There is also a need for an independent validation function.
Managing a model's lifecycle requires consideration of various factors to maintain its quality. This
involves understanding the model development, documentation, validation, inventory, and follow-up.
Banks should keep a complete inventory of every existing model to facilitate MRM and keep a record
of all uses, changes, and approval status. Documentation of model development should be sufficiently
detailed to enable a proper understanding of how the model works, its limitations, and the key
assumptions. Documentation should, at least, include the following: data sources, model methodology
report, model calibration report, test plan, user manual, technical environment, and operational risk.
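As an illustration of such an inventory, the sketch below defines a minimal model inventory record that tracks uses, changes, and approval status. The fields shown are assumptions, not a prescribed schema.

```python
# Illustrative sketch of a model inventory record; field names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelInventoryRecord:
    model_id: str
    name: str
    owner: str
    approved_uses: List[str]
    approval_status: str = "pending"        # e.g., pending / approved / retired
    change_log: List[str] = field(default_factory=list)

    def log_change(self, description: str):
        self.change_log.append(description)

record = ModelInventoryRecord(
    model_id="MR-042",
    name="Retail PD model",
    owner="Credit Risk Analytics",
    approved_uses=["IFRS 9 provisioning"],
)
record.approval_status = "approved"
record.log_change("Recalibrated on 2023 H1 data")
print(record.approval_status, len(record.change_log))   # approved 1
```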
Independent model validation ensures that the model performs as expected, meets the bank's needs,
and meets regulatory requirements. Model validation covers the following: model purpose, model
design, assumptions and development, performance, usage, and end-to-end model documentation,
among others. No conflict of interest should arise during the model validation process. Banks should
have a decision-model performance monitoring system that allows for early detection of deviations
from targets and for remedial or preventative action to be taken. A comprehensive follow-up may
examine the following: statistical model, decision strategies, and expert adjustments, among others.
Model Infrastructure
Optimized processes and technological platforms should support the MRM framework. The supporting infrastructure spans source data systems, analytical engines, and reporting platforms.
There should be good documentation of the interconnectivity between the source data systems,
analytical engines, and reporting platforms, with clearly stated rationale for platform choices. For the
underlying systems, the completeness, accuracy, and timeliness of the data that feed models should
be a top priority. There should be a clear separation between input, model, and output data. The
choice process between the internal and vendor systems should be clearly articulated.
Additionally, there is a need for a robust vendor selection process undertaken independently so that
the bank can consider how the systems fit within the current infrastructure. Some financial
institutions are changing from vendor solutions to open-source modeling platforms, such as Python,
Julia, and R, in areas such as machine learning and derivatives pricing, among others. The motivation
for this is transparency, speed, cost, flexibility, and auditability. Models need to pass regulatory tests and internal validation regardless of the platform used.
This assessment involves the periodic activities that occur after a model has been approved for use.
This helps verify the appropriate usability and functionality of the model. After the model is validated
and approved, it can be used officially for internal reporting within a bank. Once the validation is done
and the regulation test completed, there is a risk of complacency. Potential errors can still arise:
i. The model can reach its theoretical limitations: Models have performance limits. If
the user needs something that the model cannot deliver, one may get incorrect results, such
as extreme or nonsensical values.
ii. Errors in the live environment: The model may fail to work properly due to errors in
the live environment as a result of poor data feeds or system changes.
iii. Model used incorrectly: Models are developed with a particular need in mind. However, a
model may be applied to purposes it was never designed for as the business
grows.
Models usually generate reports. In fact, this is the most visible result of a model. Ongoing
monitoring should continue throughout a model's life to enable tracking of known model limitations
and to identify new ones. Model outputs can as well be verified through the use of appropriate
benchmarks, which identify any rapid divergence. Discrepancies between the model output and
benchmarks provide a need for further investigation into the sources and degree of the differences.
The note on supervisory guidance on model risk management, also referred to as SR 11-7, provides a
guideline for the whole process of model risk management for the banking sector. The following are
some of the best practices banks should adhere to during model development and implementation.
A clear statement of purpose: The model development and implementation process should start
with a clear statement of purpose so that it does not divert from its intended purpose. There should
be clear documentation entailing the model design, theory, and logic, supported by published research
and industry practice. All the merits and limitations of the various model techniques, mathematical
formulations, and assumptions should also be documented.
A careful assessment of data quality and relevance, with appropriate documentation:
Data is essential in the development of a model. There is a need to ensure that the quality of the data
obtained does not compromise the model's validity. Developers should explain the importance and
relevance of data in the model and how it will assist in achieving the model's intended purpose. All the
limitations of the data and information used concerning representation and assumptions should also be
explained.
Model testing: This is an essential part of model development. It involves the evaluation of various
model components and their overall functionality and assessing whether they are performing as
intended. The accuracy of a model, its potential limitations, the model's behavior over a range of
input values, and the impact of assumptions (this helps identify situations where the model might be
unreliable) are all assessed during the testing. Testing activities should also be appropriately
documented.
The sound development of judgmental and qualitative aspects: A model may be adjusted to
modify the statistical outputs with judgmental and qualitative aspects. These adjustments should be
documented.
Calculations of the model should be appropriately coordinated with the capabilities
and requirements of information systems: Models are usually incorporated in larger
information systems that manage data flowing into a model from various sources and handle the
aggregation and reporting of model outcomes. All model calculations must be appropriately
coordinated with the capabilities and requirements of information systems. The model risk
management team relies on substantial investment in supporting systems to ensure data and
reporting integrity. Further, such investments enable the model risk management team to secure
controls and run tests to ensure proper implementation of models, effective systems integration, and
appropriate use.
Model validation involves a set of processes and activities intended to demonstrate that models are
performing as per the design objectives and business uses. Further, model validation identifies the
potential model limitations and assumptions. Finally, it assesses the possible impact of these
limitations. The following are the critical elements of a robust model validation process.
Evaluation of Conceptual Soundness
This entails the assessment of the quality of the model design, theory, and logic. In this process, the
model documentation and justifications for the methods used and variables selected for the model are
reviewed. Model documentation and testing help understand the limitations and assumptions of the
model. Documented evidence should support all model choices, for example, data, the overall
theoretical construction, assumptions, and specific mathematical calculations.
Further, validation ensures an informed judgment in model design and development that is
consistent with published research as well as industry practice. All model aspects are subjected to
critical analysis. The analysis involves the evaluation of the quality and extent of the developmental evidence.
Ongoing Monitoring
Monitoring starts after a model is first implemented in production systems for an actual bank's use.
Monitoring should continue over time, depending on the availability of new data and the nature of the
model, among other factors. Ongoing monitoring ensures that the model is correctly implemented and
used appropriately. It also plays an essential part in evaluating product changes, exposures, clients,
or even market conditions and their implications for the current model. This determines the need for
adjustment, redevelopment, or even replacement of the current model to fit those changes and new conditions.
Banks need to design a program of ongoing testing and evaluation of the performance of a model. This
should be done together with procedures that respond to any issues identified. The program should include the following:
1. Process verification: Process verification involves confirming that all the model
components are functioning appropriately as per the model design. It also validates the
accuracy of internal and external data inputs. In addition, process verification ensures
internal and external data are complete and consistent with the model's purpose and design.
2. Benchmarking: Benchmarking involves the comparison of a given set of a model's inputs
and outputs to estimates from an alternative set of internal or external data or models. This
can be incorporated into model development and ongoing monitoring. Discrepancies between
the output of the model and that of the benchmark call for further investigation into the source and magnitude of the differences.
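A minimal benchmarking sketch is shown below: it compares a model's outputs with those of an alternative (benchmark) model on the same cases and flags divergences above an assumed tolerance. The data and tolerance are hypothetical.

```python
# Illustrative benchmarking sketch: flag model-vs-benchmark divergences above a tolerance.
def benchmark_divergences(model_outputs, benchmark_outputs, tolerance=0.10):
    """Return the cases whose relative difference exceeds the tolerance."""
    flagged = []
    for case_id in model_outputs:
        m, b = model_outputs[case_id], benchmark_outputs[case_id]
        rel_diff = abs(m - b) / abs(b) if b != 0 else float("inf")
        if rel_diff > tolerance:
            flagged.append((case_id, m, b, round(rel_diff, 3)))
    return flagged

model = {"loan_1": 0.021, "loan_2": 0.050, "loan_3": 0.004}
bench = {"loan_1": 0.020, "loan_2": 0.070, "loan_3": 0.004}
print(benchmark_divergences(model, bench))
# [('loan_2', 0.05, 0.07, 0.286)] -- a divergence requiring further investigation
```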
Outcomes Analysis
This involves a comparison of the output of a model to the corresponding actual outcomes. The
comparison depends on the set objectives of the model, which may include an assessment of the
accuracy of estimates and an evaluation of rank-ordering ability, among others. These comparisons
form a tool for evaluating the model's performance. They do so by establishing the expected ranges
of the outcomes with respect to the intended objectives. Further, they assess the reasons for the
observed variation. If the outcomes analysis gives evidence of poor performance, the bank is
expected to take action to address the issues. Outcomes analysis relies on statistical tests or other
quantitative measures. Outcomes analysis is conducted on an ongoing basis to ensure that the model
continues to perform in line with design objectives and the bank's uses.
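The sketch below illustrates a simple outcomes analysis: predicted default rates are compared with realized rates per rating grade, and grades falling outside an assumed tolerance band are flagged for investigation. The data and band are hypothetical.

```python
# Illustrative outcomes analysis: compare predicted vs realized default rates by grade.
def outcomes_analysis(predicted, realized, band=0.5):
    """predicted/realized: dict of grade -> default rate. A realized/predicted ratio
    within [1 - band, 1 + band] is treated as in line; otherwise it is flagged."""
    findings = {}
    for grade, p in predicted.items():
        r = realized[grade]
        ratio = r / p if p else float("inf")
        findings[grade] = "in line" if (1 - band) <= ratio <= (1 + band) else "investigate"
    return findings

predicted = {"A": 0.01, "B": 0.03, "C": 0.08}
realized  = {"A": 0.012, "B": 0.06, "C": 0.09}
print(outcomes_analysis(predicted, realized))
# {'A': 'in line', 'B': 'investigate', 'C': 'in line'}
```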
SR 11-7 provides guidelines for an effective model validation process that banks need to adhere to.
The following are some of the challenges banks face in adhering to those requirements.
Use of vendor or third-party models: According to SR 11-7, all models, whether internally
developed or purchased, should be validated with the same rigor. However, there is often a lack of vendor
transparency regarding intellectual property. This may require banks to relax the rigor of their
validation process and rely on benchmarking and outcome analysis, among other methods.
Model documentation: According to SR 11-7, model documentation should have enough detail and
clarity that any knowledgeable third party can use the documentation to recreate the model
without access to the model development code. However, some newer models, for example,
machine learning models, have a much more complex development process. This makes the
documentation process more challenging. Even then, SR 11-7 recommends standardization of the documentation process.
The verification process: SR 11-7 requires banks to use models that the independent verification
team has approved. Some models are too complex, even for the verification team to understand.
Therefore, the verification team should have the necessary training to understand all the possible modeling techniques used by the bank.
Explainability challenge: SR 11-7 requires the explainability of a model design as well as the
selection of variables. This becomes challenging, especially when the model in question involves
complex neural networks. These models may end up being rejected due to lack of explainability, even when they perform well.
Conceptual soundness: Assessing the conceptual soundness of a model entails assessing the
quality of the design and construction, review of the documentation, and confirming the soundness of
the selected variables. This requires the validation practitioners to be familiar with all the possible modeling techniques.
However, this is not entirely possible since new techniques always emerge. Similarly, new models
with higher complexities are invented day in and day out. This makes it difficult for the validation
team to assess the fitness-for-purpose and the suitability of these models for the intended application.
Practice Question
Which of the following gives a reason why a firm should invest in model risk
management?
Appropriate use.
Model development takes some time, and therefore, a firm should plan for it well in advance.
Moreover, losses resulting from a flawed model are ultimately the responsibility of the entire firm.
Reading 114: Case Study: Model Risk and Model Validation
After completing this reading, you should be able to:
Define a model and describe different ways financial institutions can become exposed to
model risk.
Describe the role of the model risk management function and explain best practices in the
management of model risk.
Describe lessons learned from the three case studies involving model risk.
Model Definition
When model risk management was new, a model was defined as a tool used for forecasting based on
complex statistical techniques, known as quantitative models today. Given that these were new
techniques and had unknown risks, this definition, which was limited to statistical models, made sense
at that time. However, as model risk management evolves, the definition of a model has expanded to
According to the Fed, "the term model refers to a quantitative method, system, or approach that
applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to
process input data into quantitative estimates." The definition of a model also covers quantitative
approaches whose inputs are partially or wholly qualitative or based on expert judgment, provided
that the resulting output is quantitative in nature.
Currently, the industry consensus is that a model is any estimation method based on data and a set of
assumptions that generates an uncertain estimate. Rather than focusing on the method or technique used, this view emphasizes the uncertainty of the estimate and how it is used.
A firm can be exposed to two broad risks as a result of model risk:
1. Execution risk: Occurs when a model fails to perform the purpose it's designed for.
2. Conceptual errors: Occur when the wrong assumptions or modeling techniques are used
in the model.
Individually, execution errors may seem too minor to worry about. However, over time, such
insignificant errors coupled with bad luck can lead to significant material losses. Among the examples
covered in this reading are the acquisition of Lehman Brothers by Barclays and the NASA Mars
orbiter case studies. T hese errors may be due to coding errors, implementation errors, or the use of
wrong data. Tools that are not considered models can also have such errors.
Unlike execution errors, conceptual errors are not always a matter of right or wrong, and
therefore, they are difficult to identify. Different modelers may have different but valid opinions
about a model assumption. In such cases, model risk management should ensure transparency. To
ensure models are used appropriately, model users should be informed about the limitations of a
model. A model could be "right" in a particular context but "wrong" in another. Modelers often have
this knowledge and even discuss it in their documentation. However, it should not be assumed that
model users know and understand this. And thus, these assumptions should be precisely explained to
the model users. A good example is the CDO case study from the financial crisis of 2007–2009, which is discussed below.
Typically, model risk management (MRM) comprises independent experts who are not involved in
model development. The MRM function's responsibilities cover all aspects of a model throughout its
lifecycle. The MRM specifies model documentation standards, data quality expectations, and
versioning criteria. Most importantly, MRM is responsible for reviewing and challenging models to
minimize risks.
MRM functions assign models to different tiers based on the risk they pose to the firm to balance the
cost of model validation and the necessity to ensure the model risk is sufficiently addressed. When
assessing model tiering, the materiality of the output is usually a factor to consider. The model tier
then determines the depth and frequency of validation.
The validation team pays close attention to the models in the highest tier. The team performs a
detailed review and comprehensive backtesting. Besides, it assesses the reliability of the model's
output. High-tier models are also validated more frequently, usually every two or three years, while lower-tier models are validated less often.
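The sketch below shows one possible (hypothetical) tiering rule based on materiality and complexity, with the tier mapped to an assumed validation cycle; actual cut-offs and frequencies vary by institution.

```python
# Hedged sketch of a model-tiering rule; thresholds and cycles are illustrative assumptions.
def assign_tier(materiality_usd: float, complexity: str) -> int:
    """Return 1 (highest risk) to 3 (lowest risk)."""
    high_materiality = materiality_usd >= 1_000_000_000     # assumed threshold
    complex_model = complexity in ("high", "machine_learning")
    if high_materiality and complex_model:
        return 1
    if high_materiality or complex_model:
        return 2
    return 3

VALIDATION_CYCLE_YEARS = {1: 2, 2: 3, 3: 5}   # e.g., full validation every N years

tier = assign_tier(materiality_usd=5_000_000_000, complexity="high")
print(tier, VALIDATION_CYCLE_YEARS[tier])      # 1 2
```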
Regardless of the tier, all models undergo an annual review of the environment, data, and other
important elements. These annual reviews help confirm that no material changes have occurred
since the last full-scope validation. The model can continue to be used if no material changes are observed.
Apart from these validations and reviews, the MRM functions monitor the models' performance
through monitoring reports produced by model owners. Reports are produced at varying intervals
depending on the frequency at which models are used. Different models attract different monitoring
frequencies.
Model risk management should be perceived as a continuous process rather than a point-in-time
validation and review exercise. Many firms focus on periodic rather than continuous validation.
However, periodic validation is more manageable and predictable, and therefore staffing needs are
easily predictable. Periodic validation allows banks to operate much smaller validation teams because
validators can move to the next scheduled task once done with the task at hand.
The point-in-time model was widely adopted during the early days of model risk management.
Regulatory models with long development and deployment cycles influenced this move.
As the reliance on models increases, and the environments in which models are deployed become
more dynamic, MRM functions should adopt a more continuous risk management approach.
There is a risk that modeling teams may become complacent with regard to risk management
practices due to the presence of large and competent validation teams. The second line is intended to
serve as an independent backstop in case the first line fails to catch errors. However, its existence
should not result in the first line abdicating its own responsibilities. In the context of model risk,
model developers and model owners form the first line of defense: it is they who generate the risk to
which the organization is exposed. Consequently, the first line owns the risk and should take all
necessary steps to mitigate it. On the other hand, the second line should independently assess and challenge that risk.
The third line of defense provides independent assurance and validation of the effectiveness of the first and second lines of defense. The third line includes internal audit or other independent assurance functions. It is responsible for evaluating the effectiveness of model risk management practices and providing assurance to senior management and the board of directors that model risk is being managed effectively.
In more mature institutions, the modeling teams typically form a quality control/quality assurance (QA/QC) team. First-line QA/QC teams play a pivotal role in mitigating model risk, especially execution risk.
This case study focuses on the collapse of the CDO market in 2008. In the early 2000s, David X. Li published a paper on pricing CDOs that showed how to price pools of assets without relying on observed historical correlations. Li's approach was based on the Gaussian copula and used CDS prices to infer the correlation of the assets.
Li's model thus applied CDS prices rather than observed historical correlation to price CDOs. When people started using the pricing formula in the early 2000s, CDSs had been around for only about a decade. As a result, the sample was relatively benign—housing prices rose consistently, and defaults were at an all-time low. Correlations implied by CDS prices in this environment were very low and extremely sensitive to the trajectory of house prices. When house prices reversed course, correlations implied by CDS prices shot up, with dramatic consequences for CDO prices.
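The sensitivity described above can be illustrated with a short simulation. The sketch below is a minimal one-factor Gaussian copula default model with hypothetical parameters (pool size, default probability, correlation values); it is not Li's pricing formula itself, but it shows how strongly the tail of pool losses depends on the correlation assumption. It requires NumPy and SciPy.

```python
# One-factor Gaussian copula default simulation (hypothetical parameters).
import numpy as np
from scipy.stats import norm

def pool_loss_quantile(rho, n_names=100, pd_1y=0.02, n_sims=50_000, q=0.999, seed=0):
    """99.9th percentile of the fraction of names defaulting within one year."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(pd_1y)                    # default barrier per name
    z = rng.standard_normal((n_sims, 1))           # common (systematic) factor
    eps = rng.standard_normal((n_sims, n_names))   # idiosyncratic factors
    asset = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
    loss_fraction = (asset < threshold).mean(axis=1)
    return np.quantile(loss_fraction, q)

# Low implied correlation (benign sample) vs. high correlation (stressed market)
for rho in (0.05, 0.30, 0.60):
    print(f"rho = {rho:.2f}  ->  99.9% pool loss fraction = {pool_loss_quantile(rho):.3f}")
```

The tail loss grows sharply as the correlation input rises, which is why a jump in implied correlation had such dramatic consequences for senior CDO tranches.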
Li's pricing model was widely adopted despite these limitations. When signs of weakness started to materialize in 2008, the correlation between CDSs and CDO prices increased dramatically, leading to the collapse of the CDO market.
The blame falls not on the formula but on those who applied it blindly, i.e., banks, which should have warned users, and users, who should have tried to understand the formula better before adopting it. Banks were also to blame because they did not update the copula model with the new correlation estimates implied by the higher CDS prices. Instead, they continued pricing CDOs based on old assumptions.
In this context, the role of risk management is to ensure transparency by creating awareness and informing model users about issues relating to new models. MRM should assess the model to ensure it is free of coding errors and produces prices as intended. Most importantly, MRM should challenge the assumptions and ensure users understand the related limitations. Effective communication is key for MRM to handle this challenge successfully. Most model users at banks do not have a quantitative background; they end up depending on assumptions they don't quite understand and assume that it is the modelers' responsibility to ensure the model works accurately. A good MRM function should help minimize the misuse of models by helping users understand the limitations accompanying a model.
In September 2008, Lehman Brothers collapsed, sparking the 2008 global financial crisis. In one
incident not known to many, Barclays Capital almost bought 179 trading contracts from Lehman
Brothers by accident.
Lehman Brothers filed for bankruptcy on September 15, 2008. On September 18, 2008, Barclays
Capital offered to acquire a portion of the assets of the US bank, including some of Lehman's trading
positions. Barclays hired the law firm Cleary Gottlieb Steen & Hamilton to represent it. The law firm was to submit the purchase offer on the website of the US Bankruptcy Court for the Southern District of New York by midnight on September 18. A few hours before the deadline (at 7:50 pm), Cleary Gottlieb received an Excel file from Barclays containing information on the assets Barclays wished to acquire. The spreadsheet file had 1,000 rows and 2,400 cells, including those listing the 179 trading contracts that Barclays did not want to buy. These rows were, however, hidden rather than deleted.
A junior law associate was asked to convert the Excel file into a PDF for uploading to the court's website. Unaware of the hidden rows, and working to a tight schedule, he converted the entire file directly, hidden rows included.
The mistake was identified in the PDF files only after the contract had already been approved. Cleary Gottlieb had to file a legal motion to exclude those contracts from the deal.
The Excel error here is a good example of an implementation error. The Excel spreadsheet is just a tool that does not qualify as a model; there is no uncertainty about the information contained in the spreadsheet since the list of assets and their values are known. Yet a simple mistake – forgetting to delete the hidden rows – almost cost Barclays millions of dollars. Even though the loss did not materialize in this case, it could materialize in others. Thus, even tools and models that seem simple should be challenged and reviewed properly.
In 1999, NASA lost its $125 million Mars Climate Orbiter because the Lockheed Martin engineering team used English units of measurement while the agency's team used metric units for a key spacecraft operation. It is hard to imagine that an error as simple as inconsistent units could result in the destruction of a multimillion-dollar spacecraft. This is an error related to model assumptions and, more specifically, the choice of units. In the financial world, comparable errors would include using the wrong currency, the wrong discount factor, or other simple assumptions that are often taken for granted.
The development of a robust MRM function that thoroughly reviews all models and double-checks all work is often perceived by executives as costly. Some mistakes are perceived as benign, and reviewing models just to catch them is seen as a waste of resources. In most cases, these benign mistakes would indeed have resulted in benign losses, if any. It must be conceded, however, that a small subset of these mistakes could result in catastrophic losses, such as the loss of the Mars orbiter. Unfortunately, it is not possible to distinguish in advance between mistakes that might cause catastrophic losses and those that will remain benign.
Practice Question
In September 2008, Lehman Brothers collapsed, sparking the 2008 global financial crisis.
In one incident not known to many, Barclays Capital almost bought 179 trading contracts
from Lehman Brothers by accident. Which of the following lessons can be learned from
this incident?
A. Model risk management should challenge the assumptions and ensure users understand the related limitations.
B. Even tools and models that seem simple should be challenged and reviewed properly.
C. Even small errors, such as the use of wrong units, can lead to massive losses.
D. A good MRM should help minimize the misuse of models by helping users understand the limitations accompanying a model.
The correct answer is B.
A simple mistake – forgetting to delete the hidden rows – almost cost Barclays millions of dollars. Even though the loss did not materialize in this case, it could materialize in other cases. Thus, even tools and models that seem simple should be challenged and reviewed properly.
A is incorrect. This is a lesson associated with the collapse of the CDO market in 2008, where model assumptions should have been challenged and their limitations communicated to users.
C is incorrect. This lesson relates to the NASA Mars orbiter incident, where the use of inconsistent units led to the loss of a multimillion-dollar spacecraft.
D is incorrect. This lesson is drawn from the collapse of the CDO market in 2008, in which users relied on a model whose limitations they did not understand.
Reading 115: Stress Testing Banks
After completing this reading, you should be able to:
Describe the historical evolution of the stress testing process and compare methodologies of historical EBA, CCAR, and SCAP stress tests.
Explain challenges in designing stress test scenarios, including the problem of coherence in modeling risk factors.
Explain challenges in modeling a bank's revenues, losses, and balance sheet over a stress test horizon period.
Stress tests are simulation exercises implemented to ascertain the resilience of a single bank, or the entire banking sector, to adverse, unanticipated events. Solvency stress tests help to assess banks' capital planning and capital adequacy, thereby reducing the likelihood of failure, while liquidity stress tests assess a bank's ability to balance cash inflows and outflows in a stress scenario.
Historical Evolution of Stress Testing Process
Stress tests took center stage following the 2007-2008 financial crisis as a means of assessing the ability of financial institutions to withstand adverse events. The idea was to identify and report a bank's capital sufficiency so as to avert failures. Stress tests have since become entrenched tools used to gauge the banking sector's resilience. The emphasis on stress tests to assess and replenish bank solvency reflects the fact that capital defines a bank's capacity to absorb losses.
Until the Great Financial Crisis, banks were limited to following the Internal Ratings-Based Approach for capital requirements for credit risk under Basel II. They were required to stress test their internal rating models under different scenarios, including market risk and liquidity conditions, among others.
The Basel Committee on Banking Supervision (BCBS) released a publication in May 2009 describing why stress testing failed during the Great Financial Crisis. It addressed the following issues:
i. Scenarios considered: Scenarios were of minor severity and missed correlations between risk factors, so results could not comprehensively present the aggregate risks across a bank. Scenarios were undertaken at a business level and were unrelated to capital adequacy and liquidity.
ii. Stress testing of specific risks and products: New complex products or strategies, such as complex hedging strategies, were not covered under credit, liquidity, and contingent risk. Furthermore, funding and reputational constraints were not tested.
iii. Stress testing approaches: Several risk management tools employed historical statistical relationships in the assessment of risks. Similarly, the banking sector lacked a firm-wide approach and focused heavily on models calibrated on historical data. Reliance on historical information meant that the approach did not capture future risk exposures.
iv. Use of stress testing and integration in risk governance: Stress tests were not included in a global risk framework, as other businesses doubted the credibility of the analysis. Senior management was not involved enough, implying the non-existence of firm-wide ownership of stress testing.
v. Furthermore, the financial crisis demonstrated the extreme damage to the economy that can occur when banks hold insufficient capital against severe but plausible shocks.
Following the May 2009 BCBS publication on the principles of sound stress testing, the Committee of European Banking Supervisors (CEBS) – which became the European Banking Authority (EBA) in 2010 – took responsibility for implementing these principles in Europe, setting a timetable for their adoption at the local institutional level. The EBA conducts regular stress tests to maintain financial stability in the EU's banking sector.
In response to the global financial crisis, the United States Federal Reserve's Supervisory Capital Assessment Program (SCAP) assessed whether, in spite of the crisis, the largest domestic banks had adequate capital resources to absorb losses and continue operating. This marked the first macro-prudential stress test after the crisis. Macro-prudential stress testing involves analyzing the soundness of the banking system as a whole. In micro-prudential exercises, on the other hand, the authorities use stress test results as part of the supervisory review to assess the strategies, processes, and capital adequacy of individual institutions.
This stress test was key in restoring confidence in the financial sector by publicly disclosing bank
balance sheets in terms of the true value of structured products, thereby bringing transparency.
US$75 billion of additional capital was raised. SCAP successfully reassured the markets.
Pre-SCAP vs. post-SCAP stress testing:
Pre-SCAP: mostly single shock. Post-SCAP: broad macro scenarios and market stress.
Pre-SCAP: carried out at the product or business unit level. Post-SCAP: comprehensive, firm-wide.
Pre-SCAP: static. Post-SCAP: dynamic and path-dependent.
Pre-SCAP: isolated and not tied to capital adequacy. Post-SCAP: explicit post-stress common equity threshold.
The Federal Reserve System then initiated the Comprehensive Capital Analysis and Review (CCAR) for the largest banks, starting the next generation of comprehensive regulatory stress tests. CCAR's primary objective is to help ensure that a repeat of the 2008 financial crisis is avoided. It is designed to do this by giving regulators better and more advanced visibility into the stress testing results of individual banks.
CCAR is a forward-looking exercise that gives a detailed view of capital and risk roughly two years into the future. Supervisors define macroeconomic scenarios that are used to determine the profit and loss for each quarter. It is key to note that banks that fail the exercise are not authorized to carry out share buybacks or pay dividends. CCAR is therefore strictly adhered to by the banking industry.
Challenges in Designing Stress Test Scenarios
Coherence is one of the challenges of designing useful stress tests. Coherence means that scenarios and sensitivities must be not only extreme but also plausible and internally consistent. Developing a coherent stress test is difficult for the reasons discussed below.
Risk factors are dynamic and change over time. This makes developing a coherent stress test difficult, since all risk factors move simultaneously and a scenario must specify a consistent joint outcome. For example, not all currencies can depreciate at once; some have to appreciate.
It is insufficient to stress one risk factor in isolation because other risk factors also change; for example, high unemployment typically coincides with declining equity prices. This makes it challenging to design a coherent stress test.
The problem of coherence is more acute when stress scenarios for the risks of marked-to-market portfolios are taken into account. Value at risk (VaR) is used to manage the risk of traded securities and derivatives portfolios. The many positions in the trading book must be mapped to tens of thousands of risk factors tracked daily. The resulting data is used to estimate parameters such as volatilities and correlations.
The supervisor is thus faced with the task of specifying a joint outcome that is coherent for all risk factors. Any market shock scenario that causes a flight from risky assets must also specify consistent behavior for safe-haven assets, which tend to appreciate. This challenge is compounded by the complicated task of finding a jointly coherent real-economy and financial-market scenario.
Historical Scenarios
The 2009 SCAP tested simple scenarios with GDP growth, unemployment, and the house price index (HPI) as the state space dimensions. The market risk scenario was based on historical experience.
Losses
The following are challenges in modeling a bank's losses:
i. Translating the macro risk factors employed in stress testing to the micro level: Most emphasis has been placed on monitoring the effects of macroeconomic shocks, yet micro-financial forces could pose even greater threats to financial stability. The span of such threats is larger and more difficult to identify and model than macroeconomic risks.
ii. Geographical differences: Supervisors such as the EBA face the challenge of geographic diversity. Looking at unemployment rates nationally and within states, it is evident that geographical factors matter; broad scenarios can mask large regional differences in loss experience.
iii. Business cycle: Different industries are affected by the business cycle at different times. For example, if the airline industry is distressed and a bank is stuck with the collateral on defaulted aircraft leases, it is nearly impossible to sell the aircraft except at exceedingly depressed prices. While all that is happening, the healthcare sector may be doing relatively well.
These factors make it difficult to map broad macro factors to bank-specific stress results.
Revenues
i. Modeling revenues: Techniques for modeling a bank's revenues over a stress test horizon are less developed than those for modeling losses. The techniques for projecting revenues under adverse conditions have been under-researched.
ii. Effect of fluctuating interest rates: A bank's revenues are classified into interest income and non-interest income. It is difficult to model interest income because of the variability associated with interest rates; unanticipated swings in rates may impair the bank's profitability. It is even more difficult to model non-interest income such as trading revenue and fees.
iii. The complexity of the effects of scenario analysis on non-interest income: The impacts on non-interest revenues are difficult to assess since less has been reported about their determinants.
The Balance Sheet
The following are challenges in modeling a bank's balance sheet over a stress horizon:
i. Depletion of capital: Actions such as acquisitions and changes to dividend payments, together with decisions on whether shares will be repurchased or issued, can deplete capital and hence make it difficult to model the balance sheet over the stress horizon.
ii. Raising capital: Banks might be obligated to raise capital as per a particular plan without considering its efficiency. Therefore, income, asset, and liability statements have to be adjusted accordingly.
iii. Nature and size of new assets: Stress testing approaches yield partial equilibrium results; ignoring the nature and size of new assets taken on during the horizon, and the associated feedback effects, underestimates the severity of the financial crisis. Therefore, the modeler must keenly consider all the cash inflows and expenditures to provide an accurate and reliable reflection of the bank.
Practice Question
The Supervisory Capital Assessment Program (SCAP) took place in the spring of 2009, during the financial crisis of 2008-2009. It was intended to measure the financial strength of the nation's 19 largest financial institutions going forward. SCAP, being the first macro-prudential stress test, has some features that distinguish it. Which of the differences listed below does NOT correctly describe a difference between pre- and post-SCAP stress testing?
A. Post-SCAP applies a broad macro scenario and market stress while pre-SCAP is mostly a single shock.
C. Post-SCAP applies to losses, revenues, and costs whereas pre-SCAP applies to losses
only.
The correct answer is B.
The following are the correct differences between pre- and post-SCAP stress testing:
1. Pre-SCAP is mostly a single shock, while post-SCAP involves broad macro scenarios and market stress.
2. Pre-SCAP is carried out at the product or business unit level, while post-SCAP is comprehensive and firm-wide.
3. Pre-SCAP is static, while post-SCAP is dynamic and path-dependent.
4. While pre-SCAP is not usually tied to capital adequacy, post-SCAP applies an explicit post-stress common equity threshold.
5. Pre-SCAP applies to losses only, whereas post-SCAP covers losses, revenues, and costs.
Reading 116: Risk Capital Attribution and Risk-Adjusted Performance
Measurement
After completing this reading, you should be able to:
Define, compare, and contrast risk capital, economic capital, and regulatory capital, and
explain methods and motivations for using economic capital approaches to allocate risk
capital.
Describe the RAROC (risk-adjusted return on capital) methodology and its use in capital
budgeting.
Compute and interpret the RAROC for a project, loan, or loan portfolio, and use RAROC to compare business lines.
Explain challenges that arise when using RAROC for performance measurement, including
choosing a time horizon, measuring default probability, and choosing a confidence level.
Calculate the hurdle rate and apply this rate in making business decisions using RAROC.
Explain best practices in implementing an approach that uses RAROC to allocate economic
capital.
Risk capital acts as a cushion against the various risks a business takes. It absorbs the full brunt of the effects of taking risks, including misjudgments, adverse results, and outright losses. It serves two main purposes:
i. It helps businesses retain their financial integrity and status as a going concern even in the face of large unexpected losses.
ii. It instills confidence among stakeholders such as lenders, suppliers, customers, and
supervisory/regulatory authorities.
Risk capital is particularly important in the banking sector. Lenders who rely on customer deposits
to fund loans and other investments must project financial integrity and financial stability to instill
confidence among depositors. Even the slightest rumor of financial turmoil can trigger a run on the bank.
In banking, risk capital is usually called economic capital, but as we shall see later in the chapter, risk capital is, strictly speaking, only one component of economic capital.
Economic capital is an institution's own estimate of the amount of capital it needs to remain solvent and maintain its day-to-day operations. Regulatory capital is the minimum amount of capital an institution is required to hold by its regulators.
Every organization has to contend with economic capital. Regulatory capital, however, only comes into play in regulated industries, where supervisors set standard capital requirements that all participants in the industry must comply with. In this regard, banks and insurance companies are subject to regulatory capital in all geographical jurisdictions around the world. It is important to note that regulatory capital specifies the minimum required level of capital adequacy and, therefore, may not capture the true level of risk in a firm.
Comparisons With VaR
A firm cannot offer its stakeholders a 100% guarantee that it holds enough risk capital to cushion itself against the entire universe of eventualities. Instead, risk capital is calculated at a level of confidence less than 100%. If the one-year risk capital calculated at 99.9% confidence is Y, it means that over the next year, there is only a 0.1% chance that actual losses will exceed Y. Firms settle on a given level of confidence depending on the credit rating targeted from rating agencies such as Moody's and Standard & Poor's. The higher the desired credit rating, the higher the confidence level used and, therefore, the higher the risk capital.
Traditionally, risk capital has been used as an indicator of the amount of capital a firm requires to
remain solvent, given its portfolio of risky investments. In recent times, however, more uses have emerged:
a. Evaluating performance and incentive compensation at the firm level, business unit level, as well as the individual level: This is achieved by using performance indicators, such as RAROC (risk-adjusted return on capital), that take risk capital into account.
b. Active portfolio management for entry/exit decisions: Any decision to enter or exit a business should take risk capital into account. Whenever a firm is faced with an entry/exit decision, it should consider how either decision would affect its risk capital and risk-adjusted returns.
c. Pricing transactions: Risk capital numbers can be used to come up with risk-adjusted prices of transactions to ensure that a firm is adequately compensated for taking risks.
Financial institutions use capital not just as a source of funding for their investments but primarily as a cushion against risk. Unlike most nonfinancial firms, financial institutions can raise large amounts of leverage at low cost. They can, for example, do so by simply accepting customer deposits or issuing a bond. It is not unusual to find a bank with a debt-to-equity ratio of 20 or more, whereas nonfinancial institutions rarely venture above 2. On the same note, financial institutions engage in a wide range of transactions, such as derivatives trading, writing guarantees, and FX trading, that do not require significant financing but that all draw on an institution's risk capital. Therefore, a risk capital cost must be attributed to each activity.
Financial institutions have to demonstrate creditworthiness. For financial institutions, their customers double up as their creditors. That means customers are always monitoring the financial health of their bank. Customers make their deposits with the expectation that the safety of their deposits is not hinged on the bank's economic performance in any way. Any sign of trouble can trigger a run, leaving a bank short of funds. For this reason, the bank is incentivized to maintain adequate risk capital.
Banks are traditionally opaque institutions with complex books. They engage in complex transactions that often use proprietary technology that ordinary stakeholders cannot scrutinize. In addition, the balance sheet position of a bank can change quickly, with liquidity being wiped out or restored in a matter of hours or days. That means accounting analysis becomes obsolete quite fast. For these reasons, banks are incentivized to maintain adequate risk capital and implement robust risk management frameworks to reduce agency costs. Such initiatives convince stakeholders that the bank's financial integrity is beyond reproach and that there is no cause for concern.
A bank's profitability is highly sensitive to its cost of capital. On the one hand, a bank cannot hold too much capital, since that stifles profitability, considering that a bank can only invest risk capital in safe, low-yield assets. On the other hand, a bank cannot afford to hold too little risk capital, since this raises the specter of insolvency. As such, it is important that a bank strikes a balance between holding sufficient economic capital and the uptake of risky but positive-NPV projects.
Risk-adjusted return on capital (RAROC) is a metric used to determine the return on investment,
taking full cognizance of risk elements. It expresses the expected after-tax profit as a percentage of
economic capital.
RAROC offers a uniform and comparable measure of risk-adjusted performance across all business
units. A business unit is deemed to add value to shareholders only if its RAROC is higher than the cost
of equity. The cost of equity can be calculated according to the capital asset pricing model (CAPM).
1. It helps banks single out deals that may appear highly profitable on paper but actually carry a more than commensurate capital requirement. In other words, it exposes seemingly profitable but excessively risky deals. In the same vein, RAROC helps draw attention to deals that offer a high risk-adjusted return even though they have a deceptively modest headline profit.
3. It uses economic profits rather than accounting profits, which can be distorted by historical, non-economic items.
4. It is exhaustive and flexible: exhaustive in the sense that it considers all financial variables such as sales, expenses, and capital; flexible because it can be computed at the firm level or refined to the level of individual business lines, branches, or deals.
RAROC = (Expected revenues − Costs − Expected losses − Taxes + Return on risk capital ± Transfers) / Economic capital
Where:
Expected revenues are the revenues that the activity is expected to generate (assuming no losses).
Costs are the direct expenses associated with running the activity (e.g., salaries, bonuses, and infrastructure).
Expected losses (EL) consist mainly of expected default losses (i.e., the loan loss reserve). Expected losses also include the expected loss from other risks, such as operational and market risk.
Taxes are the expected amount of taxes imputed to the activity using the effective tax rate of the business.
Return on risk capital is the return earned from investing the allocated risk capital in risk-free securities (government bonds), based on the amount of allocated risk capital.
Transfers correspond to transfer-pricing mechanisms, primarily transactions between the business unit and the central treasury group, e.g., funding costs and overhead allocations. It also includes the cost of hedging interest rate and currency risk.
Economic capital is the sum of risk capital and strategic risk capital. Strategic risk capital refers to the risk associated with large, potentially highly profitable investments whose success is highly uncertain; it includes goodwill and burned-out capital.
Goodwill is the excess of the purchase price of an acquisition over the replacement value (fair value) of the net assets recorded on the balance sheet. It is written down if the acquired assets suffer impairment.
Burned-out capital represents the risk that funds spent during the start-up phase of a venture may be lost if the projected risk-adjusted returns are deemed too low, forcing the firm to abandon the venture. Burned-out capital is amortized over time as the probability of failure decreases.
Some banks also allocate risk capital for unused risk limits (undrawn amounts on lines of credit)
because there’s always a chance that the extra risk capacity will be tapped at some point. If a bank
taps the extra risk capacity, it would have to adjust its risk capital upwards.
Important: We must bear in mind that economic capital is a cushion against unexpected losses at a given level of confidence. The difference between the worst-case loss and the expected loss gives the unexpected loss. For example, let's assume that in a given transaction, the expected loss and the worst-case loss at 99% confidence are 50 basis points and 150 basis points, respectively. In this case, the unexpected loss is 100 basis points over the relevant horizon, usually one year. It is this unexpected loss that economic capital is meant to absorb.
A bank in the U.S. is considering building a portfolio of corporate loans with the following
characteristics:
The portfolio is funded by $10 billion of retail deposits that attract an interest charge of 4%.
The Chief Risk Officer projects an expected loss of 1% and a worst-case loss of 7% of the portfolio value.
Transfers: $0.
Solution
RAROC = (Expected revenues − Costs − Expected losses − Taxes + Return on risk capital ± Transfers) / Economic capital
Costs = operating cost + interest charge = (1% + 4%) × $10 billion = $500 million.
Economic capital = unexpected loss = worst-case loss − expected loss = (7% − 1%) × $10 billion = $600 million.
Therefore, the RAROC for this portfolio is 13.77%.
Interpretation: The annual after-tax expected rate of return on the equity needed to support this portfolio is 13.77%.
Note: Since the analysis is being performed ex-ante, we should use expected revenues and losses in
the numerator. We would have to use realized revenues and losses to analyze portfolio performance
on an ex-post basis.
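The calculation can be expressed compactly in code. The sketch below is a minimal implementation of the RAROC ratio as defined above; the inputs passed to it are hypothetical and do not reproduce the 13.77% figure in the example, since not all of that example's inputs are shown.

```python
# Minimal sketch of the RAROC calculation with hypothetical inputs (USD millions).

def raroc(expected_revenues, costs, expected_losses, taxes,
          return_on_risk_capital, transfers, economic_capital):
    numerator = (expected_revenues - costs - expected_losses - taxes
                 + return_on_risk_capital + transfers)
    return numerator / economic_capital

# Hypothetical loan portfolio figures (illustrative only)
value = raroc(expected_revenues=900, costs=500, expected_losses=100, taxes=90,
              return_on_risk_capital=24, transfers=0, economic_capital=600)
print(f"RAROC = {value:.2%}")   # compare this figure against the firm's hurdle rate
```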
RAROC Horizon
When calculating RAROC, practitioners usually adopt a one-year time horizon since this aligns with the business planning cycle and also represents a reasonable approximation of the amount of time that would be needed to recapitalize if a firm were to suffer a major unexpected loss. However, looking at a multi-period RAROC spanning as many as five years is also possible. This would help a firm assess risk-adjusted performance for longer-term, capital-intensive activities.
Notably, calculating economic capital over a longer horizon may not result in a significant increase in capital, because the confidence level required for a firm's solvency decreases as the time horizon increases.
In the same vein, the quality and accuracy of risk and return data decline as we peer further into the
future.
Risk capital can also be viewed as the one-year VaR at a confidence level that reflects a firm's target credit rating. For credit risk measured using models such as CreditMetrics and KMV, which assume a one-year horizon, the one-year VaR is equivalent to risk capital. Similarly, the one-year VaR with respect to operational risk is equivalent to the one-year risk capital. But when it comes to market risk, an adjustment is needed.
Because markets can move significantly over short periods, market risk is measured using short-term horizons – one day for risk monitoring and 10 days for regulatory capital. As such, we have to scale the one-day or ten-day VaR up to an annual figure so as to determine the one-year capital allocation. The most common approach used to scale VaR up to an annual value is the square root of time rule.
For purposes of risk capital determination, the use of the square root of time rule needs to be fine-tuned for two reasons:
1. Risk capital is meant to help a firm overcome a period of stress occasioned by a major loss.
Even in a worst-case scenario, a firm might only be able to reduce its risk to a core risk
level that guarantees it retains its status as a financially viable business for the rest of the
year.
2. There’s a need to factor in the time needed to scale down the current risk level to the core
risk level. A firm will need some time to liquidate its investment positions accordingly.
A bank's current one-day VaR is 100, and its core risk level of 80 is below the current risk position. The following information is provided:
Days needed to reduce current risk level to core risk level = 10 (i.e., a reduction of 2 VaR
per day).
Assuming that there are 252 business days in a year, compute the required risk capital as a percentage of the annualized one-year VaR.
Solution
Risk capital = √(sum of squared daily VaRs during the wind-down period + core risk level² × (business days per year − days needed to reduce current to core))
= √(100² + 98² + 96² + 94² + 92² + 90² + 88² + 86² + 84² + 82² + 80² × (252 − 10))
= √(83,140 + 1,548,800) = 1,277.47
Now,
One-year VaR = one-day VaR × √(number of business days in a year) = 100 × √252 = 1,587.45
Therefore, the required risk capital is 80.5% of the annualized VaR (= 1,277.47/1,587.45).
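The same calculation can be verified in a few lines of Python using the figures from the example (a one-day VaR of 100 declining by 2 per day to a core level of 80 over 10 days, with 252 business days in the year).

```python
# Verification of the risk-capital scaling example above.
import math

daily_var = [100 - 2 * d for d in range(10)]   # days 1..10: 100, 98, ..., 82
core_var, business_days, wind_down_days = 80, 252, 10

risk_capital = math.sqrt(sum(v ** 2 for v in daily_var)
                         + core_var ** 2 * (business_days - wind_down_days))
annual_var = 100 * math.sqrt(business_days)

print(f"risk capital = {risk_capital:,.2f}")                    # ~1,277.47
print(f"one-year VaR = {annual_var:,.2f}")                      # ~1,587.45
print(f"ratio        = {risk_capital / annual_var:.1%}")        # ~80.5%
```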
There are many more banking activities that must be allocated capital in a way that is sensitive to
time horizons. In particular, a bank should allocate capital to cover the risk of options embedded in its
products. A good example would be capital set aside to cover the option to prepay a mortgage.
Confidence Interval
As mentioned earlier, a firm's economic capital calculation should be consistent with its target credit rating. In practice, banks often target an AA rating for their debt offerings, translating into a probability of default of 3 to 5 basis points. This, in turn, corresponds to a confidence level in the range of 99.95% to 99.97%.
The choice of confidence level has a material impact, not just on risk-adjusted performance measures but also on the amount of capital allocated to an activity. The higher the confidence level,
the higher the amount of risk capital allocation, and vice versa.
Default Probability
There are two stylized approaches used to model the probability of default: a point-in-time approach and a through-the-cycle approach.
1. A point-in-time (PIT) probability of default (PD) assesses the likelihood of default at a particular point in time. As it assesses risk at a point in time, a firm will move up or down the rating scale as current economic conditions change.
2. A through-the-cycle (TTC) probability of default reflects the average default rate for a particular firm over an economic cycle. It ignores short-run changes to a firm's PD and thus results in a stable rating grade through the economic cycle.
Compared to the point-in-time approach, the through-the-cycle approach reduces the volatility of
economic capital. Firms usually compare the impact of using the point-in-time probability of default versus the through-the-cycle probability of default in the RAROC calculation for both a normal and a stressed part of the economic cycle.
When a firm is considering venturing into a business or closing one down altogether, it computes the after-tax RAROC and compares it to the firm's hurdle rate. A hurdle rate is the minimum required rate of return or target rate that investors expect to receive on an investment.
If the RAROC is greater than the hurdle rate, the activity may be pursued because it adds value to the firm.
If the RAROC is less than the hurdle rate, the activity is rejected or terminated because it destroys value.
The hurdle rate is computed as the after-tax weighted average of the costs of common equity and preferred equity:
h_AT = (CE × r_CE + PE × r_PE) / (CE + PE)
Where:
CE and PE denote the market value of common equity and preferred equity, respectively.
rCE and rPE are the cost of common equity and preferred equity, respectively.
The cost of preferred equity is taken to be the yield on a firm's preferred shares. The cost of common equity is estimated using the capital asset pricing model (CAPM):
R_CE = R_F + β_CE (R_M − R_F)
where R_F is the risk-free rate, β_CE is the beta of the firm's common equity, and R_M is the expected return on the market portfolio.
Beta of common equity 1.2
Risk-free rate 5.0%
Expected market return 12.00%
Expected return on common equity = 5.0% + 1.2 × (12.0% − 5.0%) = 13.4%
The use of a single firm-wide hurdle rate comes with a challenge: the firm may end up accepting high-risk projects that lower its value while rejecting low-risk projects that would increase its value.
Adjusted RAROC
The adjusted RAROC (ARAROC) modifies the traditional RAROC to take the systematic riskiness of returns into account and to allow a hurdle rate that remains the same across all business lines:
ARAROC = RAROC − β_E (R_M − R_F)
Where:
β_E = the beta of the firm's equity; and
R_M − R_F = the excess return over the risk-free rate, which accounts for the non-diversifiable systematic risk of the activity.
If the adjusted RAROC is greater than the risk-free rate, accept the project.
If the adjusted RAROC is less than the risk-free rate, reject the project.
RAROC: 10.0%
Risk-free rate: 5.0%
Expected return on market portfolio: 12.0%
Firm's equity beta: 0.8
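Applying the ARAROC definition reconstructed above to these figures gives the following minimal sketch; the formula used in the code is that reconstruction, and the decision rule (compare ARAROC with the risk-free rate) follows the text.

```python
# Adjusted RAROC decision rule, assuming ARAROC = RAROC - beta * (R_M - R_F).

raroc, r_f, r_m, beta = 0.10, 0.05, 0.12, 0.8

araroc = raroc - beta * (r_m - r_f)          # 0.10 - 0.8 * 0.07 = 0.044
decision = "accept" if araroc > r_f else "reject"
print(f"ARAROC = {araroc:.1%} vs risk-free rate {r_f:.1%} -> {decision}")
```

Under this definition, the ARAROC of 4.4% falls below the 5% risk-free rate, so the project would be rejected even though its unadjusted RAROC of 10% looks attractive.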
In order to determine the risk capital for a particular business unit within a larger firm, each unit is
typically viewed on a stand-alone basis. However, finance theory tells us that there’s bound to be
some diversification benefits when all the business units are viewed together. That’s because the
correlation of returns is likely to be less than +1. As such, the risk capital for the firm should be
significantly less than the sum of the stand-alone risk capital of individual business units.
Assume that a firm faces four distinct risks, each with its own stand-alone risk capital amount (all figures in millions).
If we assume perfect correlation between the risks (no diversification benefit), the firm's risk capital is simply the sum of the four stand-alone risk capital amounts, which comes to $1,800. If we instead assume the risks are uncorrelated, risk capital is given by the square root of the sum of squares of the four amounts, which comes to roughly $927.
As this example shows, the aggregate risk capital for the firm can be as high as $1,800 or as low as $927 – quite a wide range. Given this result, we would expect any risk capital calculation process that takes diversification effects into account to yield an overall figure in the range of $927 to $1,800.
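The two aggregation extremes can be illustrated with a short sketch. The four stand-alone amounts below are hypothetical, chosen only so that the totals match the $1,800 and roughly $927 quoted above; the original breakdown by risk type is not reproduced here.

```python
# Two aggregation extremes: simple sum vs. square root of sum of squares
# (hypothetical stand-alone amounts, in millions).
import math

standalone = {"market": 600, "credit": 500, "operational": 400, "other": 300}

no_diversification = sum(standalone.values())                             # 1,800
zero_correlation = math.sqrt(sum(v ** 2 for v in standalone.values()))    # ~927.4

print(f"sum (perfect correlation)                = {no_diversification:,.0f}")
print(f"sqrt of sum of squares (zero correlation) = {zero_correlation:,.1f}")
```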
Although, undoubtedly, there’s a diversification effect, measuring it is quite an uphill task. In fact, we
do not have a model that can estimate the correlation between market risk, credit risk, and
operational risk across all the business units with sufficient accuracy. One way firms try to overcome this problem is by lowering the confidence level used when measuring each individual risk so as to approximate the diversification effect.
Measuring the exact amount of the diversification benefit is not the only problem. Firms have to find
a way to allocate the aggregate diversification benefit to specific business units. Allocating the
diversification benefit can help in decision-making, e.g., when deciding which business line to expand or exit.
It is logical to allocate more risk capital to a business unit whose earnings are highly correlated with those of the overall firm. But firms struggle to come up with a precise method of doing this, and most settle for a simpler pro-rata allocation.
Assume that we have a business unit that engages exclusively in two activities – X and Y.
Further, assume that, if we ignore any diversification benefits, activities X and Y have stand-alone capital of $80 and $70, respectively, while the capital for the business unit as a whole is $110.
The diversification benefit is therefore $40 (= $80 + $70 − $110). We can attribute this benefit to each of the two activities on a pro-rata basis: X is allocated $21.33 (= 40 × 80/150) and Y is allocated $18.67 (= 40 × 70/150).
Therefore, X and Y will have fully diversified capital of $58.67 (= 80 − 21.33) and $51.33 (= 70 − 18.67), respectively.
Marginal capital is the additional capital that an incremental deal, activity, or business requires. It takes the full benefit of diversification into account.
Marginal risk capital for X (assuming that Y already exists) is $40 (= $110 total − $70 for Y).
Marginal risk capital for Y (assuming that X already exists) is $30 (= $110 total − $80 for X).
Generally, the marginal risk capital of a business unit is calculated as the risk capital required for the full portfolio of businesses minus the risk capital required for the portfolio excluding that unit.
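The pro-rata and marginal allocations above can be reproduced with a few lines of Python using the same figures (stand-alone capital of $80 for X and $70 for Y, and $110 for the combined unit).

```python
# Pro-rata and marginal capital allocation for the X/Y example.

standalone = {"X": 80.0, "Y": 70.0}
combined = 110.0

total_standalone = sum(standalone.values())              # 150
diversification_benefit = total_standalone - combined    # 40

# Pro-rata allocation of the diversification benefit
fully_diversified = {k: v - diversification_benefit * v / total_standalone
                     for k, v in standalone.items()}     # X: 58.67, Y: 51.33

# Marginal capital: combined capital minus the capital of the portfolio without the unit
marginal = {k: combined - sum(v for j, v in standalone.items() if j != k)
            for k in standalone}                         # X: 40, Y: 30

print("fully diversified:", fully_diversified)
print("marginal:", marginal)
```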
Important Points:
Fully diversified measures should be used for assessing the solvency of the firm.
Decisions about active portfolio management or the appropriate business mix should be based on marginal risk capital, taking into account the benefit of full diversification.
Performance measurement can use both measures: stand-alone risk capital for incentive compensation and fully diversified risk capital to assess the extra performance generated by diversification.
The issue of diversification and its effect on risk capital needs to be approached with a certain level of caution because correlations can change dramatically over time. For example, during a financial crisis or severe period of stress, correlations tend to shift toward +1 or −1. This reduces or even eliminates the diversification benefit.
There are a number of recommendations for firms that intend to implement a RAROC approach to capital allocation:
Senior management should take charge of the entire process, emphasizing the importance of making sure that the firm is adequately compensated for every risk taken.
There should be firm-wide education to ensure that everyone understands what RAROC is all about and how it is used.
Key parameters should be reviewed regularly by a review group to ensure that they reflect current trends.
Data quality should be safeguarded to maintain the integrity of the process and results.
A firm should combine RAROC with qualitative considerations such as the strategic importance of a business and its growth potential.
There should be active capital management, where capital demands and requests are well documented, reviewed, and prioritized.
Practice Question
A. 0.2627.
B. 0.8356.
C. 0.9726.
D. 1.0548.
The correct answer is B.
Recall that:
RAROC = (Expected revenues − Costs − Expected losses − Taxes + Return on risk capital ± Transfers) / Economic capital
Therefore:
RAROC = (75 − 8 − 22 − 5 + 21) / 73 = 61 / 73 = 0.8356
Reading 117: Range of Practices and Issues in Economic Capital
Frameworks
After completing this reading, you should be able to:
Within the economic capital implementation framework, describe the challenges that
appear in:
Risk aggregation.
Validation of models.
Describe the BIS recommendations that supervisors should consider to effectively use
internal risk measures, such as economic capital, that are not designed for regulatory
purposes.
Explain the benefits and impacts of using an economic capital framework within the
following areas:
Risk-based pricing.
Management incentives.
Describe best practices and assess key concerns for the governance of an economic
capital framework.
Economic capital refers to the methods and practices banks employ to consistently assess risk and
attribute capital to cover the economic effects of risk-taking activities. Initially, economic capital
was a tool used exclusively for capital allocation and performance measurement. For this purpose,
accuracy is not paramount, and less importance is attached to the measurement of the overall risk
level. In recent years, however, new applications that attach much more importance to accuracy
have emerged. For example, the quantification of the amount of internal capital needed by a bank is
an especially important endeavor that has to be accurate because it has implications for a firm’s
performance.
Firms can analyze economic capital at various levels – from the firm level down to the risk-type or business-line level. This reading covers issues related to the use and governance of economic capital, the choice of risk measures, aggregation of risk, and validation of economic capital models. In addition, it covers three important building blocks of economic capital (dependency modeling in credit risk, counterparty credit risk, and interest rate risk in the banking book).
We will now look at the challenges experienced by firms while implementing economic capital in relation to each of the above building blocks.
Banks have a variety of risk measures to choose from. The risk measure chosen depends on several factors, including:
Tradeoffs between complexity and usability.
The intended use.
Availability of data.
The most common risk measures considered include value at risk (VaR), expected shortfall (ES), standard deviation (σ), and spectral risk measures. However, there is no clear preference for any
one of these over another because each type of risk measure comes with its own set of strengths
and weaknesses.
The following are the main challenges encountered with respect to each risk measure:
VaR does not describe the losses in the left tail. It indicates the probability of a loss of a given size occurring but stops short of describing the distribution of losses in the left tail. It is also not stable, because the underlying set of assumptions heavily influences it.
Expected shortfall, despite ticking all the boxes on matters of coherence, fails on elicitability, i.e., the ability of a risk measure to rank models' performance based on a scoring function.
Some measures are not easy to interpret, and their link to a firm's desired credit rating is somewhat obscure.
Measures calibrated on historically observed data are not forward-looking. This can cause problems for actively managed portfolios, which show a much larger scope for variance due to flexibility in investment strategies.
Spectral risk measures can be complex and difficult to understand and are not widely used. They are also not stable, because the results depend on the loss distribution assumed.
Risk aggregation is the process of pooling risk data with respect to different risks with the aim of obtaining an integrated risk profile for a firm. A bank gathers risk data from all its business units and risk types and combines them into a single, firm-wide measure. The main challenges are as follows:
Establishing the degree of risk diversification between distinct risk types is subject to technical and conceptual difficulties. As a result, most banks rely heavily on ad-hoc solutions and judgments that may not be consistent with the measurement of the individual risk components. For example, they may assume that no diversification effects exist at all, meaning that risks do not show co-movement and a firm's total risk is simply the sum of all individual risks. They may also choose to work with a constant, judgment-based inter-risk correlation matrix.
Validation of risk aggregation relies on subjective "expert judgment" because there is very little data against which the assumed inter-risk relationships can be tested.
Risk aggregation may end up underestimating the overall risk even if zero diversification is assumed. That's because the individual risk components are measured without regard to interactions between risks (e.g., between credit risk and market risk).
It is difficult to harmonize the risk measurement horizon across risk types. For example, banks will usually have to use the square root of time rule to extend the shorter horizon applied to market risk and work out an estimate of the annual economic capital required. This simplification can distort the calculation and result in values that do not reflect the 'true' annual risk.
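One common (and simplified) way to combine stand-alone economic capital figures is the variance-covariance aggregation sketched below. Both the capital amounts and the inter-risk correlation matrix are hypothetical assumptions; as noted above, such correlations are difficult to estimate and are often set judgmentally.

```python
# Variance-covariance aggregation of stand-alone economic capital (hypothetical inputs).
import numpy as np

capital = np.array([900.0, 500.0, 300.0])   # e.g., credit, market, operational (stand-alone)
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.2],
                 [0.3, 0.2, 1.0]])           # assumed inter-risk correlation matrix

aggregated = float(np.sqrt(capital @ corr @ capital))
simple_sum = capital.sum()

print(f"simple sum              = {simple_sum:,.0f}")
print(f"var-covar aggregate     = {aggregated:,.0f}")
print(f"diversification benefit = {simple_sum - aggregated:,.0f}")
```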
Validation is the "proof" that a model works as intended. Model validation is important for several reasons:
Models are usually complex, embodying many components, and it is important to ensure that each component, and the model as a whole, works as intended.
It helps to single out any assumptions that may not be in line with reality or that may have a material effect on the results.
The main challenges are as follows:
Validation of economic capital models is still a work in progress, and most validation
techniques for this particular purpose are at a very preliminary stage.
Although many techniques are emerging, none provides evidence for all of the desirable properties of an economic capital model.
Validation techniques generally show some performance imbalance. Most of them do well in areas such as risk sensitivity but not in other areas such as the overall absolute level of capital.
In order for model validation to produce meaningful results, firms need to combine several validation techniques rather than rely on any single one.
Dependency modeling attempts to establish the extent to which obligors tend to default on their obligations simultaneously. The correlation between obligors impacts the tail of the loss distribution: the more correlated the obligors, the fatter the tail. This, in turn, increases the amount of economic capital required.
There has not been significant advancement in the methodologies applied by banks in the area of dependency modeling over the past ten years. However, improvements have been made in the infrastructure supporting these methodologies. For example, we now have improved databases that house and document the underlying concepts, and there has also been better integration with internal risk measurement and risk management.
Modeling the correlation between obligors continues to be an uphill task, particularly during times of stress. At the moment, the values being used by banks are little more than estimates that depend heavily on explicit or implicit model assumptions.
Measurement and management of counterparty credit risk play a critical role in an economic capital framework. The higher the counterparty credit risk, the higher the economic capital required.
However, the measurement of counterparty credit risk represents a complex exercise:
Market-Risk-Related Challenges
Measuring counterparty credit exposure requires simulations of market risk factors. This helps map out the possible scenarios. The following challenges arise in the process:
The simulation models combine all positions in a portfolio into a single simulation, such that gains from one position can be offset by losses in another position in the same simulation run. However, netting across counterparties is not permitted in practice, and therefore a firm has to estimate its exposures at the legally enforceable level of netting. This increases the complexity of the exercise.
A firm has to estimate counterparty risk exposure over multiple periods. However, market risk VaR models usually narrow the analysis down to a single short-term period. As such, a multi-period exposure profile has to be modeled separately.
Credit-Risk-Related Challenges
In case a bank has not transacted with a given counterparty in the past, it has to compute the probability of default and loss given default for the counterparty and transaction from scratch. For some counterparties, estimating volatility and leverage and getting information about the investment strategies employed can be a big challenge.
Even for counterparties with which a bank has some prior exposure, the bank still needs to update its estimates regularly as new information arrives.
In most banks, the estimation of exposure at default (EAD) is independent of the estimation of the
probability of default (PD) and the loss-given default (LGD). Different teams may even carry out the
two processes. However, the PD and LGD may rise at the same time, giving rise to wrong-way risk.
This type of risk is difficult to identify and quantify because it requires a good understanding of the
market risk factors the counterparty is exposed to. Quantifying wrong-way risk is even more of a
challenge for banks that separate EAD estimation from PD-LGD estimation.
Operational-Risk-Related Challenges
When transacting with counterparties, a bank has to constantly monitor counterparty risk through activities such as margining, marking-to-market, collateral calls, and more. These activities require robust systems, processes, and controls, all of which are potential sources of operational risk.
Quantifying operational risk is especially challenging when dealing with a new product or process. A bank, in such an instance, also has to estimate losses that would result from infrequent but severe events.
There are two counterparty categories: those that have agreed to margin via a credit support annex and those that have not. For both categories, a bank has to model counterparty risk and attempt to
forecast the most likely scenarios over a look-ahead period. For margined counterparties, the
forecasting period is short, associated with a reasonable "cure period" between when a counterparty
misses a margin call and when the underlying positions can be closed out. For non-margined
counterparties, the forecasting period is generally much longer – sometimes as long as the contract's
life.
This variation in modeling horizons makes it difficult to aggregate risks across these two classes of
counterparties since most risk models take a single modeling horizon (e.g., one day for VaR models
and one year for economic capital models) for all positions.
Interest rate movement affects an institution's earnings by altering interest-sensitive income and
expenses. Interest rates also affect the value of the underlying assets, liabilities, and off-balance
sheet instruments. This is because the present value of future cash flows changes when interest
rates change. As such, a bank has to calculate the economic capital for interest rate risk.
Modeling interest rate risk is particularly challenging when the banking book items have embedded
optionality.
A bank may have extended consumer loans that give a borrower the option to pay back the loan early (a prepayment option).
Deposits usually have no withdrawal restrictions, and the depositor has the option to withdraw funds at any time.
Such options make it difficult to predict future cash flows, complicating efforts to model interest
rate risk.
The interest rate charge attached to a bank’s products is also affected by changes in the market rate.
A bank may have to reprice its products when the market rate changes in order to remain
competitive.
In order to make the most of internal risk measures, such as economic capital, that are not designed for regulatory purposes, the BIS recommends that supervisors consider the following:
1. Use of Economic Capital Models
A bank should be in a position to demonstrate how such models are used in the corporate
decision-making process so as to assess how the model impacts the risks the bank takes on.
In addition, it is important that the board has a basic understanding of gross (stand-alone) and
net (diversified) enterprise-wide risk when assessing the bank’s net risk tolerance.
2. Senior Management
Senior management should set the tone at the top and understand why it is important to use
economic capital measures when conducting a bank's business. Senior management should
take measures to ensure the meaningfulness and integrity of economic capital measures.
3. Transparency and Integration into Decision-Making
A bank should have sufficient documentation for its economic capital models and incorporate them into decision-making in a transparent manner. Economic capital model results should be taken seriously in order to be useful to senior management for making business decisions.
4. Risk Identification
Risk measurement should start with a rigorous and comprehensive risk identification process. The quantification engine for economic capital must identify the relevant risk drivers, positions, and exposures. Failure to do so creates room for slippage between the risks a bank actually faces and the risks that are capitalized.
5. Risk Measures
A bank should understand that all risk measures have their strengths and weaknesses and that no single measure is ideal for all purposes.
6. Risk Aggregation
A bank should understand that the accuracy of the aggregation process depends upon the quality of the individual risk measures and the assumptions about the interactions between risks embedded in the measurement process, and that effective and informative aggregation requires these inputs to be measured on a reasonably consistent basis.
7. Validation
The validation of economic capital models should be conducted rigorously and
comprehensively. Besides, it should be aimed at demonstrating that a model is fit for its
purpose.
A bank should pay attention to the level of dependency embedded in portfolio credit risk
models. This is because dependency impacts the determination of economic capital needs for
credit risk.
A bank should understand that there are tradeoffs to be considered when choosing between the available methods of measuring counterparty credit risk, and additional tools, such as stress testing, may be needed to supplement the chosen method.
Any financial instrument with embedded options needs to be examined closely in order to capture the risks posed by those options adequately.
Credit Portfolio Management
Benefits
Constraints
The credit quality of each borrower is determined in the context of the portfolio, not on a
stand-alone basis.
Risk-based Pricing
Benefits
A bank can maximize its profitability. For example, a bank can do so by making pricing
changes that focus on customer relationships and the enhancement of its reputation.
Constraints
All business units and projects have to be evaluated on the basis of the risk-adjusted return
on capital, meaning that some projects may not make the cut.
Customer Profitability Analysis
Benefits
A bank can profile customer segments on the basis of their profitability. Segments that are barely
Constraints
Analyzing many risks and aggregating them at the customer level may result in considerable complexity and cost.
Coming up with segments of customers on the basis of the net return per unit of risk can
be quite difficult.
Management Incentives
Benefits
targets.
Constraints
Evidence suggests that compensation schemes rank quite low among the actual uses of economic capital measures.
Controls and governance have a major impact on the success of an economic capital framework. Senior management has a duty to ensure that controls are put in place and used as intended and that there is a well-established governance structure covering the entire economic capital process. Among other things, this calls for:
Sufficient documentation for all tools used to measure risk and allocate capital.
Strong policies that serve to ensure that the established economic capital guidelines are
observed.
An outline of just how the established economic capital guidelines ought to be applied in day-to-day decision-making, as well as clearly assigned responsibilities within the overall economic capital framework.
A bank may opt for a centralized economic capital function that is responsible for measuring and allocating capital across the firm, or for a more decentralized process where each business unit manages its risk in accordance with the amount of capital allocated to it.
A bank should measure economic capital and conduct assessment tests regularly – either
monthly or quarterly.
Although most banks apply stress tests to individual risks, the practice is not widespread enough. In addition, there is a need for integrated stress tests that allow a bank to assess the combined effect of adverse scenarios on its overall capital adequacy.
Economic capital should not be the sole determinant of required capital; shareholders' expectations, rating agency views, and regulatory requirements also matter.
There is no common definition of available capital across banks, either within a country or across countries. Many banks define available capital as Tier 1 capital or a similar regulatory measure, which makes comparisons across banks difficult.
Economic capital model results need to be transparent and taken seriously. This way, they will be useful to senior management for making business decisions.
Practice Question
Which of the following risk measure(s) is (are) NOT coherent because it (they)
A. Value at risk.
B. Expected shortfall.
C. Standard deviation.
The correct answer is C.
Options A and D are incorrect. Value at risk is not coherent either, but that is because it fails to satisfy the subadditivity property.
Option B is incorrect. Expected Shortfall (ES) is a coherent risk measure, but it fails on
elicitability, i.e., the ability of a risk measure to rank models’ performance based on a
scoring function.
Reading 118: Capital Planning at Large Bank Holding Companies:
Supervisory Expectations and Range of Current Practice
After completing this reading, you should be able to:
Describe the Federal Reserve's Capital Plan Rule and explain the seven principles of an
effective capital adequacy process for bank holding companies (BHCs) subject to the rule.
Describe practices that can result in a strong and effective capital adequacy process for a BHC in the following areas:
Risk identification.
Corporate governance.
Capital policy, including setting goals and targets and contingency planning.
Estimation methodologies for losses, revenues, and expenses.
The Capital Plan Rule refers to a raft of regulations and policies developed by the Federal Reserve to
ensure that bank holding companies (BHCs) have enough capital to withstand severe stress. It forms
part of a series of measures introduced following the 2007-2009 financial crisis aimed at
restoring stability and confidence in the financial sector. By definition, a bank holding company is a
corporate entity that controls one or several operating banks. There are more than 100 BHCs in the
U.S., with the top 5 holding a combined asset value of well over $8 trillion.
Figure 1 – Top 5 U.S. Bank Holding Companies by Asset Size $ (trillion)
Capital provides a cushion against unexpected losses and safeguards the continuity of the holding
company and its constituent banks. It serves as the first line of defense against losses. A BHC could
fail if its liabilities exceed its assets; that would cause massive financial turmoil and impose a burden
on taxpayers and deposit insurance funds. The health of BHCs is heavily intertwined with the health of the broader financial system and the economy.
The Federal Reserve attaches a lot of importance to BHCs' internal capital planning processes.
In addition, it is committed to regular supervision to ensure that BHCs build resilience. The Federal
Reserve runs a supervisory program called the Comprehensive Capital Analysis and Review (CCAR).
Under the CCAR, BHCs are required to submit an annual capital plan to the Federal Reserve Board.
Robust internal capital planning helps to ensure that BHCs have sufficient capital in a broad range of
future macroeconomic and financial market environments. The attainment of this end calls for the
governance of all major capital actions. Among others, these include dividend payments, share repurchases, and share issuances.
As per the Capital Plan Rule, all U.S.-domiciled, top-tier BHCs with total consolidated assets of $50
billion or more have to develop and maintain a capital plan supported by a robust process for
assessing their capital adequacy. To realize this endeavor, the Capital Plan Rule sets out seven
principles.
Figure 2 - Seven Principles of an Effective Capital Adequacy Process
Risk Identification
BHCs should have a thorough risk identification framework to ensure that all risks are accounted for
when assessing capital needs. The framework should look into both on- and off-balance-sheet
positions. BHCs should closely assess the effectiveness of all hedging strategies because
exposures and asset values can change rapidly in a stressed market. Senior management should
regularly update risk assessments, review risk exposures, and carry out stress tests to map out
potential scenarios. While identifying risks, a firm should engage all senior representatives from
major lines of business, corporate risk management, finance and treasury, and other business and risk
functions. All the risks identified should support BHCs' capital adequacy assessments and may be
helpful in capital contingency plans as early warning indicators or contingency triggers, where
appropriate. Special attention should be given to new products whose risks may fall outside the scope of the existing risk identification framework.
A BHC should have strong internal controls that help govern internal capital planning processes.
A BHC's internal control framework should encompass its entire capital planning process. That
includes the risk measurement systems used to produce input data, the models used to generate loss
and revenue estimates, and the reporting framework used to produce reports for the management and the board.
There should be a regular independent review of internal controls by internal audit. The audit should
include an extensive review of the full process, not just a few components. This is important to
ensure that the process conforms to the expectations of the board and supervisors. Internal
controls also include model documentation and validation, where model estimates are continually
screened against actual results. For internal controls to add maximum value to the capital planning
process, it is important that all audit staff have the appropriate competence and a good understanding
of capital requirements.
Internal Controls
To facilitate the governance of their capital planning process, BHCs need an internal control framework.
BHCs' internal control frameworks should address all aspects of the capital planning process,
including the risk measurement and management techniques used to generate input data, the models
and other techniques used to produce loss and profit estimates, the aggregation and reporting
frameworks used to make reports for the management and the board, and the process for making capital decisions.
Internal Audit
Internal audit is required to play a key role in evaluating internal capital planning and its various
components. The audit should conduct a comprehensive review of the process frequently to ensure
that the entire end-to-end process functions in line with supervisors' and the BHC's board of directors'
expectations and in accordance with policies and procedures. Internal audit should review the way
in which deficiencies are identified, tracked, and remediated. Audit staff should be competent and
influential in identifying and escalating key issues. The internal audit function should also report on all identified deficiencies to senior management and the board.
BHCs are required to conduct independent reviews and validation of all models used in internal
capital planning, in line with existing supervisory guidance on model risk management. Validation staff
should be competent and independent from model developers and business areas. Validation should include an evaluation of conceptual soundness, ongoing monitoring, and an "outcomes analysis."
BHCs are required to maintain an inventory of all models used in the capital planning process,
including input or "feeder" models. These models produce estimates that feed into the models that
generate final projections of loss, revenue, or expense. The validity of models for estimating net
income and capital under stressed conditions should be considered, since models designed for business
activities already in place may not be appropriate under stressed conditions. BHCs are also required to have a process for addressing identified model weaknesses.
In general, BHCs should pay more attention to model risk management, including enhancing practices
around model review and validation. Stronger capital planning practices have been observed where BHCs ensure that models are validated for their intended purpose.
BHCs should have comprehensive policies and procedures for the capital planning process. The
policies and procedures should provide for a consistent and repeatable process for all capital
planning elements and ensure transparency to third parties regarding this process. The policies
should be reviewed and updated frequently, at least once per year. Management and staff should
adhere to existing policies and procedures, with exceptions clearly outlined.
BHCs should implement an internal control system to ensure the integrity of reported results, as
well as the documentation, review, and approval of all material changes to the capital planning
process and its components. All levels of the BHC's capital planning process should be subject to such controls. In particular, BHCs should:
Make sure management information systems (MIS) are robust enough to support capital planning and analysis.
Establish reconciliation and data integrity processes for all key reports.
Provide aggregate, enterprise-wide capital planning results that highlight any manual
adjustments made during aggregation and how they compensate for identified weaknesses.
Documentation
BHCs should document all aspects of their capital planning process, including their risk-measurement
and risk-management infrastructure, loss- and resource-estimation methodologies, the process for
making capital decisions, and the effectiveness of their control and governance functions.
Governance
A BHC's board of directors has ultimate oversight responsibility over capital planning. Before making
capital decisions, the board should ensure that it is well informed about all material risks and
exposures. For that reason, the board should receive an updated risk assessment report at least
quarterly. Capital adequacy information should include capital measures under current conditions as
well as their potential values under stressed conditions. All scenarios brought to the board for
consideration during the capital allocation process should include sufficient detail on how they were
arrived at, including the assumptions made. The board should also receive a formal report about
mitigation strategies to address key limitations and take action when necessary.
Senior Management
Senior management is responsible for ensuring that board-approved capital planning policies and controls are implemented effectively, which includes overseeing the
implementation process. Senior management should also ensure that there are effective controls
around the capital planning process and that stress scenarios are sufficiently severe and cover all
material risks. Proposed capital goals should have sufficient analytical support and fully reflect the
expectations of supervisors. Weaknesses in the capital planning process should be evaluated for materiality.
Capital Policy
A capital policy refers to the principles and guidelines a BHC uses for capital planning, capital
issuance, and usage and distributions. It should include internal capital goals; quantitative or
qualitative guidelines for dividends and stock repurchases; strategies for addressing potential capital
shortfalls; and internal governance procedures around capital policy principles and guidelines.
The main factors and key metrics that influence the size, timing, and form of capital
distributions.
Specific situations that would cause the BHC to suspend dividends, share splits, or stock
repurchase programs.
Factors the BHC would consider when contemplating the replacement of common equity
Key roles and responsibilities include individuals mandated with the production of
Capital goals should be compatible with a BHC’s risk tolerance, risk appetite statement,
Capital targets should be set above the BHC's capital goals to ensure that capital levels will not fall below the goals under stressed conditions.
BHC stress scenarios must incorporate macroeconomic and financial conditions that are specifically
tailored to stress the BHC's idiosyncratic and key vulnerabilities. Depending on the business model, these
vulnerabilities arise from the mix of assets and liabilities, geographic footprint, portfolio
characteristics, and revenue drivers. In this sense, a BHC stress scenario that primarily replicates the supervisory scenario may not adequately capture the BHC's specific vulnerabilities.
BHCs with well-developed scenario-design practices have carefully and creatively adapted
BHC stress scenarios to their unique business models, highlighting key sources of risk not included
in the supervisory severely adverse scenario. Furthermore, the assumptions underlying BHC stress scenarios should be clearly documented.
Although recessions can have a detrimental effect on most BHCs' business activities, some BHCs
may have business models or important activities that generate vulnerabilities not accounted for by
scenario analysis based on a stressed macroeconomic environment (or for which a severely
depressed economy is not the primary source of vulnerability). Such BHCs should include elements
that address the key revenue vulnerabilities and sources of loss specific to their businesses and activities.
The recession incorporated in the BHC stress scenario, together with any additional elements meant to address
specific businesses or activities, should lead to considerable stress for the organization, such as a
significant reduction in capital ratios compared with baseline projections. Nevertheless, a BHC
stress scenario that results in lower capital ratios than those under the adverse supervisory scenario
is not, in and of itself, a safe harbor. The stress scenario included in a BHC's capital plan should place
significant strain on revenue generation and loss absorption according to its unique risks and
vulnerabilities.
Variable Coverage
BHCs should include enough variables in their stress scenario to address all material risks related to
their exposures and business activities. BHCs must establish a consistent process for selecting the
final set of variables and explain the rationale behind their selection.
BHCs with well-developed scenario-design practices produced scenarios where the link between the
variables included in the scenario and potential sources of risk was transparent and straightforward.
Clear narratives further enhanced transparency. On the other hand, BHCs with weaker scenario-design
practices produced stress scenarios that excluded variables important to the BHC's risk profile.
Clear Narratives
Scenarios should be accompanied by clear narratives explaining how the scenarios address the
vulnerabilities and risks facing the BHC. BHCs with stronger scenario-design practices provided
narratives describing the relationship between scenario variables and the risks faced by the BHC's
business lines. Further, the narratives illuminated how the variables corresponded to the BHC's
internal risk-management framework. On the other hand, weaker practices provided narratives that did not clearly link the scenario variables to the BHC's risks.
Scenario analyses should yield estimates of losses, revenues, and expenses. BHCs should have stress
testing methodologies that generate credible estimates consistent with the assumed scenario
conditions. In addition, estimates should be anchored in empirical evidence, and the estimation approaches should be well supported.
Although estimates should be derived from internal data, it is important to consider using external
data occasionally so as to make estimation models more robust. T he quantitative approaches used to
estimate losses, revenues, and expenses depend on the type of portfolio, the granularity, and the
length of available time series of data, as well as the materiality of a given portfolio or activity.
The Federal Reserve allows BHCs to use a range of quantitative approaches to estimate losses,
revenues, and expenses. The choice of approach depends on the type of portfolio or
activity for which it is used, the granularity and length of available time series of data, and
the materiality of a given portfolio or activity. The Federal Reserve stops short of directing BHCs to
use a specific estimation method. However, each BHC is expected to estimate its losses, revenues,
and expenses at sufficient granularity to make it possible to identify the key risk drivers.
Losses, revenues, and expenses should be estimated separately for portfolios or business lines that are
sensitive to different risk drivers. Judgment-based adjustments may be made to model estimates to
account for risks that are not well captured by the models.
Federal Reserve guidelines require BHCs to clearly document the key methodologies and
assumptions used to estimate losses, revenues, and expenses. Documentation should
clearly break down all the inputs and outputs, both qualitative and quantitative.
Loss Estimation
To estimate credit losses, BHCs can use either an economic loss approach (i.e., expected losses) or
an accounting-based loss approach (i.e., charge-offs and recoveries). Under the expected loss approach,
BHCs should estimate the probability of default (PD), the loss given default (LGD), and the exposure at
default (EAD), and then analyze the determinants of each component. Under the accounting-based loss
approach, BHCs should include variables that represent the risk characteristics of the portfolio and
then estimate the statistical relationship between charge-off rates and macroeconomic variables.
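To make the expected loss approach concrete, the sketch below computes a portfolio-level expected loss as the sum of PD x LGD x EAD across exposures. The loan inputs are hypothetical illustrations, not figures from the guidance.

# Minimal sketch of the expected loss approach: EL = PD x LGD x EAD per exposure.
# The loans listed here are hypothetical examples, not data from the reading.
loans = [
    {"pd": 0.02, "lgd": 0.45, "ead": 10_000_000},
    {"pd": 0.05, "lgd": 0.60, "ead": 4_000_000},
]
expected_loss = sum(l["pd"] * l["lgd"] * l["ead"] for l in loans)
print(f"Portfolio expected loss: {expected_loss:,.0f}")  # 90,000 + 120,000 = 210,000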
Operational Risk
Most BHCs estimate operational risk by estimating the correlation between operational losses and
macroeconomic factors. If a statistically significant relationship between operational losses and macro
factors is not found, alternative estimation methods are employed. These include using historical
data on the BHC's own loss experience to model possible scenarios and leveraging input from
management.
The value at risk arising from operational events can be estimated using a modified loss
distribution approach (LDA). The LDA involves estimating probability distributions for the
frequency and severity of operational loss events for each defined unit of measure, whether it is a
business line, an event type, or some combination of the two. Using Monte Carlo simulation, the
estimated frequency and severity distributions are then combined to estimate the probability distribution of aggregate operational losses.
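A minimal sketch of the simulation step is shown below. The Poisson frequency and lognormal severity parameters are illustrative assumptions, not values from the reading; the idea is simply to compound simulated frequencies and severities and read off a high percentile of the aggregate loss.

import numpy as np

# Monte Carlo loss distribution approach (LDA) sketch for one unit of measure.
# Frequency ~ Poisson(lambda), severity ~ lognormal(mu, sigma); all parameters are hypothetical.
rng = np.random.default_rng(42)
lam, mu, sigma = 25.0, 10.0, 2.0          # assumed annual frequency and severity parameters
n_sims = 100_000

annual_losses = np.empty(n_sims)
for i in range(n_sims):
    n_events = rng.poisson(lam)            # number of loss events in the simulated year
    severities = rng.lognormal(mu, sigma, size=n_events)
    annual_losses[i] = severities.sum()    # aggregate annual operational loss

var_999 = np.percentile(annual_losses, 99.9)   # 99.9th percentile of aggregate losses
expected_loss = annual_losses.mean()
print(f"99.9% aggregate loss: {var_999:,.0f}; capital ~ VaR - EL = {var_999 - expected_loss:,.0f}")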
Market Risk and Counterparty Risk
BHCs with sizeable trading operations may incur significant losses from market movements and
counterparty exposures, particularly under a stress scenario. Credit losses arise as a result of potential
deterioration in the credit quality or outright default of a trading counterparty. Market risk losses
arise from adverse moves in risk factors such as interest rates, credit spreads, and equity and commodity prices.
BHCs use two different techniques for estimating such potential losses: probabilistic approaches,
which generate a distribution of potential losses, and deterministic approaches, which generate a point estimate of portfolio-level losses under a specific
stress scenario. BHCs that choose probabilistic approaches should offer evidence that justifies
their choice; they have to demonstrate that such methods can yield more severe risk scenarios
compared to historical scenarios. BHCs that choose deterministic approaches should demonstrate
that they have considered a wide range of possible scenarios so as to adequately cover their
risk exposures. All assumptions employed in either approach should be clearly spelled out.
A key part of the annual Comprehensive Capital Analysis and Review (CCAR) submissions by
BHCs is their pre-provision net revenue (PPNR). Pre-provision net revenue is defined as interest
and non-interest income, less interest and non-interest expense. PPNR captures a large number
of items on the income statement, including interest income on loans and interest expense related
to retail deposits.
Pre-Provision Net Revenue (PPNR) = Net interest income + Non-interest income − Non-interest expense

where net interest income can be approximated as Loan balances × Loan yields − Deposit balances × Deposit rates.
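As a quick numerical illustration, PPNR can be projected from balance and rate assumptions as in the sketch below; every input is a made-up assumption, not a figure from the rule.

# Hypothetical one-quarter PPNR projection; all inputs are illustrative assumptions.
loan_balances, loan_yield = 50_000.0, 0.015       # $ millions, quarterly loan yield
deposit_balances, deposit_rate = 45_000.0, 0.004  # $ millions, quarterly rate paid on deposits
non_interest_income, non_interest_expense = 300.0, 450.0

net_interest_income = loan_balances * loan_yield - deposit_balances * deposit_rate
ppnr = net_interest_income + non_interest_income - non_interest_expense
print(f"Net interest income = {net_interest_income:.0f}, PPNR = {ppnr:.0f}")  # 570 and 420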
As per the Capital Plan Rule, BHCs should estimate revenues and expenses over the nine-quarter
planning horizon. In line with that, BHCs should have effective processes for projecting PPNR.
When projecting PPNR, BHCs should take into account the impact of regulatory changes on
performance and on their ability to reach set targets. The underlying assumptions for
revenue, expense, and loss estimates should be theoretically and empirically sound. To ensure that
the models yield robust results, it is advisable to use a mix of external and internal data.
In all cases, BHCs should ensure that projections (i.e., PPNR, loss, balance sheet size, and RWA)
present a coherent story within each scenario. BHCs should strive to establish a clear,
comprehensible link among revenue, expenses, the balance sheet, and any applicable off-balance-
sheet items. In addition, they should document all methodologies and assumptions used to generate
these projections.
Balance sheet assumptions used to project net interest income should be consistent with balance
sheet assumptions considered as part of loss estimation. BHCs should ensure that net interest
income projections are based on methodologies that incorporate discount or premium amortization
adjustments for assets not held at par value and that would materialize under a range of scenarios.
Equally important is the need to consider the various impacts of the assumed scenario conditions on
their non-interest expense projections, including costs that are likely to increase during a downturn,
e.g., credit collection costs. The focus should be on uncovering key determinants of individual
expense items and how sensitive such determinants are to changing macro conditions and business
strategies.
BHCs should have a well-defined and well-documented process of generating projections of on- and
off-balance sheet items and risk-weighted assets (RWA) over a stress horizon period. It is important
to consider the drivers of changes in every balance sheet item under consideration (assets and
liabilities). Projections should not use favorable assumptions that do not conform to reality or near-
future expectations. A good example would be large changes in an asset mix that serve to decrease
the BHC’s risk weights and post-stress capital ratios but are not adequately supported by PPNR or
loss estimates. RWA projections should incorporate relationships between revenues, expenses, and
Practice Question
Which of the following should a BHC model's review and validation process NOT
include?
A. An evaluation of conceptual soundness.
B. Ongoing monitoring that includes verification processes and benchmarking.
C. Policies and procedures.
D. An outcomes analysis.
The correct answer is C.
A model review and validation process should include an evaluation of
conceptual soundness (Option A). In addition, there should be ongoing monitoring that
includes verification processes and benchmarking (Option B). Last but not least, an
outcomes analysis should be performed (Option D).
Note: Policies and procedures are not, by themselves, part of the model review and validation
process.
Reading 119: Capital Regulation Before the Global Financial Crisis
After completing this reading, you should be able to:
Explain the motivations for introducing the Basel regulations, including key risk exposures
addressed, and explain the reasons for revisions to Basel regulations over time.
Explain the calculation of risk-weighted assets and the capital requirement per the original
Basel I guidelines.
Describe measures introduced in the 1995 and 1996 amendments, including guidelines for
netting of credit exposures and methods to calculate market risk capital for assets in the
trading book.
Describe changes to the Basel regulations made as part of Basel II, including the three
pillars.
Compare the standardized approach, the Foundation Internal Ratings-Based (IRB)
approach, and the advanced IRB approach for the calculation of credit risk capital under
Basel II.
Compare the basic indicator approach, the standardized approach, and the Advanced
Measurement Approach for the calculation of operational risk capital under Basel II.
Financial regulations have developed over the years in response to stressful periods that exposed
the weaknesses of the regulations in place at the time. For instance, before governments'
intervention, banks and insurance companies were created without official approval, and their
success or failure depended on their ability to persuade clients of their soundness. Moreover, they tried to build
a good reputation by garnering support from prominent people in the community and by accumulating a capital base.
Failures of these institutions were frequent due to insolvency and a lack of client
confidence. When failures occurred, clients scrambled to withdraw their funds, and once withdrawals
turned into panic, even solvent institutions could fail if they could not raise cash quickly enough to meet the demand.
These failures prompted the first regulations, such as arrangements under which financial institutions would support one another in the event of excess
withdrawals. Additionally, early clearinghouses were established, which were partial arrangements
for mutual support. The clearinghouses had the right of private inspection and hence of monitoring and
institutionalizing solvency.
However, the privatization of inspection through the clearinghouses came with some drawbacks:
i. When a panic was too massive, entities that lacked the power to print money
lacked enough resources to support the financial system; thus, the clearinghouses and the
private banks were gradually replaced by central banks as the lenders of last resort.
ii. Financial crises proved to have a massive impact on the whole economy. Therefore,
governments invested the necessary effort to keep financial institutions solvent enough to
continue operating.
iii. The clients of failed financial institutions were unhappy when they experienced losses;
apart from fraud, customers complained of biases and of the difficulty of monitoring the
institutions themselves.
iv. Globalization of the financial system required international coordination of regulations.
International trade expanded in the 1960s and 1970s, which saw the growth of multinational
corporations, and thus foreign exchange and capital flows increased. As a result, multinationals valued
financial providers with operations in many countries, giving rise to the following problems:
i. Huge financial institutions, such as international banks, became linked together such that the
failure of one entity would be amplified across many countries.
ii. Banks and regulators recognized the competitive advantages and disadvantages that arose as a result
of differences in national regulation and capital requirements.
iii. There was a need for sound technical procedures in clearing and settlement, such as
delivery times for currencies and the handling of time zone differences.
Due to these problems, it was evident that the solution lay in official-sector cooperation and
coordination. Therefore, the Basel Committee on Banking Supervision (BCBS) was formed in 1974 by the central banks and supervisory authorities of the G10 countries.
The Basel I accord was the specification for capital regulation developed in the late 1980s by the
members of the BCBS (mostly G10 nations). The accord was proposed in December 1987, agreed upon
in July 1988, and implemented by the end of 1992. However, by the early 2000s, it was recognized as being insufficiently risk-sensitive.
Basel I has no legal force of its own. Nevertheless, countries chose to implement the Basel I standards through national laws and regulations.
Motivations for Basel I
I. There was a rise in international financial transactions even after Herstatt Bank's failure in the
summer of 1974. In addition, the G10 countries unanimously agreed that banks should have
sufficient capital to support the risks they were taking internationally.
II. There was intense competition between international banks across their respective
countries. Since minimum capital requirements varied substantially across countries,
there was a feeling that banks in countries with lower minimum capital requirements had a
competitive advantage.
The key features of Basel I are the minimum risk-based capital ratio and the definitions of its numerator (capital) and
denominator (risk-weighted assets).
The Basel Committee on Banking Supervision's main goal is to ensure that financial institutions have
enough capital to remain solvent in times of stress. Therefore, it had to find a way of measuring
this sufficiency.
Specifying the minimum capital requirement in terms of a leverage ratio (the ratio of capital
to the book value of assets) would penalize financial institutions with low-risk portfolios and
advantage those with high-risk portfolios. Therefore, the BCBS developed a risk-based ratio: the ratio of capital to risk-weighted assets (RWA).
Risk-weighted assets included assets on the balance sheet according to accounting
requirements, off-balance-sheet exposures (such as loan commitments), and derivative exposures.
The requirement of Basel I was that the consolidated bank maintain the following ratios:

Tier 1 capital / RWA > 4%   and   Total capital / RWA > 8%
Note that total capital is the sum of Tier 1 and Tier 2 capital, where Tier 2 should not be more than Tier 1 capital.
According to Basel I, Tier 1 capital is defined as common equity and disclosed reserves (retained
earnings and certain types of minority interest in subsidiaries) less goodwill. Tier 2 capital includes items such as
hybrid instruments and subordinated debt.
In later refinements of Basel I, the proportion of loan reserves that could be included in capital was lowered from 2% to 1.25% of
RWA.
The BCBS did not state the underlying assumptions explicitly, but intuitively they were:
I. Tier 1 capital was meant to maintain solvency, while Tier 2 capital would provide
resources for the recapitalization of an institution in resolution and lower the impact of
failure on depositors.
II. General loan reserves were not considered to have the loss-absorbing capacity needed to
sustain solvency; rather, loan reserves were taken to cover losses that were already expected.
The on-balance-sheet amount of each asset was made risk-sensitive by multiplying it by a risk weight:

RWA = Σ_{i=1}^{N} w_i × A_i

Where:
w_i = the risk weight of asset i; and
A_i = the on-balance-sheet amount of asset i.
Looking at the risk weights, the maximum risk weight is 100%. Moreover, OECD government debt has a
0% weight, reflecting an assumption that OECD governments would not default on their obligations.
Example: Calculating the RWA
The assets of an Australian bank consist of 100 million AUD of American government bonds, 400
million AUD of loans to corporations, 300 million AUD of uninsured residential mortgages, and 250
million AUD of residential mortgages issued by the public sector. What is the value of the bank's risk-weighted assets?
Solution
Using the risk weights under the Basel I accord:
RWA = 0% × 100 + 100% × 400 + 50% × 300 + 20% × 250 = 600 million AUD
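The same calculation can be scripted. The sketch below simply encodes the example's exposures and the Basel I risk weights used above; the labels are my own.

# Basel I on-balance-sheet RWA for the example above (amounts in millions of AUD).
exposures = [
    ("OECD government bonds", 100, 0.00),
    ("Corporate loans", 400, 1.00),
    ("Uninsured residential mortgages", 300, 0.50),
    ("Public-sector residential mortgages", 250, 0.20),
]
rwa = sum(amount * weight for _, amount, weight in exposures)
tier1_required = 0.04 * rwa       # Basel I minimum Tier 1 capital
total_required = 0.08 * rwa       # Basel I minimum total capital
print(rwa, tier1_required, total_required)   # 600.0 24.0 48.0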
As stated earlier, RWA includes balance-sheet exposures, off-balance-sheet exposures, and other
non-traditional exposures such as derivatives. There are two steps to determine the RWA contribution of off-balance-sheet exposures:
(I) Multiply the principal amount by a credit conversion factor to get the credit equivalent (on-balance-sheet equivalent) amount.
(II) Multiply the credit equivalent amount by a risk weight depending on the nature of the
counterparty.
Credit Conversion Factor and Off-Balance-Sheet Asset Category:
100%: Guarantees on loans and bonds, bankers' acceptances and equivalents
50%: Warranties and standby letters of credit related to transactions
20%: Loan commitments with an original maturity greater than or equal to one year
0%: Loan commitments with an original maturity of less than one year
For example, OECD municipalities are subject to a risk weight of 20%. Therefore, a $200 million
five-year loan commitment to an OECD municipality implies that the credit equivalent amount is $40
million (i.e., 20% × $200 million). The loan's contribution to RWA is thus $8 million (i.e., 20% × $40
million).
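A small sketch of the two-step treatment, using the commitment from the example above; the 20% conversion factor and 20% counterparty weight are those given in the text.

# Off-balance-sheet exposure under Basel I: principal -> credit equivalent -> RWA.
principal = 200.0          # $ millions, five-year loan commitment
ccf = 0.20                 # credit conversion factor for commitments >= 1 year (per the table above)
risk_weight = 0.20         # OECD municipality counterparty

credit_equivalent = ccf * principal                   # 40.0
rwa_contribution = risk_weight * credit_equivalent    # 8.0
print(credit_equivalent, rwa_contribution)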
In the case of derivative instruments, Basel I offered two methods of calculating the credit
equivalent amount: the current exposure method and the original exposure method.
Note that the credit equivalent calculation was amended in 1995 to include maturities of over five years.
The steps for calculating the credit equivalent of a derivative under the current exposure method are as follows:
I. Compute the current market value of the contract, denoted by V. If the value of the contract
is negative, the current exposure is taken to be zero.
II. Add an amount, denoted by D, which accounts for possible changes in the contract's future
market value. For interest rate contracts:
D = 0% of the notional value for remaining maturities of less than one year;
D = 0.5% of the notional value for maturities between one and five years;
D = 1.5% of the notional value for maturities of more than five years.
For foreign exchange contracts:
D = 1% of the notional value for maturities of less than one year;
D = 5% of the notional value for maturities between one and five years;
D = 7.5% of the notional value for maturities greater than five years.
The original exposure method applied only to interest rate and foreign exchange contracts. Under this method:
I. Banks could ignore the current market value of the contract and instead apply a factor to the notional amount, for example:
0.5% of the notional value for maturities of less than one year;
1% + 1% × INT[M − 1] for maturities greater than two years, where M is the maturity in years and INT[·] denotes the integer part.
It is worth noting that equity and commodity derivatives were not covered by Basel I and that the risk
weight was determined based on the counterparty, with no weight exceeding 50%.
Example: Calculating the Credit Equivalent Amount of a Derivatives Book
The derivatives book of an American bank consists of three interest rate swaps, each with a notional
value of $200 million and maturities of 0.5, 1.5, and 2.5 years, respectively. The market value of the swaps is
$30 million. Additionally, the book contains $300 million of foreign exchange swaps, with each $100
million of notional having a different maturity.
Solution
V = $30 million. The add-on for the interest rate swaps is 0% × 200 + 0.5% × 200 + 0.5% × 200 = $2 million, and the add-on for the foreign exchange swaps is $11 million. Therefore:
CE = 30 + 2 + 11 = $43 million
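The current exposure method can be expressed compactly in code. The interest-rate add-on factors follow the schedule above; the individual maturities of the foreign exchange positions are not given in the example, so the values used for them below are hypothetical placeholders chosen so that their add-on matches the stated $11 million.

# Credit equivalent under the current exposure method: CE = max(V, 0) + sum of add-ons.
def addon_factor(asset_class: str, maturity: float) -> float:
    """Basel I (post-1995) add-on factors for interest rate and FX contracts."""
    if asset_class == "interest_rate":
        return 0.0 if maturity < 1 else (0.005 if maturity <= 5 else 0.015)
    if asset_class == "fx":
        return 0.01 if maturity < 1 else (0.05 if maturity <= 5 else 0.075)
    raise ValueError("unsupported asset class")

positions = [
    ("interest_rate", 200, 0.5), ("interest_rate", 200, 1.5), ("interest_rate", 200, 2.5),
    # FX maturities below are assumed (0.5, 2, 3 years) so the add-on matches the example's $11m.
    ("fx", 100, 0.5), ("fx", 100, 2.0), ("fx", 100, 3.0),
]
current_value = 30.0
add_ons = sum(notional * addon_factor(cls, m) for cls, notional, m in positions)
credit_equivalent = max(current_value, 0.0) + add_ons
print(add_ons, credit_equivalent)   # 13.0 and 43.0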
The handling of derivative exposures under Basel I was basic. However, by 1995 much had changed
following the 1987 stock market crash, the rising popularity of VaR, and the emergence of well-developed
quantitative market risk management systems. The Market Risk Amendment addressed the
issues of netting and of capital for the market risks associated with trading activities.
Netting
Netting refers to a clause in master agreements, such as those developed by the
International Swaps and Derivatives Association (ISDA). The agreement states that, in case one party
defaults on its obligations, all transactions between the defaulting party and its counterparty are considered a
single transaction. Moreover, depending on the choices made in the transactions, the agreement allows
bilateral transactions with negative and positive values to offset one another. For instance, if two
entities, A and B, enter offsetting interest rate swap agreements, the overall net
exposure of each entity is zero if interest rates do not move; thus, the value of the
portfolio is zero.
The initial Basel I did not give capital credit for netting. Conventionally, changes in
interest rates would have an offsetting effect on the market values of two opposite swaps; still, Basel I would
include an add-on for each swap, discouraging hedging. The reason behind this treatment
was that bankruptcy courts had not yet sufficiently tested the master agreements.
By 1995, the BCBS members were confident that the master netting agreements would work as required.
Thus, the 1995 amendment incorporated reductions in the credit equivalent amounts when
enforceable netting agreements were in place. When calculating the credit equivalent amount, complete netting of the market values of all positions was
allowed for each counterparty, and the add-on D_j for future changes in value was reduced for
each category of derivative j.
So, the credit equivalent amount (CEA) for a counterparty is given by:

CEA = max(Σ_{i=1}^{N} V_i, 0) + Σ_j (0.4 × D_j + 0.6 × D_j × NRR)

where the net-to-gross ratio is:

NRR = max(Σ_{i=1}^{N} V_i, 0) / Σ_{i=1}^{N} max(V_i, 0)

The numerator is the current exposure with netting, and the denominator, Σ_{i=1}^{N} max(V_i, 0), is the current exposure with no netting.
NRR may be calculated counterparty by counterparty or as an average across counterparties; the add-ons and the effect of netting differ across derivative categories.
Example: Equivalent Credit Amount for Derivatives under the Market
Risk Amendment
Assume that a bank has a portfolio of four derivatives with two counterparties. Counterparty 1 has two interest rate swaps with notional values of $100 million and $200 million and market values of −$5 million and +$14 million, respectively. Counterparty 2 has an equity option with a notional value of $100 million and a wheat option with a notional value of $200 million, with a combined market value of −$10 million.
Solution

CEA = max(Σ_i V_i, 0) + Σ_j (0.4 × D_j + 0.6 × D_j × NRR)

The current exposure for counterparty 1 is max(−5 + 14, 0) = 9, because the −5 value of the first interest rate swap is netted against the +14 value of the second. The current exposure for counterparty 2 is max(−10, 0) = 0, since exposure cannot be negative. The total current exposure is therefore 9.
Now,

NRR = max(Σ_i V_i, 0) / Σ_i max(V_i, 0) = Current exposure / Sum of positive exposures = 9 / 14 = 0.6429
The add-on for potential future exposure is calculated for each derivative:
Interest rate swaps = 0.10% × (100 + 200) = 0.3
Equity option = 10% × 100 = 10
Wheat option = 5% × 200 = 10

So, Σ_j D_j = 0.3 + 10 + 10 = 20.3

Therefore,

CEA = max(Σ_i V_i, 0) + Σ_j (0.4 × D_j + 0.6 × D_j × NRR) = 9 + (0.4 + 0.6 × 0.6429) × 20.3 = 9 + 15.95 = 24.95
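The sketch below reproduces the calculation. The individual market values of the two options are not given in the example (only their −$10 million total), so the split used here is an arbitrary assumption; only the total matters for the result.

# 1995 amendment: credit equivalent amount with netting.
# (counterparty, market value V, add-on D) for each of the four derivatives.
positions = [
    ("cpty1", -5.0, 0.001 * 100),   # interest rate swap, 0.10% add-on factor
    ("cpty1", 14.0, 0.001 * 200),   # interest rate swap
    ("cpty2", -4.0, 0.10 * 100),    # equity option; V = -4 is an assumed split of the -10 total
    ("cpty2", -6.0, 0.05 * 200),    # wheat option;  V = -6 is an assumed split of the -10 total
]

net_exposure = sum(max(sum(v for c, v, _ in positions if c == cpty), 0.0)
                   for cpty in {"cpty1", "cpty2"})                   # 9.0
gross_exposure = sum(max(v, 0.0) for _, v, _ in positions)           # 14.0
nrr = net_exposure / gross_exposure                                  # 0.6429
total_addon = sum(d for _, _, d in positions)                        # 20.3
cea = net_exposure + (0.4 + 0.6 * nrr) * total_addon
print(round(nrr, 4), round(cea, 2))   # 0.6429 24.95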
In the initial description of Basel I, market risk was left out. Recall that market risk is the potential for losses due to movements in market prices. The 1996 Market Risk Amendment introduced two approaches for computing market risk capital:
i. The Standardized Approach.
ii. The Internal Models Approach.
The internal models approach is suitable for banks with material-sized trading books because it
generally produced lower capital requirements, since asset values were assumed to be less than perfectly correlated, as
opposed to the simple summation of charges used in the standardized approach. Both approaches covered five asset categories:
I. Foreign exchange.
II. Commodities.
III. Equities and equity derivatives other than options.
IV. Options.
V. Fixed-income securities and interest-rate derivatives other than options.
Note that the internal models-based method allows banks to use their own internally developed
risk measures as inputs. On the other hand, the standardized approach is based on parameters specified by the regulators.
Under both approaches, capital charges were computed separately for specific risk (SR) and
general market risk (MR) for each of the five categories stated above. The SR and MR charges were
then summed and multiplied by 12.5 so that the usual multipliers on RWA could be applied to
them. It is worth noting that 12.5 is the inverse of 8%, so the multiplier transforms the capital
requirement into an RWA measure with particular attention to total capital. So the capital charge is
given by:
Total capital for trading assets = 0.08 × 12.5 × Σ_{j=1}^{5} (MR_j + SR_j)
For a bank using the internal model-based approach, the bank is obliged to calculate the value at risk
(VaR) for each asset category. According to the amendment, a 10-day, 99% VaR was required, and the general market risk capital was:

MR = max(VaR_{t−1}, m × VaR_avg)

Where:
VaR_{t−1} is the previous day's 10-day, 99% VaR, VaR_avg is the average 10-day, 99% VaR over the previous 60 trading days, and
m = a multiplier that was not less than 3 (but could be more than 3).
Capital for specific risk for fixed income could be computed using a standardized approach
or the bank's internal models. However, if internal models were used, the method would be
similar to that for general market risk, but the multiplier would be 4 rather than 3.
The 1996 amendment introduced Tier 3 capital, which consisted of unsecured subordinated debt with
an initial maturity of at least two years that could be used to cover market risk requirements.
However, only 70% of the market risk capital requirements could be covered by T ier 3 capital.
The 1996 Amendment concentrated on the requirements for banks using the internal model-based
approach. For instance, the amendment required the banks to have sound risk management practices and processes in place.
Moreover, the 1996 amendment required daily backtesting. For each model, banks were required
to compute the one-day, 99% VaR for each of the most recent 250 days and compare the
actual loss for each day with the VaR. If, for a given day, the actual loss exceeded the VaR, this is termed an
exception. For fewer than five exceptions, the multiplier was taken to be 3. For five or more
exceptions, the supervisor had the option to choose a larger multiplier, and for ten or more
exceptions, the multiplier was set to 4.
By the mid-1990s, some supervisors had become alarmed that Basel I was not as risk-based as it
claimed to be. For instance, the 100% risk weight lumped together corporate exposures with very different levels of
risk.
Moreover, the banking crises in the Nordic countries proved that problems could occur even in banks with
sound capitalization. Additionally, there were advances in market and credit risk quantification
and management from 1987 onwards, which incentivized more accurate risk weighting and risk management.
As a result, Basel II was initiated in the late 1990s, and the finalized version was published in 2004. Its key features were as follows:
I. Basel II required capital for operational, credit, and market risks.
II. The risk weight formulas for credit risk were based on modern credit risk models.
III. Basel II introduced the three pillars:
a. Minimum capital requirements (Pillar 1).
b. Specific requirements for supervision related to capital and risk management (Pillar 2).
c. Market discipline through disclosure (Pillar 3).
IV. Basel II introduced the repeated use of Quantitative Impact Studies (QIS) to improve the
structure of the accord. In a given QIS, banks were required to provide crucial data to be
used in calibrating the proposed rules.
The Basel II pillars formed a basis for common national practices. Pillar 2 mandated supervisors
to ensure that banks possess more than the minimum amount of capital and have an internal capital
adequacy assessment process (ICAAP) that considers their risk profile. Supervisors were required to act on
early signs that a bank's capital would fall below the minimum by recommending appropriate
actions. Moreover, supervisors were required to motivate banks to improve their risk management.
Pillar 3 advocated for more qualitative and quantitative disclosures, with the aim that pressure from market
participants would help improve banks' practices. It required qualitative
disclosures such as corporate structure, the applicability of the Basel accord, and accounting
procedures. The quantitative disclosures include the features of the bank's capital, risk measures,
and exposures.
Some banks found the pillars challenging to interpret, and disclosure practices were not uniform;
over the years, the Basel Committee emphasized them while providing more clarity.
Supervisors were wary that banks could distort internal risk measures to lower the required
capital, given the lack of sufficient supporting data and analysis. Therefore, the Basel II negotiators
introduced three methods of determining minimum capital requirements to cover credit risk:
I. The Standardized Approach: Similar to Basel I, but with increased sensitivity of risk weights.
II. The Foundation Internal Ratings-Based (IRB) Approach: In this method, risk weights
were responsive to internal measures of default probability, with the other inputs (LGD and EAD) specified by regulators.
III. The Advanced IRB Approach: In this method, risk weights were sensitive to internal estimates of PD, LGD, and EAD.
The Standardized Approach
This method was directed at banks whose internal risk measures and risk management systems
were insufficient to support the IRB methods. The risk weights were sensitive to variations in risk.
Unlike the Basel I standardized approach, whose risk weights were based on the asset type and the
country to which the obligor belongs, Basel II risk weights depended on the type of obligor, the
ratings of some obligors, and the asset types of some obligors. The table below shows some examples of
these risk weights.
As seen above, the weights were somewhat more severe for banks as compared to Basel I, but the following adjustments applied:
I. A supervisor could decide to assign a risk weight of 0% to a bank's holdings of its own sovereign's debt issued in the home country's currency. As a result, supervisors in other nations could allow their banks to apply the same weight to that exposure.
II. Claims on banks were assigned a risk weight one category less favorable than that of the
sovereign, or were risk-weighted based on the bank's own rating. All these risk weights were limited to
100%.
Basel II allowed two approaches for recognizing collateral:
i. A simple approach.
ii. A comprehensive approach.
The simple approach is similar to Basel I in that the risk weight of a counterparty could be
substituted with the risk weight of the collateral for the proportion of the exposure covered by the
collateral. The minimum risk weight for the collateralized portion was 20%, with exceptions for certain collateral such as sovereign debt.
The comprehensive approach required adjustments to the exposure and collateral amounts to
incorporate possible deviations in value. That is, the risk weight of the collateral was applied to the
adjusted collateral amount, and the counterparty's risk weight was applied to the remaining exposure.
Moreover, netting was applied separately to exposures and collateral, and either Basel-prescribed or internally estimated adjustments (haircuts) could be used.
The IRB model is based on Gordy's (2003) one-factor Gaussian copula model. Gordy postulated that,
given a well-diversified portfolio, there exists a positive link between the default probability of an
obligor and the obligor's contribution to the capital required to cap the probability of large portfolio losses at a chosen confidence level.
The Basel Committee chose a one-year time horizon for credit losses and the 99.9th
percentile of the credit loss distribution. So the formula for the capital required for credit risk is given by:

Capital = Σ_i EAD_i × LGD_i × (DR_{99.9,i} − PD_i)
Where
EADi = the exposure at default for asset i, that is, the amount expected to be owed by the counterparty if it defaults.
LGDi = expected loss given default for asset i (expected proportion of EADi to be lost)
DR99.9i = the default rate at the 99.9th percentile for a large portfolio of assets of category i. This is given by:

DR_{99.9,i} = N[(N^{-1}(PD_i) + √ρ × N^{-1}(0.999)) / √(1 − ρ)]

where N(·) is the cumulative standard normal distribution function, N^{-1}(·) is its inverse, and ρ is the copula correlation.
EL = the expected loss (annual expected credit loss) on the portfolio, defined as:

EL = Σ_i EAD_i × LGD_i × PD_i

The Basel Committee did not consider loan reserves as part of Tier 1 capital, but loan reserves were held against expected losses.
Therefore, the committee decided to make the required capital a function of unexpected losses (losses net of
expected losses). When loan reserves were less than expected losses (EL), the capital was reduced by the shortfall.
Consequently, the Basel Committee was able to state the
loss percentile and the asset correlation (ρ) for each asset
category. Therefore, the contribution of each asset to the
capital was based only on the bank’s estimates of EAD,
LGD, and PD for a particular asset.
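Before turning to the two IRB variants, here is a small sketch of the worst-case default rate calculation above; the PD and ρ values are arbitrary inputs chosen for illustration.

from math import sqrt
from statistics import NormalDist

def worst_case_default_rate(pd: float, rho: float, confidence: float = 0.999) -> float:
    """One-factor Gaussian copula (Vasicek/Gordy) default rate at the given percentile."""
    n = NormalDist()
    return n.cdf((n.inv_cdf(pd) + sqrt(rho) * n.inv_cdf(confidence)) / sqrt(1.0 - rho))

# Illustrative inputs (not from the text): PD = 1%, copula correlation = 0.20.
print(round(worst_case_default_rate(0.01, 0.20), 4))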
There were two types of IRB models:
I. Foundation IRB: The bank provided the PD (probability of default) only, and the Basel accord
stated the values of EAD and LGD for each given type of asset. This method was suitable for
large banks since most large banks had internal rating systems that could be used to
determine PD.
II. Advanced IRB: The bank specified all three variables. Most banks did not use this
method due to the limited availability of EAD and LGD data.
Bank, Corporate, and Sovereign Exposures Under IRB
Basel II assumes that the correlation (ρ) depends on the probability of default (PD), following Lopez's (2004) research:

ρ = 0.12 × [(1 − e^(−50×PD)) / (1 − e^(−50))] + 0.24 × [1 − (1 − e^(−50×PD)) / (1 − e^(−50))]
Lopez's model relates the average asset correlation to the firm's PD and asset size. Looking at the formula above, the average asset correlation decreases as PD increases,
reflecting the fact that default by high-risk borrowers is usually idiosyncratic, whereas lower-risk
borrowers tend to default when the aggregate economy is in distress. The defaults of the safest
borrowers are also largely idiosyncratic, but their default rates (DR) are usually so small that this effect can be
ignored.
The computation of capital for bank, corporate, and sovereign exposures incorporates a maturity
adjustment to account for assets with more than one year of remaining maturity, which usually
remain on the balance sheet beyond the end of the loss-forecasting period and may have declined in credit quality by then. The maturity adjustment is:

MA = (1 + b × (M − 2.5)) / (1 − 1.5 × b)

Where:
b = [0.11852 − 0.05478 × ln(PD)]^2, and
M = the remaining maturity in years.
Now, recall that Basel II expressed the required capital in terms of RWA; the RWA for these exposures is obtained by multiplying the required capital by 12.5.
Example: Calculating RWA under Basel II
An American bank's assets consist of $200 million of BB-rated drawn loans. The MA is estimated to
be 1.25, the probability of default is estimated to be 0.02, the LGD is 40%, and DR99.9 is estimated to be
0.15.
What is the RWA for the bank with regard to the Basel II accord?
Solution
We know that:

Capital = EAD × LGD × (DR99.9 − PD) × MA = 200 × 0.40 × (0.15 − 0.02) × 1.25 = $13 million

RWA = 12.5 × Capital = 12.5 × 13 = $162.5 million
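The sketch below wraps the wholesale IRB calculation (correlation, worst-case default rate, maturity adjustment, capital, and RWA) into one function. It is an illustration of the formulas above rather than a regulatory implementation; the final lines simply reproduce the example using its given DR99.9 and MA.

from math import exp, log, sqrt
from statistics import NormalDist

def irb_rwa(ead: float, pd: float, lgd: float, maturity: float) -> float:
    """Basel II IRB RWA for a bank/corporate/sovereign exposure (sketch)."""
    n = NormalDist()
    x = (1 - exp(-50 * pd)) / (1 - exp(-50))
    rho = 0.12 * x + 0.24 * (1 - x)
    dr = n.cdf((n.inv_cdf(pd) + sqrt(rho) * n.inv_cdf(0.999)) / sqrt(1 - rho))
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    ma = (1 + b * (maturity - 2.5)) / (1 - 1.5 * b)
    capital = ead * lgd * (dr - pd) * ma           # unexpected-loss capital
    return 12.5 * capital                           # RWA = 12.5 x capital

# Reproducing the example with its given DR99.9 = 0.15 and MA = 1.25:
capital_example = 200 * 0.40 * (0.15 - 0.02) * 1.25
print(capital_example, 12.5 * capital_example)      # 13.0 and 162.5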
Retail exposures were treated similarly to the advanced IRB approach, except that there is no maturity
adjustment. Moreover, a set of three correlations is used: ρ = 0.15 for residential mortgages,
ρ = 0.04 for qualifying revolving exposures (such as credit card balances), and, for other retail assets, a correlation
defined as:

ρ = 0.03 × [(1 − e^(−35×PD)) / (1 − e^(−35))] + 0.16 × [1 − (1 − e^(−35×PD)) / (1 − e^(−35))]

Looking at the formula, it is evident that the correlations are lower for retail than for wholesale
exposures.
An American bank's assets consist of $200 million of BB-rated drawn loans. The probability of default is
What is the RWA for the bank with regard to the Basel II accord?
Solution
Recall that retail exposures are treated similarly to the advanced IRB approach, except that there is no maturity adjustment.
A credit substitution approach was utilized for arrangements such as guarantees and credit default swaps. This
method involves substituting the credit rating of the guarantor for that of the obligor in the capital calculation.
This treatment is conservative relative to actual loss experience because a loss occurs only if there is a
double default (by both the guarantor and the borrower). Nevertheless, Basel II assumes low correlations between
wholesale counterparty defaults, and thus the frequency of double defaults is low.
Alternatively, a 2005 Basel Committee amendment allowed the capital calculated without the mitigant to be
adjusted by a factor reflecting the double-default effect.
According to the Basel Committee, operational risk is "the risk that occurs due to inadequate or failed internal processes, people, and systems or from external events."
Basel II implemented three methods of determining the capital required for the operational risk:
I. Basic Indicator Approach: This method computes the capital for operational risk as
15% of the bank's average annual gross income over the past three years, ignoring years that resulted in negative gross income.
II. Standardized Approach: This method is similar to the basic indicator method, but the
multipliers are distinct for each business line, and then the average is calculated.
III. Advanced Measurement Approach (AMA): This method involves using internal models
to compute a one-year, 99.9% VaR (a measure of operational risk losses at the 99.9th
percentile). The operational risk capital is the 99.9% VaR less expected operational losses.
A sample of four business lines, their corresponding multipliers, and gross income (in millions) is given below.
What is the value of the required capital for operational risk under the Basic Indicator Approach and the Standardized Approach?
Solution
Under the Basic Indicator Approach, the bank computes the capital for operational risk as 15%
of the bank's average annual gross income over the past three years, ignoring years that
resulted in negative gross income. Note that the multiplier column is not needed here. Therefore, the
required capital is:

0.15 × [(80 + 10 + 120) / 3] = 10.5 million
Under the Standardized Approach, the calculation is similar to the basic indicator method, but the multipliers
are distinct for each business line. Note that negative capital offsets positive capital within a given year, and a year whose total is negative is ignored. So the required capital is:

(1/2) × [13.6 + 20.6] = 17.10 million
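A sketch of both calculations is shown below. Because the example's business-line table is not reproduced here, the per-year beta-weighted totals for the Standardized Approach are taken directly from the worked figures (13.6, 20.6, and one negative year, whose exact value is a placeholder), and the yearly gross incomes for the Basic Indicator Approach are those used above.

# Basel II operational risk capital: Basic Indicator Approach vs. Standardized Approach.
gross_income = [80.0, 10.0, 120.0]          # total annual gross income, in millions

# Basic Indicator Approach: 15% of average gross income, ignoring negative years.
positive_years = [gi for gi in gross_income if gi > 0]
bia_capital = 0.15 * sum(positive_years) / len(positive_years)       # 10.5

# Standardized Approach: per-year sum of beta x gross income across business lines.
# The yearly totals below come from the worked example; a negative year is ignored.
tsa_yearly_totals = [13.6, 20.6, -4.0]      # -4.0 is a placeholder for the ignored negative year
positive_tsa = [t for t in tsa_yearly_totals if t > 0]
tsa_capital = sum(positive_tsa) / len(positive_tsa)                  # 17.10
print(bia_capital, round(tsa_capital, 2))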
Banks that choose to use the AMA approach are required to approximate the distribution of
operational risk losses in seven classes, including estimates of both the frequency and the severity of losses.
Although banks use different AMA approaches, the most used methods are:
I. Parametric and Monte Carlo Approach: Probability distributions are fitted to the frequency of
losses (such as Poisson) and their severity (such as Weibull). These distributions are then used
to simulate a large number of loss observations, from which the 99.9th percentile loss is
obtained.
II. Scenario Analysis: A relatively small number of scenarios in which losses are
experienced is generated, and the operational loss is measured in each scenario for each class of loss event.
Scenario analyses have the merit of creating informative, forward-looking scenarios. However, the data used is limited, and thus generating the 99.9th percentile loss is a
challenging task.
Minimum capital requirements for insurance companies are present in many countries. However,
there are no international standards, although the United States and the European Union have
requirements that anticipated some features of Basel II. In the U.S., capital requirements on risky
assets depend on the ratings assigned by the NAIC to each asset, in addition to capital
requirements on the liabilities. Insurance regulation in the U.S. is at the state level, and most states
follow the NAIC's model rules. Separate requirements apply to insurance
companies in the European Union; the first capital requirement at the EU level was termed Solvency I.
Solvency II is similar to Basel II in many respects. For instance, the capital requirements are based on three pillars:
I. Quantitative requirements.
II. Supervisory review.
III. Disclosure and transparency.
Moreover, underwriting risk, market risk, credit risk, and operational risk are all taken into
consideration, with underwriting risk further segmented into risks originating from life, property and casualty, and
health insurance.
Solvency II also borrows some Basel II elements. For instance, it requires controls tied to capital levels. In
case an insurance company breaches the Solvency II minimum capital requirement (MCR),
supervisors are advised to stop the stressed firm from accepting new policies or to put it into
resolution. When a firm is put into resolution, it can either be liquidated or sold to a stronger firm.
The required buffer above the minimum capital requirement (MCR) is provided by the Solvency
Capital Requirement (SCR), which is greater than the MCR. In case the SCR requirement is breached, the
concerned insurance company should provide a detailed recapitalization plan, and the supervisor might
restrict its activities until the breach is remedied.
It is worth noting that Solvency II uses both standardized and internal model-based approaches to
compute the SCR. However, the internal models used must take into consideration the following factors:
iii. The risk assessment must be calibrated to the target criteria set by the regulator.
The Solvency II capital requirements can be met through a combination of Tier 1, Tier 2, and Tier 3 capital.
Practice Question
The Basel Committee defined operational risk as the risk that occurs due to inadequate
or failed internal processes, people, and systems or from external events. Which of the
following approaches for determining operational risk capital is incorrectly described?
A. Basic Indicator Approach: Computes the capital for operational risk as 15% of
the bank's average annual gross income over the past three years while ignoring years
that resulted in negative gross income.
B. Standardized Approach: Computes the bank's average annual gross income over the
past three years while ignoring years that resulted in negative gross income, using a
single multiplier across all business lines.
C. Advanced Measurement Approach (AMA): Computes the required capital for
operational risk as the 99.9% VaR measured using internal models less expected operational
losses.
The correct answer is B.
The standardized approach computes a bank's average annual gross income over the past
three years while ignoring years that resulted in negative gross income, using
different multipliers for each business line. Options A and C are incorrect choices because those
approaches are accurately described.
Reading 120: Solvency, Liquidity and Other Regulation After the Global
Financial Crisis
After completing this reading, you should be able to:
Describe and calculate the stressed VaR introduced in Basel 2.5, and calculate the market risk capital charge.
Explain the process of calculating the incremental risk capital charge for positions held in a bank's trading book.
Describe the comprehensive risk (CR) capital charge for portfolios of positions that are sensitive to correlations between default risks.
Required Tier 1 equity capital, total Tier 1 capital, and total capital.
Describe the motivations for and calculate the capital conservation buffer and the
countercyclical buffer, including special rules for globally systemically important banks (G-
SIBs).
Describe and calculate ratios intended to improve liquidity risk management, including the liquidity coverage ratio (LCR) and the net stable funding ratio (NSFR).
Describe the mechanics of contingent convertible bonds (CoCos) and explain the motivations for banks to issue them.
Explain motivations for the “gold plating” of regulations and provide examples of legislative
and regulatory reforms that were introduced after the 2007–2009 financial crisis.
The 2007-2009 financial crisis exposed flaws in, and informed the establishment of, solvency and
liquidity regulations. Moreover, it exposed market practices and product structures that were not
able to withstand stressful periods. In response to these flaws, global regulators came up with
stricter regulations and supervision.
The Financial Stability Forum was a body that frequently conducted research. It was later transformed
into the Financial Stability Board (FSB). The FSB consists of representatives from different sectors,
such as central banks, prudential regulators, and finance ministries, and it became the body that
approves international standards. After the crisis, the FSB concentrated on other issues, while
other institutions, such as the Basel Committee, retained their independence and authority.
The Global Financial Crisis of 2007-2009 proved that the minimum capital charges under the Market
Risk Amendment were not sufficient to address trading-book risks. This was evident in that, during the
crisis, the market prices of financial assets fell sharply, hedging strategies failed, and there were doubts
about securitizations.
Consequently, the Basel Committee reacted by introducing three main changes, collectively known as Basel 2.5, by the end of 2011:
I. The committee introduced the incremental risk charge (IRC).
II. The committee improved VaR computations to include a stressed VaR component.
III. The committee introduced the comprehensive risk (CR) charge for the correlation book.
Numerous banks calculated capital based on the Market Risk Amendment using historical simulation.
Remember that the Market Risk Amendment involved calculating a 1-day VaR by drawing the
daily changes from recent history and then scaling the result by √10. However, VaR slowly
decreases during low-volatility periods because the historical observations are small in magnitude.
Moreover, during high-volatility periods (such as 2007 for the majority of assets), historical VaR
was slow to react because analysts were still drawing historical observations from the preceding low-volatility period.
Consequently, the Basel Committee introduced stressed VaR. Instead of drawing daily
observations from recent history, a bank was advised to pick a one-year (250-day) period
from the last seven years that is most stressful for its current portfolio. The VaR for this period
would be relatively high and would remain unchanged unless a period of low volatility persisted for seven years.
MR2.5 = max(VaR_(t-1), m × VaR_avg) + max(SVaR_(t-1), m_s × SVaR_avg)

Where:

VaR_(t-1) and VaR_avg are the traditional 10-day, 99% VaR computed from the previous day and averaged over the preceding 60 business days, respectively.

SVaR_(t-1) and SVaR_avg are computed over the same horizons as above but using data from the most stressful one-year period.

m and m_s are the multipliers, which, as in the 1996 amendment, must be at least equal to 3.

It is worth noting that, because stressed VaR can never be smaller than ordinary VaR, the total market risk charge under Basel 2.5 is at least twice the charge under the original market risk amendment when the multipliers are equal.
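To make the mechanics concrete, here is a minimal Python sketch of the Basel 2.5 charge, assuming purely hypothetical VaR figures and multipliers set at the regulatory minimum of 3.

    # Illustrative sketch of the Basel 2.5 market risk charge (MR 2.5).
    # All figures are hypothetical; m and m_s are set to the minimum of 3.

    def basel_2_5_charge(var_prev, var_avg, svar_prev, svar_avg, m=3.0, m_s=3.0):
        """max(VaR_{t-1}, m * VaR_avg) + max(SVaR_{t-1}, m_s * SVaR_avg)."""
        return max(var_prev, m * var_avg) + max(svar_prev, m_s * svar_avg)

    # 10-day, 99% VaR figures in USD million (hypothetical).
    charge = basel_2_5_charge(var_prev=12.0, var_avg=10.0,
                              svar_prev=30.0, svar_avg=25.0)
    print(charge)  # max(12, 30) + max(30, 75) = 105.0

Because the stressed VaR term dominates, the multiplied average stressed VaR drives the total in this example, which is typical of the way Basel 2.5 raised trading-book capital.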
The incremental risk charge (IRC) was first proposed in 2005 in response to regulatory arbitrage between the banking book and the trading book. Specific risk charges were meant to cover default risk and some idiosyncratic risks. However, by the early 2000s, banks had realized that, even in the presence of specific risk charges, most credit exposures attracted lower capital requirements in the trading book than in the banking book. Consequently, banks placed illiquid, default-prone assets in the trading book.

As a response to this drawback, the Basel Committee proposed adding an incremental default risk charge, to be computed using either:
I. An internal model of default risk calibrated to the 99.9th percentile at a one-year horizon; or
II. In the absence of an internal model, a standardized approach similar to that applied in the banking book.
Towards the end of the crisis, the committee realized that portfolio losses were mainly associated with credit events arising from changes in ratings, credit spreads, or liquidity, rather than defaults. The proposal was therefore amended into the incremental risk charge (IRC): the 99.9th percentile at a one-year horizon was maintained, but banks were also required to estimate losses due to rating downgrades. The credit quality of the portfolio is held approximately constant by assuming that a downgraded or defaulted position is replaced with a position carrying the same rating the original position had before the downgrade. The assumed replacement (liquidity) horizon differs across positions according to their liquidity, but may not be less than three months.
Gordy's (2003) model in Basel II assumes that the correlation parameter is constant across obligors (though not across asset classes) and over time. This assumption works reasonably well for portfolios of ordinary debt instruments, which is why it underpins banking-book capital. However, it does not apply to the correlation book: securitizations and derivatives written on securitizations.

The correlation book puts a portfolio of assets into a special purpose vehicle and issues tranche liabilities that differ in seniority and, therefore, in their exposure to credit losses. Correlations change over time and, in this context, change the value of the tranches. For instance, AAA-rated tranches had a low default rate in the pre-crisis period, but their default probabilities rose sharply during the crisis, and their values fell accordingly.
To address this issue, the Basel Committee substituted the incremental risk charge (IRC) and the specific risk charge with the comprehensive risk (CR) charge for the correlation book. Under this new approach, banks may use a standardized approach that depends only on the ratings of the securitization tranches.
Note that the percentages applied under this approach are not risk weights but rather capital as a fraction of exposure. Moreover, because re-securitizations are more vulnerable to correlation changes, they attract higher capital requirements, and tranches rated BB and below attract the highest capital charges.
The Basel Committee proposed that banks could also use an internal model to compute the CR charge, but only after approval from their supervisors. The model-based charge may not fall below a floor expressed as a fraction of the standardized charge, and it is considerably more complicated than the standardized approach.
Apart from the weaknesses of Basel II stated earlier, the global financial crisis revealed further flaws:

I. During the crisis, market participants were mainly concerned with tangible Tier 1 common equity capital, which was the capital that could cover losses and keep banks operating as a going concern. However, the Basel definition of capital was poorly suited to this purpose.

II. The official sector believed that distress at some banks affected society far more than distress at others, and that those banks should have a greater ability to withstand distress. In other words, systemically important financial firms needed to be more resilient than other firms.

III. The risk-based capital ratios were presumed to be vulnerable to gaming through the calculation of risk-weighted assets. For this reason, market participants concentrated on tangible common equity measured against unweighted assets, that is, on leverage ratios.

IV. Banks did not hold enough capital to remain solvent against the maximum plausible losses. Because banks needed to keep operating as a going concern, they required substantial capital even after covering losses, and one of the few remaining (and unpopular) sources of such capital was the government.

V. Banks' liquidity buffers sometimes failed because their liquid reserves were too inadequate
to cope with withdrawals. Wholesale funding, in particular, proved unstable and prone to runs.

VI. More capital was needed to cover counterparty credit risk, a lesson driven home by Lehman's failure.
To address the above flaws, the BCBS published proposals in December 2010 as "Basel III: International framework for liquidity risk measurement, standards and monitoring" and in June 2011 as "Basel III: A global regulatory framework for more resilient banks and banking systems." The main components of Basel III are discussed below.
Basel III Capital Definition
Basel III removed Tier 3 capital and divided Tier 1 capital into Core Tier 1 (common equity Tier 1) capital and Additional Tier 1 capital, the distinction signalling that Core Tier 1 capital is of higher quality than Additional Tier 1 capital.

Core Tier 1 capital must be at least 4.5% of RWA. The total Tier 1 capital (Core Tier 1 plus Additional Tier 1 capital) must be at least 6% of RWA. The total capital requirement (Tier 1 plus Tier 2) was unchanged at a minimum of 8% of RWA.

I. Core Tier 1 capital consists of:
Common equity.
Retained earnings.
A limited amount of minority interest and accumulated unrealized gains and losses.
Goodwill and other intangibles, deferred tax assets, and any shortfall of reserves relative to IRB expected losses are deducted from Core Tier 1 capital.

II. Additional Tier 1 capital includes perpetual instruments that are subordinated to depositors and general creditors and are callable only after five or more years.

III. Tier 2 capital was structured to cover losses after failure, thus protecting depositors and other senior creditors. It includes instruments subordinated
to depositors with an original maturity of five or more years, as well as general loan-loss reserves that have not been allocated to absorb losses on specific exposures.
The general feature of the Basel III accord is that, compared to Basel II, its capital requirements are higher: the minimum ratios were increased, and the definition of allowable capital was restricted.

Before Basel III, the Basel Committee expressed minimum capital ratios as percentages of RWA. In the post-crisis period, however, many observers noted that RWA had underestimated the risks banks faced, leaving them overleveraged. Although flaws in the calculation of RWA were addressed, future errors could not be ruled out. Moreover, market participants concentrated on simple ratios of equity to unweighted assets when judging the soundness of banks.
Reacting to these concerns, the Basel Committee proposed a "simple" leverage ratio capital requirement as a supplement to the risk-based requirements. This leverage ratio requires banking organizations to maintain a ratio of Tier 1 capital to leverage exposure of at least 3%.
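To illustrate how the risk-based minimums and the 3% leverage ratio operate side by side, the following sketch checks an invented balance sheet against both sets of requirements; all figures are hypothetical.

    # Hypothetical check of Basel III risk-based minimums and the leverage ratio.
    core_tier1 = 60.0            # Core (common equity) Tier 1 capital
    additional_tier1 = 15.0      # Additional Tier 1 capital
    tier2 = 20.0                 # Tier 2 capital
    rwa = 1_000.0                # risk-weighted assets
    leverage_exposure = 2_400.0  # on- plus off-balance-sheet exposure measure

    tier1 = core_tier1 + additional_tier1
    total_capital = tier1 + tier2

    print(core_tier1 / rwa >= 0.045)          # Core Tier 1 >= 4.5% of RWA -> True (6.0%)
    print(tier1 / rwa >= 0.06)                # Total Tier 1 >= 6% of RWA  -> True (7.5%)
    print(total_capital / rwa >= 0.08)        # Total capital >= 8% of RWA -> True (9.5%)
    print(tier1 / leverage_exposure >= 0.03)  # Leverage ratio >= 3%       -> True (3.125%)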
Note that the leverage exposure incorporates both on-balance-sheet and off-balance-sheet assets. The committee's treatment of off-balance-sheet assets differs somewhat from IFRS and GAAP accounting principles. Even so, it is specified in sufficient detail to allow comparison
across different countries.
The Financial Stability Board (FSB), in cooperation with the International Association of Insurance Supervisors (IAIS), publishes the list of globally systemically important banks (G-SIBs) and globally systemically important insurers (G-SIIs). In some cases, national authorities designate additional banks as systemically important.

G-SIBs, G-SIIs, and certain other firms are classified as systemically important financial institutions (SIFIs). To determine whether a firm qualifies as a SIFI, the FSB assesses its size, complexity, interconnectedness, and other factors. A firm is regarded as a SIFI if its failure or distress would be amplified across the whole financial system or the real economy, for instance through losses imposed on counterparties throughout the financial markets.

SIFIs are often regarded as "too big to fail." Still, the main objectives of SIFI regulation are to lower the probability of failure and to ensure that, if a SIFI does fail, it does not disrupt the financial system or the real economy. Moreover, in the event of failure, shareholders are to be wiped out and creditors are to bear losses, yet the SIFI should continue operating and recapitalize without government assistance. Lastly, SIFIs are subject to a wider range of supervision and regulation.
Buffers
By early 2019, Basel had specified requirements for capital above the minimum fractions of RWA:

I. A capital conservation buffer (CCB) of 2.5% of RWA, to be met with Core Tier 1 capital.

II. A countercyclical capital buffer (CCyB) that varies between 0% and 2.5% of RWA at the discretion of the national authorities.

III. An additional G-SIB requirement that depends on the organization's score under the Committee's method for identifying G-SIBs. The add-ons are 1%, 1.5%, 2%, 2.5%, and 3.5% of RWA. The rationale for this buffer is similar to that of the capital conservation buffer (CCB), except that it recognizes the impact of distress at a G-SIB on society.
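As a rough illustration of how these buffers stack on top of the 4.5% Core Tier 1 minimum, the sketch below totals the requirement for a hypothetical G-SIB; the CCyB rate and the G-SIB surcharge used here are assumptions chosen for the example.

    # Illustrative total Core Tier 1 requirement including buffers (assumed values).
    minimum_cet1 = 0.045    # Basel III Core Tier 1 minimum
    ccb = 0.025             # capital conservation buffer
    ccyb = 0.01             # countercyclical buffer set by national authorities (assumed 1%)
    gsib_surcharge = 0.015  # G-SIB add-on (assumed 1.5% bucket)

    total_requirement = minimum_cet1 + ccb + ccyb + gsib_surcharge
    print(f"{total_requirement:.1%}")  # 9.5% of RWA before distribution restrictions bind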
Motivations for Capital Conservation Buffers
The rationale for the capital conservation buffer (CCB) is similar to that of the US Prompt Corrective Action (PCA) regulation of 1991, which stipulates that a bank with ratios nearing the minimums should be subjected to more stringent supervision in a bid to return it to well-capitalized status. The restrictions imposed by the committee when a bank's capital falls into the buffer range are restrictions on dividends and other discretionary distributions.
The rationales for the CCyB were to provide an instrument for the macroprudential control of overheating and to address the cost of capital. In the overheating case, a higher capital requirement limits the supply of credit by banks, cooling the credit market, lowering the peak of the credit cycle, and thus reducing the frequency and severity of financial crises. The demerit of this rationale is that the calculation of the CCyB for a bank with international activities is complicated because the CCyB differs across nations: such a bank must apply a combined CCyB computed as a weighted average of the buffers in the jurisdictions where it has credit exposures.

In the case of the cost of capital, the rationale is that the cost of increasing a bank's capital ratio is lower in good times than in bad times. Financial stability can therefore be increased at a lower cost by raising the CCyB during good times and lowering it during bad times.
For the G-SIB buffer, the rationale is similar to that of the CCB, except that it recognizes the impact of distress at a G-SIB on society. Higher buffers are therefore set to lower the likelihood of failure.
The BCBS finalized the reforms in December 2017, including revisions of the previous accord. These revisions include:
I. The standardized approach for credit risk.
II. The internal ratings-based (IRB) approaches for credit risk.
III. The operational risk framework.
IV. The CVA framework for counterparty credit risk.
Additionally, an output floor was introduced. It ensures that RWA calculated under internal ratings-based and other models cannot fall below 72.5% of the RWA calculated under the standardized approaches.
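A minimal sketch of how the output floor operates is shown below; the RWA figures are hypothetical.

    # Illustrative application of the 72.5% output floor (hypothetical figures).
    rwa_internal_models = 600.0  # RWA from internal (e.g., IRB) models
    rwa_standardized = 1_000.0   # RWA from the standardized approaches

    floored_rwa = max(rwa_internal_models, 0.725 * rwa_standardized)
    print(floored_rwa)  # 725.0 -> the floor binds because 600 < 725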
The main revisions to the standardized approach for credit risk include the following:

I. Risk weights have been adjusted so that one set of weights is based on external credit ratings and the other on a standardized credit risk assessment (grades A, B, or C). The credit-assessment weights are used where a jurisdiction does not allow external ratings to be used for capital purposes. The rating-based weights range from 20% for the highest-rated exposures to 150% for B- and below.
II. Covered bonds that meet specific criteria carry a risk weight of between 10% and 100%.
III. Corporate bonds carry risk weights of 20%, 50%, 75%, 100%, or 150%, depending on their ratings. In jurisdictions where ratings are not allowed, a weight of 65% applies to investment-grade exposures and 100% to non-investment-grade exposures. Moreover, favorable treatment is accorded to exposures to small and medium-sized enterprises.
IV. Specialized lending is split into several buckets, such as project finance and object finance, each with its own risk weights.
V. Equities carry a 400% risk weight, while subordinated debt and other capital instruments carry a 150% risk weight.
VI. New risk weights were embedded in real estate exposures, tied to the loan-to-value ratio and type of loan.
VII. New credit conversion factors were created for the range of off-balance-sheet exposures.
VIII. Default was defined to include payments past due 90 days, non-accrual assets, write-offs in anticipation of default, sale of an asset at a loss, distressed restructuring, bankruptcy, and failure to pay without recourse to collateral.
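A minimal sketch of how such standardized risk weights translate into RWA is shown below; the exposures and the condensed rating-to-weight map are illustrative assumptions, not the full Basel tables.

    # Simplified standardized-approach RWA calculation (hypothetical exposures and a
    # condensed rating-to-weight map; the actual Basel tables are more granular).
    corporate_weights = {"AAA": 0.20, "A": 0.50, "BBB": 0.75, "BB": 1.00, "B": 1.50}

    exposures = [
        ("AAA", 100.0),   # senior corporate bond
        ("BBB", 200.0),
        ("B",    50.0),
    ]

    rwa = sum(amount * corporate_weights[rating] for rating, amount in exposures)
    print(rwa)  # 100*0.20 + 200*0.75 + 50*1.50 = 245.0
    capital_requirement = 0.08 * rwa
    print(capital_requirement)  # 19.6 under the 8% total capital minimum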
The main revisions to the IRB approach include the following:

I. Banks should use the IRB approach for all assets in a given asset category and cannot cherry-pick some
exposures to be covered under the standardized approach and others under IRB.
II. Input floors are provided for the risk parameters: corporates have a 25% minimum LGD on unsecured exposures, while retail exposures have minimum LGDs of 50% on credit cards (qualifying revolving exposures) and 30% on other unsecured retail exposures.
III. Minimum unexpected loss (UL) risk weights apply to specialized lending, and the treatment of collateral has been revised.
IV. IRB is not to be used for large corporates or banks, where modeling is challenging, as shown by the limited default data available for such exposures.
V. The exposure classes include corporate, sovereign, bank, retail, and equity. Retail is further divided into three subtypes, and specialized lending is classified into project finance, object finance, and other sub-categories such as commodities finance.
A standardized approach replaces the Basel II approaches to calculating operational risk capital. Its main building blocks are the business indicator (BI) and the business indicator component (BIC). The operational risk capital requirement equals the BIC multiplied by the internal loss multiplier (ILM), where the BIC is obtained by applying regulatory marginal coefficients to the BI. The ILM is a scaling factor based on a bank's historical internal losses.

BI = ILDC + SC + FC

Where:
ILDC = the interest, leases, and dividend component;
SC = the services component; and
FC = the financial component = Abs(Net P&L Trading Book) + Abs(Net P&L Banking Book).
ILM = ln[exp(1) − 1 + (LC / BIC)^0.8]
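Under purely illustrative assumptions about the component values, the sketch below assembles the BI from its three components and evaluates the ILM formula above.

    import math

    # Hypothetical BI components, in EUR billion (three-year averages).
    ildc = 4.0                # interest, leases and dividend component
    sc = 2.5                  # services component
    fc = abs(1.2) + abs(0.8)  # |net P&L trading book| + |net P&L banking book|

    bi = ildc + sc + fc
    print(bi)  # 8.5

    # ILM = ln(exp(1) - 1 + (LC / BIC)^0.8), with hypothetical LC and BIC values.
    lc, bic = 1.5, 1.2
    ilm = math.log(math.exp(1) - 1 + (lc / bic) ** 0.8)
    print(round(ilm, 3))  # about 1.07 (> 1 because LC > BIC)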
Financial institutions that appear solvent can still be exposed to runs when depositors and counterparties withdraw funds. In such instances, the withdrawals exceed what the bank's liquid assets can cover. Regardless of the cause of the run, authorities want to resolve it without involving the government.
The LCR is structured to give banks and authorities a month to address a crisis by selling liquid assets. The concept behind the LCR is that if a bank holds liquid assets (assets that can be sold quickly at reasonable prices) whose total value exceeds its projected liquidity needs, it should be able to survive a 30-day period of severe stress. Formally, the LCR is the ratio of high-quality liquid assets (HQLA) to total net cash outflows over the next 30 calendar days, and it must be at least 100%.
The HQLA figure is determined by classifying assets and applying haircuts based on the likely availability of buyers at prices near normal-times values. For instance, deposits at central banks and government-issued securities with a 0% risk weight carry no haircut, whereas equities carry a 50% haircut. Individual mortgage loans are excluded from HQLA altogether.
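A minimal sketch of the LCR computation, using the haircuts mentioned above and hypothetical balances (the 30-day net outflow figure is an assumption for illustration):

    # Illustrative LCR computation with hypothetical balances (in USD million).
    central_bank_deposits = 150.0  # no haircut
    govt_securities_0rw = 250.0    # 0% risk-weight sovereign debt: no haircut
    listed_equities = 100.0        # 50% haircut

    hqla = central_bank_deposits + govt_securities_0rw + 0.50 * listed_equities
    net_cash_outflows_30d = 380.0  # assumed stressed net outflows over 30 days

    lcr = hqla / net_cash_outflows_30d
    print(round(lcr, 3))  # 450 / 380 = 1.184, i.e. above the 100% requirement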
The NSFR utilizes a one-year time horizon and focuses on the stability of a bank's funding beyond the immediate stress period: the available amount of stable funding must be at least equal to the required amount of stable funding.

The available amount of stable funding is the sum, across several liability categories, of the amount in each category multiplied by its available stable funding (ASF) factor (which can be thought of as a haircut). Note, however, that the categories differ from those of the LCR. The required amount of stable funding is calculated by multiplying the amount in each asset category by its required stable funding (RSF) factor. The RSF factor for liquid assets is lower than that for illiquid assets.
The Bank of Afrika has liabilities of USD 800 million of stable retail deposits with no more than six months remaining to maturity. The bank has USD 400 million of 4-month certificates of deposit, with one-quarter of the amount maturing each month. It also has USD 300 million of 8-year senior bonds, none of which mature in the next year, and USD 200 million of common equity. Each of these liabilities attracts its own ASF factor.

On the assets side, the bank has USD 200 million of cash, USD 200 million of debt issued by its own sovereign, USD 200 million of corporate debt securities rated BBB held in the trading account, and USD 800 million of loans to businesses with eight months remaining to maturity. The respective RSF factors are 0%, 10%, 50%, and 80%.
What is the value of the net stable funding ratio (NSFR)?
Solution
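The ASF factors for the liabilities are cut off in the source text, so the sketch below assumes typical Basel III values (95% for stable retail deposits, 50% for the short-dated certificates of deposit, and 100% for the long-term bonds and the equity). With those assumptions and the stated RSF factors, the NSFR works out as follows.

    # NSFR sketch for the Bank of Afrika example. The ASF factors are assumptions
    # (the source text truncates them); the RSF factors are those stated above.
    liabilities = [  # (amount in USD million, assumed ASF factor)
        (800.0, 0.95),  # stable retail deposits, < 6 months to maturity
        (400.0, 0.50),  # 4-month certificates of deposit
        (300.0, 1.00),  # 8-year senior bonds
        (200.0, 1.00),  # common equity
    ]
    assets = [  # (amount in USD million, RSF factor from the text)
        (200.0, 0.00),  # cash
        (200.0, 0.10),  # sovereign debt
        (200.0, 0.50),  # BBB corporate debt in the trading account
        (800.0, 0.80),  # business loans, 8 months remaining maturity
    ]

    asf = sum(amount * factor for amount, factor in liabilities)  # 1,460
    rsf = sum(amount * factor for amount, factor in assets)       # 760
    print(round(asf / rsf, 2))  # NSFR of about 1.92, comfortably above 100%

Under these assumed ASF factors, the NSFR is roughly 1.92; with different factor assumptions the ratio would change, but the mechanics are the same.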
Banks are required to compute the credit valuation adjustment (CVA) for each derivative counterparty. The CVA is defined as the difference between the value of the counterparty's portfolio assuming no default risk and the value of the actual portfolio. Intuitively, the CVA rises with the counterparty's credit spread. The CVA also varies with the market value of the portfolio, and changes in the CVA flow through to profit and loss.
Banks remain prone to failure even after all the Basel Accords and other reforms. To reduce the effects of such failures, the FSB agreed in 2014 that national resolution regimes for G-SIBs should possess 12 key attributes. In addition, the FSB resolved that each G-SIB should have sufficient total loss-absorbing capacity (TLAC) so that, in resolution, recapitalization can be achieved by bail-in. That is, specified wholesale debt liabilities are written down or converted to equity. This differs from contingent convertible bonds, whose conversion terms are spelled out in the bond indenture and which convert while the bank still appears solvent. Bail-ins, by contrast, are
governed by national law, and the resolution authority determines their structure after taking control of the bank.
CoCos
Contingent convertible bonds (CoCos) are debt instruments that convert into equity once a specified trigger is breached. Ordinary convertible bonds, by contrast, are issued by firms that want to avoid the dilution of issuing equity before their performance improves; at the holder's option, such bonds can be converted into equity when the firm's share price rises above the conversion price.

CoCos cause a bank's equity to rise when distress occurs, as determined by triggers written into the indenture rather than at the option of the holder. With CoCos, equity is therefore created precisely when the bank most needs it.

Typical triggers include the ratio of Core Tier 1 capital to RWA falling below a specified level, or the bank's main regulator declaring the bank non-viable. CoCos can be included in Additional Tier 1 capital if the trigger is set at 5.125% of RWA or higher; otherwise, they are included in Tier 2.

CoCos are debt instruments when issued, but their holders receive lower returns than equity holders when the bank performs well, while bearing losses almost as severe as those of equity holders when the bank fails. For this reason, CoCos appear expensive to the banks that issue them. However, the accounting advantage of CoCos lies in the fact that they are not recorded as equity until they are converted, so a bank can report a higher return on equity.
Living Wills
Some countries require G-SIBs to prepare detailed resolution plans ("living wills") that set out, among other critical issues, how they would recapitalize themselves when in distress, how they would fund themselves, and how they would continue operating as a going concern even if subsidiaries fail.
Applications of Basel
The Basel Accords (I, II, and III) achieved a significant level of uniformity across countries, but some key differences remain. For instance, there are differences in accounting standards, bankruptcy laws, and regulations, and, even with an agreement in place in Basel, some jurisdictions apply the standards with local modifications.

Participating countries may not impose more lenient domestic laws and regulations on their globally active banks than the Basel minimums, but they may impose requirements that are higher than the Basel Accord (so-called "gold plating"). This is beneficial for the Basel negotiations because it grants freedom to those countries that want more stringent standards to apply them domestically.
Basel expects that on top of its minimum standards, each country has the mandate to oversee banks
and take appropriate actions to ensure that they have sufficient capital and liquidity and efficient risk
management and governance. For instance, in the US, supervised stress tests based on supervisory
models and scenarios are conducted in a bid to make sure that banks have a capital and liquidity
planning process, risk management, and sufficient buffers to allow compliance with minimum capital
and liquidity standards, even when they (banks) are undergoing a stressful period.
The post-crisis legislative and regulatory reforms include:

I. The FSB promulgated guidelines for better compensation practices after it was realized that pre-crisis compensation at large banks was largely independent of risk-taking, encouraging excessive risk-taking.
II. T he capacity to execute macroprudential policy was added through institutional reforms in
some nations (where legal authority was unavailable). For example, in the UK, the Financial
Policy Committee was formed at the Bank of England with significant authority to take
macroprudential policy actions and recommend others to parliament.
III. The US Volcker Rule (part of the Dodd-Frank Act) restricts proprietary trading and investment in hedge funds and private equity at deposit-taking firms. The main rationale is that banks should not be allowed to speculate while funded by insured depositors. The Volcker Rule is hard to implement because it is difficult to identify the purpose of a given trade, for example to distinguish proprietary trading from market-making or hedging.
IV. In the US, the Securities and Exchange Commission formed the Office of Credit Ratings to provide limited oversight of rating agencies, since pre-crisis ratings, particularly of structured products, had proved unreliable.
VI. In the US, large banks were mandated to have board-level risk committees with at least one member having extensive risk management experience at a large financial firm.
VII. Issuers of securitizations in the United States and the European Union were required to retain at least 5% of each tranche in order to better align the incentives of issuers with those of investors.
VIII. Mortgage lenders in the US were mandated to assess whether borrowers can service the loans they take out, with legal repercussions for faulty determinations of borrowers' repayment ability.
IX. In the US and the EU, some OTC derivatives must be traded on swap execution facilities (SEFs), which are electronic platforms that promote price transparency. Moreover, standardized derivatives between financial institutions must be cleared through central counterparties (CCPs).
Practice Question
Which of the following statements is correct about the stressed VaR in Basel 2.5?

A. Stressed VaR is calculated by multiplying the 1-day VaR from the recent daily variation in values by √10.

B. Stressed VaR is drawn from the one-year period within the most recent seven years that was most stressful for the bank's current portfolios.

C. Stressed VaR is drawn from the one-year period within the most recent ten years that was most stressful for the bank's current portfolios.

Solution

The correct answer is B.

A bank was required to identify a one-year (that is, 250 trading days) period from the latest seven years that was most stressful for its current portfolios.

Option A is incorrect: This describes the method of calculating VaR under the original market risk amendment.

Option C is incorrect: Basel 2.5 required banks to identify one year from the latest seven years (not ten years) that was most stressful for its current portfolios.
Reading 121: High-level Summary of Basel III Reforms
After completing this reading, you should be able to:

Explain the motivations for revising the Basel III framework and the goals and impacts of the December 2017 reforms.

Summarize the December 2017 revisions to the Basel III framework in the following areas: the standardized approach to credit risk, the internal ratings-based approaches, CVA risk, operational risk, and the leverage ratio.

Describe the revised output floor introduced as part of the Basel III reforms and the approaches to be used when calculating the output floor.
The Basel framework is part of a raft of measures introduced following high-impact financial crises in recent years. The overall goal is to strengthen the banking system and avoid systemic vulnerabilities.

The initial phase of the Basel III framework was announced in 2010. It focused on the following objectives:

To increase the quality of bank regulatory capital available to cover unexpected losses. Minimum capital requirements were also raised.
To constrain banks' leverage and hence avoid a build-up of debt that could destabilize the financial system.

To improve liquidity by requiring banks to hold liquid assets sufficient to see them through a period of stress.

To limit procyclicality by requiring banks to build up buffers, such as retained earnings, in good times that can be drawn down in bad times.
The Basel III reforms announced in 2017 sought to complement the initial phase of the Basel III reforms. In particular, they sought to restore credibility in the calculation of risk-weighted assets and to improve the comparability of banks' capital ratios by aiming:

To improve the standardized approaches for credit risk, credit valuation adjustment (CVA) risk, and operational risk.

To restrict the use of internal models by placing limits on the inputs used to calculate capital requirements under the internal ratings-based (IRB) approaches.

To replace the existing Basel II output floor with a more robust, risk-sensitive floor built on the revised standardized approaches.
Credit risk is undoubtedly the biggest source of risk for banks. As such, it contributes the largest share of banks' risk-weighted assets. The December 2017 revisions to the standardized approach for credit risk seek:
To further limit reliance on external credit ratings by introducing additional due diligence requirements.

To introduce more risk-sensitive approaches for real estate exposures that go beyond the widely applied flat risk weight.

To further break down retail exposures into smaller, distinct components that can be risk-weighted more granularly.

To align definitions with the internal ratings-based (IRB) approach by introducing new, more granular exposure classes.
Under the IRB approach, banks are allowed to use their internal rating systems, conditional on supervisory approval, under one of two variants:

The advanced IRB approach (i.e., banks use their internal estimates of risk parameters such as the probability of default (PD), the exposure at default (EAD), and the loss given default (LGD)).

The foundation IRB approach (i.e., banks use only their internal estimates of PD).
The main motivation behind the changes to the credit risk IRB approach is to introduce capital requirements that enhance comparability across banks and to address the lack of robustness in modeling certain exposures. The changes include the following:

The haircuts applied to non-financial collateral used to secure exposures have been increased. For unsecured exposures, the LGD has been reduced from 45% to 40%.
Banks no longer have the option to use the advanced IRB approach for certain asset classes, particularly exposures to banks and large corporates, where there are insufficient data to model the risk parameters reliably. Instead, they are now required to use the foundation IRB approach, which
prescribes supervisory values for the LGD and EAD.

Where the advanced IRB approach remains available, minimum input floors have been introduced for the bank-estimated parameters (PD, LGD, and EAD).
In the most recent global financial crisis (2007/2008), banks suffered huge losses resulting from changes in the credit valuation adjustment (CVA) on derivative contracts. In the aftermath of the crisis, the committee enhanced the CVA framework. Objectives include:

To enhance risk sensitivity: The revised framework takes the exposure component of CVA risk, as well as the risk of associated hedges, into account.

To enhance the robustness of the CVA framework: CVA risk is complex. It captures changes in counterparty credit spreads and other market risk factors. The updated guidelines remove the internally modeled approach and instead emphasize the use of standardized and basic approaches. In addition, banks with minimal engagement in derivative transactions can use their counterparty credit risk (CCR) capital requirement as a proxy for their CVA charge: a bank whose non-centrally cleared derivatives have an aggregate notional amount of EUR 100 billion or less may calculate its CVA capital charge as a simple multiplier of its CCR charge.

To improve the consistency of the CVA framework: The standardized and basic approaches of the revised CVA framework have been designed to be consistent with the revised market risk framework.
The operational risk capital requirement can be summarised as follows:

ORC = BIC × ILM

The business indicator component (BIC) is obtained by applying a set of marginal coefficients to the BI across three buckets (where i = 1, 2, 3 denotes the bucket). As the business indicator expands (in € billion), the marginal coefficient applied becomes larger. The buckets are summarized in the table below:
Example 1: Calculating the BIC of a Bank With a BI of €50bn.
BI Bucket: 1 | 2 | 3
BI Range: ≤ €1 billion | €1 billion < BI ≤ €30 billion | > €30 billion
Marginal BI Coefficient, ai: 0.12 | 0.15 | 0.18
Calculation, ai × BI: €1 bn × 12% = €0.12 bn | €(30 − 1) bn × 15% = €4.35 bn | €(50 − 30) bn × 18% = €3.6 bn
By summing the 3 buckets, we arrive at a BIC of €8.07 billion.
The Internal Loss Multiplier (ILM) is a function of the BIC and the Loss Component (LC), where the latter is equal to 15 times a bank's average annual operational risk losses over the preceding 10 years. It is noteworthy that firms with less than 10 years of data must use a minimum of 5 years of data.
The leverage ratio complements the risk-weighted capital requirement and prevents banks from taking on unsustainable levels of debt. To mitigate the externalities, or rather the ripple effects, associated with the failure of G-SIBs, a leverage ratio buffer is set at 50% of a G-SIB's risk-weighted higher-loss-absorbency (G-SIB buffer) requirement.
Capital distribution constraints are imposed on entities that do not meet the leverage ratio
requirement.
The Basel III reforms set a floor on capital requirements calculated under internal models at 72.5% of those required under the standardized approaches. Due to the potential impact of the floor, the 72.5% requirement is expected to be phased in over the period from 2022 to 2027. When calculating the floor, banks will be expected to take the following risks into account:
Credit risk: Banks must use the revised standardized approach for credit risk. Notably, the associated counterparty credit risk must also be calculated using the standardized approach, and the resulting amounts are included in the floor calculation.
Credit valuation adjustment risk: Banks must use the standardized approach for CVA, the basic approach, or 100% of the bank's counterparty credit risk capital requirement.

Securitization framework: For assets in the banking book, the external ratings-based approach, the standardized approach, or a 1,250% risk weight must be used.

Market risk: Banks will be required to use the standardized approach for market risk. A risk weight of 1,250% applies when computing the default risk charge component for securitization exposures held in the trading book.

Operational risk: Banks are required to use the standardized approach for operational risk.
Practice Question
Which of the following changes has Basel III set forth with reference to the changes in
credit risk?
II. Definitions within the internal ratings-based approach (IRB) have been aligned to those of the revised standardized approach.
III. Retail exposures have been aggregated to simplify the analytical process.
credit rating.
C. II and III.
III is incorrect. A more granular treatment applies to retail exposures under the revised framework, not an aggregated one.
Reading 122: Basel III: Finalising Post-Crisis Reforms
After completing this reading, the candidate should be able to:
Explain the elements of the new standardized approach to measuring operational risk
capital, including the business indicator, internal loss multiplier, and loss component, and
calculate the operational risk capital requirement for a bank using this approach.
Compare the SMA to earlier methods of calculating operational risk capital, including the advanced measurement approach (AMA).

Describe general and specific criteria recommended by the Basel Committee for the identification, collection, and treatment of operational loss data.
Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events. It includes events such as fraud, employee errors, criminal activity, and security breaches. However, this definition excludes reputational risk as well as strategic risk.

Following the announcement of the Basel III reforms, operational risk should be measured using the standardized approach by internationally active banks. It is, indeed, noteworthy that supervisors still have the discretion to apply the standardized approach to non-internationally active lenders.
ORC = BIC × ILM
The Business Indicator Component (BIC) is the product of the Business Indicator (BI) and a set of marginal coefficients (ai), applied bucket by bucket:

BIC = Σ (ai × BIi)
The BI consists of selected items from a bank's financial statements that are representative of a bank's scale of business and, hence, its operational risk exposure: the interest, leases, and dividend component (ILDC), the services component (SC), and the financial component (FC).

Important: All three components must be calculated as averages over 3 years.
T he BIs of large international banks are typically large figures running into billions of Euros – the
chosen denomination for operational risk capital calculations. A bank’s BI tells a lot about its
operational risk exposure. For this reason, the standardized approach divides banks into 3 buckets
according to the size of their BI. Each bucket is associated with a regulatory-determined coefficient
ai. As the BI increases, so do the coefficients. A summary is set out in the table below:
Example 1: Calculating the BIC of a Bank With a BI of €40bn.
BI Bucket: 1 | 2 | 3
BI Range: ≤ €1 billion | €1 billion < BI ≤ €30 billion | > €30 billion
Marginal BI Coefficient, ai: 0.12 | 0.15 | 0.18
Calculation, ai × BI: €1 bn × 12% = €0.12 bn | €(30 − 1) bn × 15% = €4.35 bn | €(40 − 30) bn × 18% = €1.8 bn
By summing the 3 buckets, we arrive at a BIC of €6.27 billion.
As a further example, for a bank with a BI of €25 billion, only the first two buckets apply:

BI Bucket: 1 | 2 | 3
BI Range: ≤ €1 billion | €1 billion < BI ≤ €30 billion | > €30 billion
Marginal BI Coefficient, ai: 0.12 | 0.15 | 0.18
Calculation, ai × BI: €1 bn × 12% = €0.12 bn | €(25 − 1) bn × 15% = €3.6 bn | –

Summing the buckets gives a BIC of €3.72 billion.
ILM = ln[exp(1) − 1 + (LC / BIC)^0.8]
As can be seen from the equation above, the ILM (Internal Loss Multiplier) is a function of the BIC and the Loss Component (LC), where the latter is equal to 15 times a bank's average annual historical operational risk losses over the preceding 10 years. Firms with less than 10 years of data must use a minimum of 5 years of data.
Important:

When LC = BIC, the ILM is equal to 1; in effect, this is the default ILM value.

When the LC is greater than the BIC, the ILM is greater than one.

When the LC is less than the BIC, the ILM is less than one.

For firms with BI levels of less than €1bn, the ILM is set to 1, and internal loss data therefore do not affect the capital calculation.

Under transitional arrangements, banks without 5 years of historical loss data must set the ILM to 1, although the supervisor in charge retains the discretion to require higher levels.
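Putting the pieces together, the hedged sketch below applies the bucket logic and the ILM formula to a hypothetical bank; the BI and loss figures are invented, and the marginal coefficients are those from the table above. The resulting BIC of €2.97 billion ties out with the practice question at the end of this reading.

    import math

    def bic(bi_bn):
        """Business Indicator Component using the 12% / 15% / 18% marginal coefficients."""
        bucket1 = min(bi_bn, 1.0) * 0.12
        bucket2 = min(max(bi_bn - 1.0, 0.0), 29.0) * 0.15
        bucket3 = max(bi_bn - 30.0, 0.0) * 0.18
        return bucket1 + bucket2 + bucket3

    def ilm(lc_bn, bic_bn):
        """Internal Loss Multiplier: ln(exp(1) - 1 + (LC / BIC)^0.8)."""
        return math.log(math.exp(1) - 1 + (lc_bn / bic_bn) ** 0.8)

    # Hypothetical bank: BI of EUR 20 bn, average annual losses of EUR 0.5 bn over 10 years.
    bi_value = 20.0
    bic_value = bic(bi_value)      # 0.12 + 19 * 0.15 = 2.97
    lc_value = 15 * 0.5            # loss component = 15 x average annual losses
    orc = bic_value * ilm(lc_value, bic_value)
    print(round(bic_value, 2), round(orc, 2))  # 2.97 3.98 (EUR bn)

Because the assumed LC exceeds the BIC, the ILM is above 1 and the capital requirement ends up higher than the BIC alone.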
How is the Standardized Measurement Approach (SMA)
Different from the Advanced Measurement Approach (AMA)?
T he AMA, introduced in Basel II, allows for the estimation of regulatory capital to be based on a
diverse range of internal modeling practices conditional on supervisory approval. T he method has had
a significant degree of flexibility, meaning that banks have been at liberty to use slightly different
models while calculating the capital required. T his flexibility has resulted in the AMA being replaced
since it has resulted in widely incomparable internal modeling practices. T his has exacerbated
variability in risk-weighted asset calculations and eroded confidence in risk-weighted capital ratios.
T he SMA has limited flexibility and requires banks to follow precise guidelines in the entire capital
calculation process.
To use the Loss Component (LC), banks have to observe certain guidelines:

A bank must have quality loss data spanning a 10-year period (under transitional arrangements, a minimum of 5 years of data may be used).

A bank must have documented procedures and processes for the identification, collection, and treatment of internal loss data.

A bank must be ready to map internal loss data onto the relevant Level 1 supervisory category as defined in Annex 9 of the Basel II Framework, on request (by the supervisor).

The data must capture all material events and exposures from all geographical locations and business activities.
A bank must keep records of specific dates when operational risk events occurred or
commenced.
Operational loss events related to credit risk and that are accounted for in credit risk
RWAs should not be included in the loss data set.
For the purpose of calculating minimum regulatory capital under the SMA framework,
operational risk losses related to market risk are treated as operational risks.
A bank must have an independent mechanism to validate and ensure the accuracy of the
data.
A bank must put policies that address various aspects of internal loss data in place. Among
others, such aspects include collection dates, gross loss definition, and the grouping of
losses.
Gross loss is a loss before recoveries of any type. Net loss is defined as the loss after taking recoveries into account.

Banks must be able to identify gross operational losses and recoveries for all operational risk events.

Items that must be included in the gross loss computation include:

Provisions or reserves accounted for in the P&L against the potential operational loss impact.

Material "timing losses" resulting from operational risk events impacting the financial statements of more than one accounting period.
Items that are excluded from the gross loss computation include:

Internal or external business enhancement costs incurred following a risk event.

Insurance premiums.
Practice Question

A bank has a Business Indicator (BI) of €20 billion. Using the marginal coefficients of the standardized approach, the bank's Business Indicator Component (BIC) is closest to:

A. €3.6 billion.

B. €2.4 billion.

C. €3 billion.

D. €2.97 billion.
Solution
The correct answer is D.
BI Bucket: 1 | 2 | 3
BI Range: ≤ €1 billion | €1 billion < BI ≤ €30 billion | > €30 billion
Marginal BI Coefficient, ai: 0.12 | 0.15 | 0.18
Calculation, ai × BI: €1 bn × 12% = €0.12 bn | €(20 − 1) bn × 15% = €2.85 bn | –

Summing the buckets gives a BIC of €0.12 bn + €2.85 bn = €2.97 billion.