
Margin of Conservatism Framework for IRB PD, LGD and CCF

Yang Liu¹

1st Ver.: September 25, 2018
This Ver.: July 12, 2019

Abstract: The EBA Guidelines on PD and LGD estimation are due to apply from 1 January
2021, by which time banks are expected to have a framework in place, as part of the risk
rating and reporting process, to adjust and correct for the uncertainties arising from
deficiencies in data, systems and methodology. The ECB Guide on the TRIM meanwhile states
that the requirement for a Margin of Conservatism (MoC) also applies to CCF estimation. In
this paper, we develop and present a consistent framework to quantify the identified
uncertainties for the purpose of IRB risk parameter estimation.
Keywords: Advanced IRB, Long-run Default Rate, Long-run LGD, Central Default Tendency,
Risk Weighted Assets (RWA), Margin of Conservatism (MoC), Probability of Default (PD), Loss
Given Default (LGD), Credit Conversion Factor (CCF), Exposure at Default (EAD)

Contents

1 Category and Triggers of Identified Deficiencies

2 Design Concepts: a post-publication update

Part I. PD Estimation

3 Math Expression of PD Related Parameters

4 Appropriate Adjustment and Best Estimate for Default Rates

4.1 Single Category
4.2 Multiple Categories
4.3 Generalized Form at Trigger Level Estimation

5 Category C Deficiencies: the General Estimation Error

5.1 Calibration Target and Output
5.2 Error in Rank Ordering Estimation
5.3 Error in Calibration

Part II. LGD Estimation

¹ Yang Liu is a quantitative specialist at an international bank. Yang holds a doctorate in quantitative finance
from Cass Business School, City University of London. He has published a number of papers on quantitative
methods in risk and finance and served as a reviewer for journals in the field.
The opinions expressed in this paper are those of the author only.
E-mail: [email protected]

Electronic copy available at: https://ssrn.com/abstract=3258825


6 Mathematical Definition of LGD Related Parameters

7 Notation Update and Application for LGD Estimation

Part III. CCF Estimation

8 Math Expression of CCF

9 Notation Update and Application for CCF Estimation

Part IV. Margin of Conservatism Framework

10 General Form Results for PD, LGD and CCF Estimation

11 Final Margin of Conservatism

12 Conclusions

1 Category and Triggers of Identified Deficiencies

The EBA Guidelines [1] state the deficiency categories in Paragraph 36, quoted below:

——————————————— quotation(start)

36. Institutions should identify all deficiencies related to the estimation of risk
parameters that lead to a bias in the quantification of those parameters or to an increased
uncertainty that is not fully captured by the general estimation error, and classify each
deficiency into one of the following categories:

(a) Category A: Identified data and methodological deficiencies;

(b) Category B: Relevant changes to underwriting standards, risk appetite, collection
and recovery policies and any other source of additional uncertainty.

———————————————– quotation(end)

To identify the deficiencies as mentioned in Paragraph 36, banks are required to review a list
of potential sources of additional uncertainty as a minimum requirement, as stated in
Paragraph 37:

——————————————— quotation(start)

37. For the purposes of identifying and classifying all deficiencies referred to in
paragraph 36 institutions should take into account all relevant deficiencies in methods,
processes, controls, data or IT systems that have been identified by the credit risk control
unit, validation function, internal audit function or any other internal or external review
and should analyse at least all of the following potential sources of additional uncertainty
in risk quantification:

1) under category A:

a) missing or materially changed default triggers in historical observations,
including changed criteria for recognition of materially past due credit obligations;

b) missing or inaccurate date of default;

c) missing, inaccurate or outdated rating assignment used for assessing historical
grades or pools for the purpose of calculation of default rates or average realised
LGDs per grade or pool;

d) missing or inaccurate information on the source of cash flows;

e) missing, inaccurate or outdated data on risk drivers and rating criteria;

f) missing or inaccurate information used for the estimation of future recoveries
as referred to in paragraph 159;

g) missing or inaccurate data for the calculation of economic loss;

h) limited representativeness of the historical observations due to the use of
external data;

i) potential bias stemming from the choice of the approach to calculating the
average of observed one-year default rates in accordance with paragraph 80;

j) necessity of adjusting the average of observed one-year default rates in
accordance with paragraph 86;

k) missing information for the purpose of estimating loss rates or for the purpose
of reflecting economic downturn in LGD estimates;

2) under category B:

a) changes to underwriting standards, collection or recovery policies, risk appetite
or other relevant internal processes;

b) unjustified deviations in the ranges of values of the key risk characteristics of
the application portfolio compared with those of the dataset used for risk
quantification;

c) changes to market or legal environment;

d) forward-looking expectations regarding potential changes in the structure of the
portfolio or the level of risk, especially based on actions or decisions that have
already been taken but which are not reflected in the observed data.

———————————————– quotation(end)



Categorization of uncertainties is specified in Paragraph 42:

——————————————— quotation(start)

42. The final MoC on a risk parameter estimate should reflect the uncertainty of the
estimation in all of the following categories:

Category A: MoC related to data and methodological deficiencies identified under
category A as referred to in paragraph 36(a);

Category B: MoC related to relevant changes to underwriting standards, risk
appetite, collection and recovery policies and any other source of additional
uncertainty identified under category B as referred to in paragraph 36(b);

Category C: the general estimation error.

———————————————– quotation(end)

2 Design Concepts: a post-publication update

July 12, 2019: In this chapter, we briefly outline some design concepts and key properties of
the presented framework. The purpose of this chapter, as a post-publication update, is to aid
understanding of the framework and address some commonly misunderstood concepts. There
is no change in the quantification methodology between this update and the previously
published version.

Simple Counting and Observed Ratio

The risk parameters in scope are defined as ratios of counting results: for example, the
number of defaults versus the total number of observations, or currency units of value lost
versus units of value at risk.

Often, such data are stored in different sources, or even in groups of different sources, each
with a different level of aggregation. Counting the units that make up the numerator and
the denominator, respectively, indicates the data deficiencies in a transparent manner, and
so does the resulting quantification of their joint impact on the risk parameter.

In general, impact analysis of future market, economic or strategic changes on the risk
parameter is challenging because an institution has different levels of influence over the
numerator and the denominator. For example, in the case of a loss rate, while the lending
institution has considerable control over the level of total exposure, it is less likely to have
such influence over the amount of loss. Therefore, expert judgmental opinion on the
individual elements of the fraction should be considered alongside numerical evidence when
quantifying uncertainties.

The presented framework requires identifying and counting deficiencies by combing through
individual data records. The follow-up quantification process starts with each individual
element in the methodology of counting, aggregation and calculation of the risk parameter.

Appropriate Adjustment

The Appropriate Adjustment is an important term in the quantification of the MoC.
Paragraph 38 of the Guidelines states that:

——————————————— quotation(start)

38. In order to overcome biases in risk parameter estimates stemming from the
identified deficiencies referred to in paragraphs 36 and 37, institutions should apply
adequate methodologies to correct the identified deficiencies to the extent possible. The
impact of these methodologies on the risk parameter (‘appropriate adjustment’), which
should result in a more accurate estimate of the risk parameter (‘best estimate’),
represents either an increase or a decrease in the value of the risk parameter. Institutions
should ensure and provide evidence that the application of an appropriate adjustment
results in a best estimate.

———————————————– quotation(end)

We highlight that an adequate methodology is required to correct the identified deficiency;
post-correction, the impact of this methodology on the risk parameter is defined as the
'appropriate adjustment'. This is an important concept, because it is observed that even
professionals in the industry can mistake the methodology of correction for the appropriate
adjustment itself.

For example, in the case of missing values in the data records, practitioners may choose to
replace each missing value with the mean or median of the actually observed data. This is a
methodology to correct the deficiency; the appropriate adjustment in this case, however, is
the impact of this methodology on the risk parameter. The post-adjustment expectation is
that, by correcting the missing-data deficiency, the practitioner obtains a more accurate
estimate of the risk parameter, the 'best estimate'.

While the mean/median replacement methodology is vital for the deficiency correction, and
should be documented precisely, it is a methodology of correction rather than the
'appropriate adjustment'.

To reinforce the understanding described above, we refer to Paragraph 76 in the PD
estimation section of the Guidelines:

——————————————— quotation(start)

76. For the purposes of paragraphs 73 to 75 an obligor has to be included in the
denominator and, where relevant, numerator, ... ... Institutions should analyse whether
such migrations or sales of credit obligations bias the default rate and, if so, they should
reflect this in an appropriate adjustment and consider an adequate MoC.

———————————————– quotation(end)

We see that an appropriate adjustment to the default rate is expected in this case, to reflect
the bias, with a MoC then considered accordingly. Evidently, the methodology of correction
is unlikely to be able to reflect its own post-implementation impact on the default rate.

The presented quantification process focuses on the risk parameter, the appropriate
adjustment and the best estimate before arriving at the MoC. We do not discuss which
deficiency is best corrected by which methodology, nor measure the effectiveness of these
methodologies, in this paper.

By applying the methodology of correction, the institution corrects the identified deficiencies
to the extent possible. The impact on the risk parameter is then calculated as discussed in
this paper, and the presented analysis and quantification framework is adopted accordingly.

Conditional and Unconditional Observed Ratios

The observed risk parameters are calculated directly from the available dataset. For
example, the default rate DR², is the number of defaults D, divided by the total number of
observations N. This is defined as the unconditional observed default rate.

The conditional observed default rate is the same fraction calculated under a pre-defined
condition. Imagine a dataset of global customers, with default and non-default records, that
has a 'region' label flagging EU and non-EU. The unconditional observed default rate is
calculated as D/N, using all records in the dataset. To isolate the observed impact of EU
data from non-EU data and report the default rate for regions outside the EU, one first
excludes all EU records from the dataset, then calculates D_non-EU / N_non-EU. Defined on
dataset D as DR(D|non-EU), this is the conditional observed default rate where the
condition is: non-EU.

We are specifically interested in the difference between DR(D) and DR(D|non-EU); this
difference is caused by, and only by, the condition applied. Note that this is NOT to say
that the numerical difference is the impact of the condition; rather, one may wish to
consider that the impact of the condition, from a default rate perspective, is completely
embedded in the comparison of the two ratios. Hence, loosely speaking, one may conclude
that the impact of the "EU" region on the default rate, subject to dataset D, is captured
between DR(D) and DR(D|non-EU).
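As a toy illustration of the two ratios (the five records below are invented for this sketch, not drawn from any real portfolio):

```python
# Hypothetical dataset with a 'region' label and a 1/0 default flag.
records = [
    {"region": "EU", "default": 1},
    {"region": "EU", "default": 0},
    {"region": "non-EU", "default": 0},
    {"region": "non-EU", "default": 1},
    {"region": "non-EU", "default": 0},
]

def default_rate(rows):
    """Observed default rate: number of defaults over number of records."""
    return sum(r["default"] for r in rows) / len(rows)

dr_unconditional = default_rate(records)                  # DR(D): 2 defaults / 5 records
dr_conditional = default_rate(                            # DR(D | non-EU): 1 / 3
    [r for r in records if r["region"] == "non-EU"])
```

The impact of the "EU" condition is then reflected in the comparison of `dr_unconditional` and `dr_conditional`, not in either number alone.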

Similarly, with a 'deficiency trigger' label instead of the 'region' label in the simple example
above, one is able to calculate default rates conditional on each and all of the deficiency
triggers, and thus capture the impact of deficiencies using DR(D) and DR(D|non-deficient),
conditioning on each of the identified deficiencies.
² Note that any mathematical notation used in this chapter is only for illustration of the rationale behind
the proposed framework; detailed definitions of the notation employed start from the next chapter onwards.



It is important to highlight the independence of both the methodology of correction and the
choice of estimation model from the cause of the deficiency. Take a missing default date, for
example: the cause of this deficiency is probably human or system error, while a potential
methodology of correction is to use the last observation date as a proxy. Meanwhile, the
choice of statistical estimation model is related to neither the cause nor the correction of
this deficiency.

Additional details to elaborate the focus on the conditional default rate follow. Take a
simple 5-record dataset as an example; the records are numbered only for discussion
purposes.

Let us attempt a logistic regression model with this dataset. The dependent variable is
"Default", indicated by 0 and 1. The independent variables are labeled "Var 1", "Var 2"
and "Var 3"; "Var 1" has a missing value for record no. 3 and "Var 2" has a missing value
for record no. 4, as shown in Table 1:

Table 1: Raw 5-record dataset with missing values

Rec No.  Default  Var 1  Var 2  Var 3
   1        0       a     12     0.2
   2        1       b     34     0.4
   3        1      ...    56     0.6
   4        0       d    ...     0.8
   5        0       e     90     1.0

With no methodology of correction for the identified missing-value deficiency, passing the
dataset directly to any statistical software either raises an error for the included missing
values, or produces a model in which the software, by default, implicitly removes the records
affected by the missing-value deficiency. The effective records are then only records 1, 2 and
5, rather than the whole set of 5 records. As a result, one can conclude that while the
observed default rate with the deficiency in place is 2 out of 5, the implicit default rate
modelled in the logistic model is only 1 out of 3.
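The listwise-deletion effect described above can be reproduced with a short sketch (the rows mirror Table 1, with `None` standing in for the missing values):

```python
# (rec_no, default, var1, var2, var3); None marks a missing value, as in Table 1.
rows = [
    (1, 0, "a", 12, 0.2),
    (2, 1, "b", 34, 0.4),
    (3, 1, None, 56, 0.6),
    (4, 0, "d", None, 0.8),
    (5, 0, "e", 90, 1.0),
]

observed_dr = sum(r[1] for r in rows) / len(rows)          # 2 out of 5, deficiency in place
complete = [r for r in rows if None not in r]              # records 1, 2 and 5 survive
implicit_dr = sum(r[1] for r in complete) / len(complete)  # 1 out of 3, after listwise deletion
```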

We consider this the raw impact of the missing-data deficiency, because once a methodology
of correction is introduced, the uncertainty of the missing values is shadowed by the
uncertainty of the methodology of correction. As seen in the example above, the difference
between 2/5 and 1/3 changes only with the observed missing values; the impact assessment
after introducing a methodology of correction, however, is likely to change with the choice
of methodology.

The presented framework assumes that the institution is able to clearly identify deficiencies
in its own data and business environment. Methodologies for further aggregation and
calculation of the risk parameters are subject to the relevant sections of the guidelines.

Explicit and Modular Quantification Process

As discussed above, it is counterintuitive to assume that Category A and B deficiencies are
the results of a statistical model, e.g. to assume that a missing default date or a market
condition change is caused by the use of a particular statistical model. Meanwhile, it is
obvious that the general estimation error is model dependent, as one needs a model to
obtain an estimate, but not necessarily model specific, e.g. two different models could both
overestimate on the same portfolio.

The figure below illustrates how the presented MoC quantification process integrates with
the model development process.

The modular design of the quantification process echoes the independence of trigger
assessment and helps to avoid double counting. An independently defined MoC process flow
is efficient in terms of framework implementation; it also allows straightforward integration
into the institution's existing monitoring and review processes.

The proposed framework works well with, but does not rely on, specific model or
distributional assumptions, nor on sampling and re-sampling methods. As the institution
acts according to the remediation plan to fix the identified deficiencies, the impact is
directly reflected in the updated MoC results, and thus the direction of change in the MoC
is in line with the mitigation progress.

The presented framework performs explicit MoC quantification on top of the risk parameter
estimation. The calculation is easily reversible from the MoC, through the pure statistical
model estimate, all the way back to each individual deficiency and every correction and
adjustment involved per deficiency.



Level of Calibration Dataset

Paragraph 43 of the Guidelines states that the MoC quantification of the general estimation
error is expected at the level of every calibration segment, and that this MoC is expected to
reflect the dispersion of the distribution of the statistical estimator.

Further, Paragraphs 92 and 161, on the calibration of PD and LGD respectively, state that
the calibration is expected to meet the long-run average of the risk parameter at the level of
the grade or pool, or at the level of the calibration segment.

In practice, it is possible that the segments used to calculate the observed risk parameter do
not match the calibration segments, due to strategic or criteria changes in the data history.
The proposed counting and aggregation assessment supports quantification according to
such segment changes.

While the presented framework is capable of working with simulation- or distribution-based
approaches, it does not depend on their specific assumptions, and hence avoids introducing
additional bias and uncertainty into the calibration segments. The feasibility of employing
such approaches should be tested separately on a per-portfolio basis.

The presented framework supports appropriate adjustment and MoC quantification at both
the overall and the segment level.

Purpose of the MoC

So far, we have clarified the differences between the 'methodology of correction', the
'appropriate adjustment' and the MoC.

As stated in Paragraph 50 of the EBA Guidelines [1]:

——————————————— quotation(start)

50. Institutions should regularly monitor the levels of the MoC. The adoption of a MoC
by institutions should not replace the need to address the causes of errors or
uncertainties, or to correct the models to ensure their full compliance with the
requirements of Regulation (EU) No 575/2013. Following an assessment of the
deficiencies or the sources of uncertainty, institutions should develop a plan to rectify the
data and methodological deficiencies as well as any other potential source of additional
uncertainty and reduce the estimation errors within a reasonable timeframe, taking into
consideration the materiality of the estimation error and the materiality of the rating
system.

———————————————– quotation(end)

Our understanding of Paragraph 50 is that the plan to rectify the data and methodological
deficiencies refers to a plan aiming to address the cause of each deficiency, rather than
simply adding a margin.



While the appropriate adjustment and the MoC quantify, and to some extent compensate
for, the impact on the risk parameter, the MoC should not replace the need for a
remediation plan for the underlying deficiency.

It is clear that individual deficiencies should be fixed by remediating their causes. The same
applies to the statistical models; in the meantime, we acknowledge that models will
eventually fail at some stage and a rebuild is inevitable. Modelling issues can only be
addressed by a modelling approach, rather than by using the MoC as a patch.

The presented framework does not serve as an approach to address the causes of deficiencies
and uncertainties, nor does it serve to correct model failures; a remediation plan as
described in the regulatory text should be sought to address the deficiency.

General Estimation Error

We assume that the statistical model development, together with the model calibration, is
compliant and meets the technical requirements before we consider the Category C
uncertainty. In particular, we highlight that the estimation of the risk parameter is expected
to be compliant with the regulatory text as set out in Chapters 5 and 6 of the EBA
Guidelines [1].

Aside from the self-identified deficiencies in this category, the proposed forms of the general
estimation error start with the following:

Rank Ordering Error:

We consider the following calibration dataset to illustrate the concept.

Sample ID   Target Ranking   Calibrated Ranking
sample1           1                  1
sample2           2                  3
sample3           3                  2
sample4           3                  3

Table 2: Example of Rank Ordering Error (data ordered by Target Ranking)

Denote sample1 by s1 and present a ranking pair in parentheses, e.g. (s1, s2). Under the
Target Ranking order, the list of unique ranking pairs considered is: (s1, s2), (s1, s3),
(s1, s4), (s2, s3), (s2, s4), (s3, s4).

The Calibrated Ranking is concordant for pairs (s1, s2), (s1, s3) and (s1, s4), while
discordant for pair (s2, s3). Pair (s2, s4) is a tie in the Calibrated Ranking and pair
(s3, s4) is a tie in the Target Ranking.

Note that the term Calibrated Ranking is used in the example only to indicate that the
ranking is post-calibration; the rank ordering error can occur in both the model estimation
and the calibration stages. Here we focus only on the rank of the final model output to
measure the uncertainty in rank ordering.

The error is quantified using the concordant/discordant pairs and the difference in the
post-calibration risk parameter of the discordant pairs. Detailed definitions can be found
in Section 5.2, Section 7 and Section 9.
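The pair classification from Table 2 can be sketched as follows (a minimal counting routine for illustration, not the paper's formal definition from Section 5.2):

```python
from itertools import combinations

# Rankings from Table 2: lower number = better rank.
target = {"s1": 1, "s2": 2, "s3": 3, "s4": 3}
calibrated = {"s1": 1, "s2": 3, "s3": 2, "s4": 3}

concordant, discordant, ties = [], [], []
for a, b in combinations(target, 2):       # the six unique sample pairs
    dt = target[a] - target[b]
    dc = calibrated[a] - calibrated[b]
    if dt == 0 or dc == 0:
        ties.append((a, b))                # tie in at least one ranking
    elif dt * dc > 0:
        concordant.append((a, b))          # same ordering in both rankings
    else:
        discordant.append((a, b))          # reversed ordering
```

This reproduces the counts in the text: three concordant pairs, one discordant pair (s2, s3), and two tied pairs.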



Calibration Error:
Here we start by using the arithmetic mean as the statistical estimator of central
tendency, and then calibrate the central tendency to meet the observed long-run average.
We consider the following questions regarding the calibration.

1) How was the target of the calibration achieved?

All of the calibrations in the figure below achieve the goal of meeting the target value
of 3.

However, in Scenario 1 the calibration simply assigns all calibration samples the
target value. In Scenario 2 a specific symmetrical distribution is chosen, and in
Scenario 3 a general-form linear transform function is applied to each sample in the
calibration dataset. Focusing on Scenario 3 in the figure, we see that the distribution
is skewed with a long right tail, which leads to the next question.

2) Is the arithmetic mean a good statistical estimator of central tendency for the
calibration sample?

The answer is negative for a significantly skewed distribution; the median is often
preferred in this case. However, in this example the calibration has already been
performed using the mean as the estimator of central tendency. One potential
approach to address this uncertainty derives from the known property that, for a
distribution with finite variance, the absolute difference between the mean and the
median is bounded above by the standard deviation of the distribution. Denoting the
mean by µ and the median by Med, Colin [6] has shown that:

|µ − Med| = |E(X − Med)| ≤ E(|X − Med|) ≤ E(|X − µ|) ≤ √(E[(X − µ)²]) = σ

More details on the proposed assessment of the calibration error can be found in
Section 5.3, Section 7 and Section 9.
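A quick numerical check of the quoted bound, on a made-up right-skewed sample (the numbers are purely illustrative):

```python
import statistics

# Hypothetical right-skewed sample with a long right tail.
sample = [1, 1, 2, 2, 3, 3, 4, 5, 9, 20]

mu = statistics.fmean(sample)        # arithmetic mean
med = statistics.median(sample)
sigma = statistics.pstdev(sample)    # population standard deviation

# The gap between mean and median never exceeds one standard deviation.
assert abs(mu - med) <= sigma
```

Here the mean (5.0) sits well above the median (3.0) because of the skew, yet the gap of 2.0 stays within the standard deviation of about 5.48, consistent with the inequality.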



Property of The Presented Framework

In summary of the basic concepts and building blocks above, one is able to achieve the
following with the presented framework:

1) Deficiency and impact analysis at the single-record level.

2) An explicit MoC quantified for each individual model estimate.

3) Non-modelling updates of the MoC that do not require re-estimating the model.

4) Flexibility in incorporating expert opinion, without depending on assumptions such as a
statistical distribution or a choice of confidence interval.

5) Straightforward removal of the MoC from the final risk parameter when the pure model
estimate is required for purposes such as model review or stress testing.

6) Consistency of the MoC quantification methodology across different model build,
monitoring and review processes.

Part I. PD Estimation

3 Math Expression of PD Related Parameters

With consideration of Paragraphs 73 to 78, as stated in Section 5.3.2 of the EBA Guidelines
[1], we quote Paragraph 73, which outlines the calculation of the one-year default rate:

——————————————— quotation(start)

73. For the purpose of calculating the one-year default rate referred to in point (78) of
Article 4(1) of Regulation (EU) No 575/2013, institutions should ensure both of the
following:

1) that the denominator consists of the number of non-defaulted obligors with any
credit obligation observed at the beginning of the one-year observation period; in this
context a credit obligation refers to both of the following:

a) any on balance sheet item, including any amount of principal, interest and fees;

b) any off-balance sheet items, including guarantees issued by the institution as a
guarantor.

2) that the numerator includes all those obligors considered in the denominator that
had at least one default event during the one-year observation period.

———————————————– quotation(end)



Mathematically, the definition of the one-year Default Rate (DR) can be expressed as
follows:

DR_y^{1-year} = D_y / N_y    (1)

where:

y: the yth observation year

DR_y^{1-year}: the one-year DR for year y

D_y: defaults occurred in year y, as defined in 73.2)

N_y: non-defaulted obligors at the beginning of year y, as defined in 73.1)

EBA Guidelines [1] Sections 5.3.3 and 5.3.4 cover the calculation of observed average default
rates and the long-run average default rate, where the long-run average default rate is the
observed average default rate spanning the historical observation period. Hence, the
calculation of the long-run average DR over Y years is expressed as:

DR^{long-run} = (DR_1^{1-year} + DR_2^{1-year} + . . . + DR_Y^{1-year}) / Y
             = (D_1/N_1 + D_2/N_2 + . . . + D_Y/N_Y) / Y
             = (1/Y) · Σ_{y=1}^{Y} D_y/N_y,   for y ∈ [1, Y]    (2)

where:

Y: total number of observation years considered

DR^{long-run}: long-run average DR

y, DR_y^{1-year}, D_y and N_y: as defined above
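Equations (1) and (2) can be sketched directly in code (the yearly counts below are invented for illustration only):

```python
# Hypothetical yearly counts: D_y defaults, N_y non-defaulted obligors at year start.
defaults = [12, 8, 20, 15]           # D_y for years y = 1..4
obligors = [1000, 950, 1100, 1050]   # N_y for years y = 1..4

# Equation (1): one-year DR per observation year, DR_y = D_y / N_y
one_year_dr = [d / n for d, n in zip(defaults, obligors)]

# Equation (2): long-run average DR is the simple average of the one-year rates
long_run_dr = sum(one_year_dr) / len(one_year_dr)
```

Note that Equation (2) averages the yearly ratios rather than pooling all defaults over all obligors, so years with few obligors carry the same weight as large years.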

4 Appropriate Adjustment and Best Estimate for Default Rates

Let t_A ∈ (1, . . . , k) be the triggers in Category A as defined in Paragraph 42, covering all
(but not limited to) Category A triggers listed in Paragraph 37, where k is the total number
of triggers. Denote the number of defaulted records affected by each of the triggers by
d^{t_A}, and the number of non-defaulted records affected by each trigger by m^{t_A}.

Omitting the year indicator y in this general form, we adjust Equation (1) to obtain the
best estimate of the 1-year default rate as follows:

DR̃^{1-year} = (D + Σ_{t_A=1}^{k} d^{t_A}) / (N + Σ_{t_A=1}^{k} m^{t_A}),   for d^{t_A} ≥ 0, and m^{t_A} ≥ 0    (3)

Updating Equation (2), we obtain the best estimate of the long-run default rate as:

DR̃^{long-run} = (1/Y) · Σ_{y=1}^{Y} DR̃_y^{1-year},   for y ∈ [1, Y]    (4)
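A minimal sketch of Equation (3) with hypothetical counts (here, two Category A triggers):

```python
D, N = 20, 1000      # observed defaults and non-defaulted obligors (hypothetical)
d_t = [2, 1]         # defaulted records affected by each trigger, d^tA
m_t = [30, 10]       # non-defaulted records affected by each trigger, m^tA

# Equation (3): best-estimate one-year DR, adding the trigger-affected counts
# to the numerator and denominator respectively.
best_estimate_dr = (D + sum(d_t)) / (N + sum(m_t))
```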



Aiming to report the deficiencies at the level at which they are observed, we focus on the
1-year default rate for the rest of this section and omit the year indicator y.

4.1 Single Category

Proposition 1 Define the Category A Relative Uncertainty in Defaults as:

β^{t_A} = (Σ_{t_A=1}^{k} d^{t_A}) / D,   for t_A ∈ (1, . . . , k), and d^{t_A} ≥ 0

and the Category A Relative Uncertainty in Non-default Observations as:

α^{t_A} = (Σ_{t_A=1}^{k} m^{t_A}) / N,   for t_A ∈ (1, . . . , k), and m^{t_A} ≥ 0

The conservative add-on for identified deficiencies is:

DR̃_{t_A}^{1-year} − DR^{1-year} = DR^{1-year} · (β^{t_A} − α^{t_A}) / (1 + α^{t_A})    (5)

The appropriate adjustment for the 1-year average default rate specified by the relative
uncertainties α^{t_A} and β^{t_A} is:

AA^{t_A} = (1 + β^{t_A}) / (1 + α^{t_A})    (6)

while the best estimate 1-year default rate is:

DR̃_{t_A}^{1-year} = DR^{1-year} · AA^{t_A} = DR^{1-year} · (1 + β^{t_A}) / (1 + α^{t_A})    (7)

Proof. Denote Σ_{t_A=1}^{k} d^{t_A} = d and Σ_{t_A=1}^{k} m^{t_A} = m in this proof for simplicity. We
calculate the difference between the raw estimate and the best estimate of the 1-year
default rate DR^{1-year} as follows:

DR̃_{t_A}^{1-year} − DR^{1-year} = (D + d) / (N + m) − D / N
  = [(D + d) · N − D · (N + m)] / [N · (N + m)]
  = (d · N − D · m) / [N · (N + m)]
  = [(d · N − D · m) / (d · N)] · [(d · N) / (N · (N + m))]
  = [1 − (D · m) / (d · N)] · d / (N + m)
  = [1 − (m/N) / (d/D)] · d / (N + m)    (8)

Substituting the quantities β^{t_A} and α^{t_A}, Equation (8) can be further extended as:

DR̃_{t_A}^{1-year} − DR^{1-year} = [1 − α^{t_A} / β^{t_A}] · (β^{t_A} · D) / (N + α^{t_A} · N)
  = (D/N) · (β^{t_A} − α^{t_A}) / (1 + α^{t_A})
  = DR^{1-year} · (β^{t_A} − α^{t_A}) / (1 + α^{t_A})    (9)

Eq (5) q.e.d.

As a result, the uncertainty-adjusted 1-year default rate is obtained:

DR̃_{t_A}^{1-year} = DR^{1-year} · (β^{t_A} − α^{t_A}) / (1 + α^{t_A}) + DR^{1-year}
  = DR^{1-year} · [1 + (β^{t_A} − α^{t_A}) / (1 + α^{t_A})]
  = DR^{1-year} · (1 + β^{t_A}) / (1 + α^{t_A})
  = DR^{1-year} · AA^{t_A}    (10)

Eq (6) and (7) q.e.d.
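Proposition 1 can be verified numerically (all counts below are hypothetical): applying the appropriate adjustment to the raw default rate reproduces the directly computed best estimate, and the add-on matches Equation (5).

```python
D, N = 20, 1000      # observed defaults and non-defaulted obligors
d, m = 3, 40         # total trigger-affected defaulted / non-defaulted records

beta = d / D         # Category A relative uncertainty in defaults
alpha = m / N        # Category A relative uncertainty in non-default observations

dr_raw = D / N
aa = (1 + beta) / (1 + alpha)                     # Equation (6)
dr_best = dr_raw * aa                             # Equation (7)
dr_direct = (D + d) / (N + m)                     # Equation (3), computed directly
add_on = dr_raw * (beta - alpha) / (1 + alpha)    # Equation (5)

assert abs(dr_best - dr_direct) < 1e-12
assert abs(add_on - (dr_direct - dr_raw)) < 1e-12
```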

With interchangeable notation, it is obvious that Proposition (1) holds for Category B, with
triggers denoted t_B, where t_B ∈ (k + 1, . . . , s) are the triggers in Category B as defined in
Paragraph 42, covering all (but not limited to) Category B triggers listed in Paragraph 37,
where k is the total number of triggers in Category A and s − k is the total number of
triggers in Category B.

4.2 Multiple Categories

The methodology for the appropriate adjustment and the best estimate for any additional
category must consider the impact of the deficiencies already identified in the estimation of
the risk parameter; in this case, the best estimate of the 1-year default rate with the
single-Category adjustment.

Proposition 2 In extension to the definitions in Proposition (1), define the Category B Relative
Uncertainty in Defaults as:

$$\beta^{t_B} = \frac{\sum_{k+1}^{s} d^{t_B}}{D}, \quad \text{for } t_B \in (k+1, \ldots, s), \text{ and } d^{t_B} \geq 0$$

and the Category B Relative Uncertainty in Observations as:

$$\alpha^{t_B} = \frac{\sum_{k+1}^{s} m^{t_B}}{N}, \quad \text{for } t_B \in (k+1, \ldots, s), \text{ and } m^{t_B} \geq 0$$

The conservative add-on for identified deficiencies is:

$$\widetilde{DR}_{t_{A,B}}^{1\text{-}year} - \widetilde{DR}_{t_A}^{1\text{-}year} = DR^{1\text{-}year} \cdot \left(\frac{1 + \beta^{t_A} + \beta^{t_B}}{1 + \alpha^{t_A} + \alpha^{t_B}} - \frac{1 + \beta^{t_A}}{1 + \alpha^{t_A}}\right) \qquad (11)$$

The appropriate adjustment for the 1-year average default rate specified by the relative
uncertainties $\alpha^{t_A}$, $\beta^{t_A}$, $\alpha^{t_B}$ and $\beta^{t_B}$ is:

$$AA^{t_{A,B}} = \frac{1 + \beta^{t_A} + \beta^{t_B}}{1 + \alpha^{t_A} + \alpha^{t_B}} \qquad (12)$$

while the best estimate 1-year default rate is:

$$\widetilde{DR}_{t_{A,B}}^{1\text{-}year} = DR^{1\text{-}year} \cdot AA^{t_{A,B}} = DR^{1\text{-}year} \cdot \frac{1 + \beta^{t_A} + \beta^{t_B}}{1 + \alpha^{t_A} + \alpha^{t_B}} \qquad (13)$$

Here the subscript $t_A$ indicates the presence of Category A triggers and $t_B$ indicates the presence of
Category B triggers; similarly, the subscript $t_{A,B}$ indicates the presence of both Category A and
Category B triggers.



Proof. Denote $\beta^* = \dfrac{\sum_{k+1}^{s} d^{t_B}}{D + \sum_1^k d^{t_A}}$ and $\alpha^* = \dfrac{\sum_{k+1}^{s} m^{t_B}}{N + \sum_1^k m^{t_A}}$. Considering the uncertainty adjustment for
the best estimate with the Category A uncertainty already in place, $\widetilde{DR}_{t_A}^{1\text{-}year}$, the left-hand side of
Equation (13) can be written as follows:

$$\widetilde{DR}_{t_{A,B}}^{1\text{-}year} = \widetilde{DR}_{t_A}^{1\text{-}year} \cdot \frac{1 + \beta^*}{1 + \alpha^*} = DR^{1\text{-}year} \cdot \frac{1 + \beta^{t_A}}{1 + \alpha^{t_A}} \cdot \frac{1 + \frac{\sum_{k+1}^{s} d^{t_B}}{D + \sum_1^k d^{t_A}}}{1 + \frac{\sum_{k+1}^{s} m^{t_B}}{N + \sum_1^k m^{t_A}}}$$
$$= DR^{1\text{-}year} \cdot \frac{1 + \beta^{t_A}}{1 + \alpha^{t_A}} \cdot \frac{1 + \frac{\sum_{k+1}^{s} d^{t_B}}{D \cdot (1 + \beta^{t_A})}}{1 + \frac{\sum_{k+1}^{s} m^{t_B}}{N \cdot (1 + \alpha^{t_A})}} = DR^{1\text{-}year} \cdot \frac{1 + \beta^{t_A} + \beta^{t_B}}{1 + \alpha^{t_A} + \alpha^{t_B}} \qquad (14)$$

Eq (12) and (13) q.e.d.

Equation (11) is easily obtained by substituting Equations (13) and (7).
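The sequential property used in the proof can also be checked numerically. The sketch below, on hypothetical counts, applies the Category B adjustment on top of the Category A best estimate via $\beta^*$ and $\alpha^*$ and recovers the combined result of Equation (13):

```python
# Hypothetical counts: applying Category B on top of the Category A best
# estimate reproduces the combined adjustment of Eqs (12)-(13).
D, N = 40, 1_000
d_A, m_A = 6, 50      # Category A uncertain defaults / observations
d_B, m_B = 2, 30      # Category B uncertain defaults / observations

beta_A, alpha_A = d_A / D, m_A / N
beta_B, alpha_B = d_B / D, m_B / N

# sequential: adjust for A first, then B via beta*, alpha* (proof of Prop. 2)
beta_star = d_B / (D + d_A)
alpha_star = m_B / (N + m_A)
dr_seq = (D / N) * (1 + beta_A) / (1 + alpha_A) * (1 + beta_star) / (1 + alpha_star)

# combined, Eq (13)
dr_comb = (D / N) * (1 + beta_A + beta_B) / (1 + alpha_A + alpha_B)

assert abs(dr_seq - dr_comb) < 1e-12
assert abs(dr_comb - (D + d_A + d_B) / (N + m_A + m_B)) < 1e-12
```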

4.3 Generalized Form at Trigger Level Estimation

Proposition 3 Rearrange the notation for Category triggers: $t_A \in (1, \ldots, k)$ and
$t_B \in (k+1, \ldots, s)$, with $t \in [1, s]$ where $s$ is the total number of triggers.

With the updated notation, the Relative Uncertainty in Defaults for all triggers is:

$$\beta = \sum_1^s \beta^t = \frac{\sum_1^s d^t}{D}, \quad \text{for } t \in [1, s], \text{ and } d^t \geq 0$$

and the Relative Uncertainty in Non-default Observations is:

$$\alpha = \sum_1^s \alpha^t = \frac{\sum_1^s m^t}{N}, \quad \text{for } t \in [1, s], \text{ and } m^t \geq 0$$

The appropriate adjustment for the 1-year average default rate, considering uncertainty in all
identified deficiency triggers, can be expressed as:

$$AA = \frac{1 + \sum_1^s \beta^t}{1 + \sum_1^s \alpha^t}, \quad \text{for } t \in [1, s], \text{ and } \beta^t, \alpha^t, \sum_1^s \beta^t, \sum_1^s \alpha^t > -1 \qquad (15)$$

Hence the best estimate 1-year default rate, considering uncertainty in all identified deficiency
triggers, is:

$$\widetilde{DR}^{1\text{-}year} = DR^{1\text{-}year} \cdot AA = DR^{1\text{-}year} \cdot \frac{1 + \sum_1^s \beta^t}{1 + \sum_1^s \alpha^t}, \quad \text{for } t \in [1, s], \text{ and } \beta^t, \alpha^t, \sum_1^s \beta^t, \sum_1^s \alpha^t > -1 \qquad (16)$$

Here the uncertainty in each identified deficiency is defined as $\beta^t$ and $\alpha^t$, relative to $D$ and $N$
respectively. Note that the condition $\beta^t, \alpha^t > -1$ is a logical condition rather than a mathematical
one, as $\beta^t = -1$ or $\alpha^t = -1$ would mean one does not trust the entirety of the default or non-default
observations.

Proof.



• Equation (15) is obtained by re-writing Equation (12) with trigger level notations.

• Equation (16) is obtained by re-writing Equation (13) with trigger level notations.

Eq (15) and (16) q.e.d.

5 Category C Deficiencies: the General Estimation Error

Section 5.3.5 of the EBA Guidelines [1] details the requirements for calibration and sets the
calibration target to the long-run average default rate. Paragraphs 89 and 90 clarify that the
calibration is to be conducted before the application of MoC or floors, and that master scale
mapping may be performed during the calibration. In Paragraph 91, Article 169(3) of Regulation (EU)
No 575/2013 is referenced, assuming a continuous rating scale in the case of direct use of
estimated risk parameters.

In this paper, we explore the common approach of calibrating to a pre-defined master scale.
The analysis presented here can be altered to cover other types of calibration as mentioned in
Paragraph 91.

5.1 Calibration Target and Output

Targeting the long-run average default rate, before the application of MoC, we can express the
targeting function mathematically as follows:

$$\frac{\hat{n}_1 \cdot p_1 + \ldots + \hat{n}_b \cdot p_b + \ldots + \hat{n}_B \cdot p_B}{n} = DR^{long\text{-}run} \qquad (17)$$

$n$: size of the calibration sample

$b, B$: $B$ is the total number of master scale grades/ranks; $b \in [1, B]$ indicates the $b$th
grade/rank

$p_b$: the probability of default assigned to the $b$th master scale grade/rank

$\hat{n}_b$: calibrated population for the $b$th master scale grade/rank, $\sum_1^B \hat{n}_b = n$

$DR^{long\text{-}run}$: long-run average default rate

To avoid influencing the rank ordering of obligors in the calibration sample, we consider only the
estimated and calibrated results of the calibration exercise. Table 3 conceptually illustrates the
calibrated result compared to the calibration target and highlights the types of uncertainty
on which we focus for the rest of this section.

For the purpose of the MoC framework, we propose to classify the general estimation error into two
types:

$c_1$: error in rank ordering estimation, with $c_1$ denoting the quantified measure of this type of error;

$c_2$: error in calibration, with $c_2$ denoting the quantified measure of this type of error.



Table 3: Illustration of calibrated results vs. the sample target

                 Target PD          Calibrated PD
sample_1         $p^{s_1}$          $\hat{p}^{s_1}$
sample_2         $p^{s_2}$          $\hat{p}^{s_2}$
...              ...                ...
sample_{n-1}     $p^{s_{n-1}}$      $\hat{p}^{s_{n-1}}$
sample_n         $p^{s_n}$          $\hat{p}^{s_n}$

($c_2$ measures the error within each target/calibrated pair; $c_1$ measures the rank ordering error across sample pairs.)

5.2 Error in Rank Ordering Estimation

Proposition 4 The error in rank ordering estimation is quantified by the PD difference of the
discordant ranks:

$$c_1 = \frac{2 \cdot \sum^{n_{dis}} |\delta \hat{p}|}{\sqrt{[n(n-1) - \sum_i t_i(t_i - 1)][n(n-1) - \sum_j u_j(u_j - 1)]}} \qquad (18)$$

$\delta \hat{p}$: the difference of calibrated PD between a discordant pair, e.g. $\hat{p}^{s_n} - \hat{p}^{s_1}$ in
Table 3 if the rank ordering is discordant for the pair;

$n$: size of the portfolio

$n_{dis}$: number of discordant pairs;

$t_i$: number of tied values in the $i$th group of ties for the target rank

$u_j$: number of tied values in the $j$th group of ties for the calibrated rank

Inspired by the well-known Kendall's $\tau$ correlation coefficient, we propose the calculation in
the form of Equation (18). This is simply the total PD difference affected by the rank ordering
error, normalized by the total number of non-tied observation pairs.

A detailed definition of discordant pairs and the counting of ties can be found in the literature
introducing Kendall's $\tau$, and is hence not discussed here. Note that the constant 2 in the
numerator is sometimes instead absorbed into the denominator, depending on the choice of
implementation.
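A minimal sketch of Equation (18) follows. The brute-force pair enumeration, the tau-b-style tie counts, and the example PDs are illustrative assumptions only; a production implementation would use an O(n log n) algorithm for large portfolios.

```python
# Sketch of Eq (18); pair enumeration and tau-b style tie handling are
# assumptions for illustration, not the paper's reference implementation.
from itertools import combinations
from collections import Counter

def c1_rank_error(target_pd, calibrated_pd):
    """c_1: total |delta p_hat| over discordant pairs, tie-adjusted (Eq 18)."""
    n = len(target_pd)
    discord_sum = 0.0
    for i, j in combinations(range(n), 2):
        s_t = target_pd[i] - target_pd[j]          # ordering under the target rank
        s_c = calibrated_pd[i] - calibrated_pd[j]  # ordering under the calibrated rank
        if s_t * s_c < 0:                          # discordant pair
            discord_sum += abs(s_c)                # |delta p_hat| for this pair
    ties_t = sum(t * (t - 1) for t in Counter(target_pd).values())
    ties_c = sum(u * (u - 1) for u in Counter(calibrated_pd).values())
    denom = ((n * (n - 1) - ties_t) * (n * (n - 1) - ties_c)) ** 0.5
    return 2.0 * discord_sum / denom

# hypothetical 3-sample portfolio with one discordant pair (samples 2 and 3):
c1_demo = c1_rank_error([0.01, 0.02, 0.03], [0.01, 0.04, 0.03])
```

A perfectly concordant calibration yields $c_1 = 0$, so the measure only charges for PD mass that the rank ordering error actually moves.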

5.3 Error in Calibration

Proposition 5 The estimation error considered here is the dispersion of the calibrated mean
PD:

$$c_2 = \sqrt{\frac{1}{n} \cdot \left[\sum_1^B \omega_b \cdot p_b^2 - (DR^{long\text{-}run})^2\right]} \qquad (19)$$



$\omega_b$ is the population weight after calibration for the $b$th grade/rank of the master scale.

Proof. The quantified dispersion, $c_2$, is the standard deviation of the calibrated mean PD.
For a successful calibration, supported by quantitative and qualitative tests as mentioned in
Paragraph 87, the calibrated mean PD is expected to be equal to the calibration target,
$DR^{long\text{-}run}$.

Here we briefly explain the rationale of this proposal. Recall the target function of the
calibration exercise as shown in Equation (17):

$$\frac{\hat{n}_1 \cdot p_1 + \ldots + \hat{n}_b \cdot p_b + \ldots + \hat{n}_B \cdot p_B}{n} = DR^{long\text{-}run}$$

Simplifying the notation of $DR^{long\text{-}run}$ to $T$ only for this proof, we can re-write the above
equation as:

$$\frac{\hat{n}_1 \cdot (p_1 - T) + \ldots + \hat{n}_b \cdot (p_b - T) + \ldots + \hat{n}_B \cdot (p_B - T)}{n} = 0 \qquad (20)$$

Denote the variables $x_b = p_b - T$; given that $\omega_b = \hat{n}_b / n$ for $b \in [1, B]$, the left-hand side of
Equation (20) is simply the weighted average of the variable $x_b$: $\sum_1^B \omega_b x_b$. We can calculate the
standard deviation of the weighted mean as follows, noting that $\bar{x} = \sum_1^B \omega_b x_b = 0$:

$$\sigma = \sqrt{\sum_1^B \omega_b^2 \sigma_b^2} = \sqrt{\sum_1^B \omega_b^2 \left(\frac{x_b - \bar{x}}{\sqrt{\hat{n}_b}}\right)^2} = \sqrt{\frac{1}{n} \cdot \left[\sum_1^B \omega_b \cdot p_b^2 - T^2\right]} \qquad (21)$$

Eq (19) q.e.d.

Therefore, the mean of the calibrated PD lies in the range $DR^{long\text{-}run} \pm \sigma$.
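Equation (19) can be sketched numerically. The master scale PDs and weights below are hypothetical, and the check relies on the identity $\sum_b \omega_b (p_b - T)^2 = \sum_b \omega_b p_b^2 - T^2$ used in the proof above:

```python
# Sketch of Eq (19) on a hypothetical master scale; the equality checked at
# the end is the identity sum_b w_b (p_b - T)^2 = sum_b w_b p_b^2 - T^2.
n = 5_000                            # calibration sample size
grade_pd = [0.005, 0.02, 0.08]       # master scale PDs p_b
weights = [0.5, 0.3, 0.2]            # calibrated weights w_b = n_b / n

T = sum(w * p for w, p in zip(weights, grade_pd))   # calibrated mean = target
c2 = ((sum(w * p**2 for w, p in zip(weights, grade_pd)) - T**2) / n) ** 0.5

# equivalent direct dispersion of the grade PDs around the target:
c2_alt = (sum(w * (p - T)**2 for w, p in zip(weights, grade_pd)) / n) ** 0.5
assert abs(c2 - c2_alt) < 1e-12
```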

Part II. LGD Estimation

6 Mathematical Definition of LGD Related Parameters

The calculation of economic loss and realised LGD is defined in Section 6.3.1 of the EBA
Guidelines [1].

——————————————— quotation(start)

131. For the purpose of LGD estimation as referred to in Article 181(1)(a) of


Regulation (EU) No 575/2013, institutions should calculate realised LGDs for each
exposure, as referred to in point (55) of Article 4(1) of that Regulation, as a ratio of the
economic loss to the outstanding amount of the credit obligation at the moment of
default, including any amount of principal, interest or fee.

———————————————– quotation(end)

19

Electronic copy available at: https://ssrn.com/abstract=3258825


Hence the realised LGD is:

$$rLGD_i = \frac{L_i}{E_i} \qquad (22)$$

where:

$i$: the $i$th exposure

$rLGD_i$: the realised LGD for exposure $i$

$L_i$: economic loss for exposure $i$, as defined in Paragraphs 132 and 133

$E_i$: outstanding amount at the time of default, as defined in Paragraphs 134 and 135

Section 6.3.2.2, and in particular Paragraph 150, defines the calculation of the long-run average
LGD.

——————————————— quotation(start)

150. Without prejudice to Article 181(2) of Regulation (EU) No 575/2013 institutions


should calculate the long-run average LGD as an arithmetic average of realised LGDs
over a historical observation period weighted by a number of defaults. Institutions should
not use for that purpose any averages of LGDs calculated on a subset of observations, in
particular any yearly average LGDs, unless they use this method to reflect higher weights
of more recent data on retail exposures in accordance with Article 181(2) of Regulation
(EU) No 575/2013.

———————————————– quotation(end)

The long-run average LGD is obtained by:

$$LGD^{long\text{-}run} = \frac{rLGD_1 + \ldots + rLGD_i + \ldots + rLGD_M}{M} \qquad (23)$$

where:

$LGD^{long\text{-}run}$: the long-run average LGD for the historical observation period

$M, i$: $M$ is the total number of defaults in the scope of LGD estimation during the historical
observation period, $i \in [1, M]$

7 Notation Update and Application for LGD Estimation

Category A and B Deficiencies

Omitting the exposure indicator $i$ for simplicity, with $\widetilde{rLGD}$ as the best estimate of the
realised LGD, we have:

$$\widetilde{rLGD} = \frac{L + l^t}{E + e^t} \qquad (24)$$

where $l^t$ and $e^t$ stand for the uncertain economic loss and outstanding exposure caused by
trigger $t$ as listed in Paragraph 37. We can further define the relative uncertainty in loss as
$\beta^t = l^t / L$ and the relative uncertainty in exposure as $\alpha^t = e^t / E$.

Recalling Equations (15) and (16) with the updated definition of notation for LGD estimation, we use
a subscript to highlight the application to LGD estimation in this instance:

$$AA_{rLGD} = \frac{1 + \sum_1^s \beta^t}{1 + \sum_1^s \alpha^t}, \quad \text{for } t \in [1, s], \text{ and } \beta^t, \alpha^t, \sum_1^s \beta^t, \sum_1^s \alpha^t > -1$$

$$\widetilde{rLGD} = rLGD \cdot AA_{rLGD} = rLGD \cdot \frac{1 + \sum_1^s \beta^t}{1 + \sum_1^s \alpha^t}, \quad \text{for } t \in [1, s], \text{ and } \beta^t, \alpha^t, \sum_1^s \beta^t, \sum_1^s \alpha^t > -1$$

Hence the general form of the appropriate adjustment and best estimate for identified deficiencies
in Categories A and B is applicable to LGD estimation.

Category C Deficiencies

With minimal updates to Table 3, we illustrate the Category C uncertainty of LGD estimation
in Table 4.

Table 4: Illustration of calibrated results vs. the sample target for LGD estimation

                 Target LGD            Calibrated LGD
sample_1         $rLGD^{s_1}$          $\widehat{LGD}^{s_1}$
sample_2         $rLGD^{s_2}$          $\widehat{LGD}^{s_2}$
...              ...                   ...
sample_{n-1}     $rLGD^{s_{n-1}}$      $\widehat{LGD}^{s_{n-1}}$
sample_n         $rLGD^{s_n}$          $\widehat{LGD}^{s_n}$

($c_2$ measures the error within each target/calibrated pair; $c_1$ measures the rank ordering error across sample pairs.)

We find that Equation (18) applies to LGD estimation, using the difference of LGD in the discordant
ranks instead of PD:

$$c_1^{LGD} = \frac{2 \cdot \sum^{n_{dis}} |\delta \widehat{lgd}|}{\sqrt{[n(n-1) - \sum_i t_i(t_i - 1)][n(n-1) - \sum_j u_j(u_j - 1)]}} \qquad (25)$$

Here $\delta \widehat{lgd}$ is the difference of calibrated LGD between a discordant pair, e.g. $\widehat{LGD}^{s_n} - \widehat{LGD}^{s_1}$
in Table 4 if the rank ordering is discordant for the pair.

With consideration of the requirements stated in Section 6.3.2.2, the target function for the
calibration exercise for LGD is expressed as follows:

$$\frac{\widehat{LGD}^{s_1} + \ldots + \widehat{LGD}^{s_j} + \ldots + \widehat{LGD}^{s_n}}{n} = LGD^{long\text{-}run} \qquad (26)$$

where $\widehat{LGD}^{s_j}$ is the LGD estimated and calibrated for the $j$th calibration sample.

The target function of calibration, Equation (26), is the size-$n$ equal-weighted average of the
calibrated LGD. Hence the standard deviation for the sample mean $\hat{\mu}$ after calibration can be
obtained by:

$$c_2^{LGD} = \sigma_{\hat{\mu}} = \sqrt{\sum_{j=1}^n \omega_j^2 \sigma_{\widehat{LGD}}^2} = \sigma_{\widehat{LGD}} \cdot \sqrt{\sum_{j=1}^n \left(\frac{1}{n}\right)^2} = \frac{\sigma_{\widehat{LGD}}}{\sqrt{n}} \qquad (27)$$
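The equal-weight reduction in Equation (27) can be checked directly; the calibrated LGD values below are hypothetical:

```python
# Sketch: with equal weights w_j = 1/n, the weighted form of Eq (27) reduces
# to the standard error sigma / sqrt(n); LGD values are hypothetical.
import statistics

lgd_hat = [0.25, 0.40, 0.10, 0.35, 0.30]    # calibrated LGDs per sample
n = len(lgd_hat)
sigma = statistics.pstdev(lgd_hat)          # dispersion of the calibrated LGDs

c2_weighted = sum((1 / n) ** 2 * sigma**2 for _ in lgd_hat) ** 0.5
c2_simple = sigma / n**0.5
assert abs(c2_weighted - c2_simple) < 1e-12
```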

Part III. CCF Estimation

8 Math Expression of CCF

The requirement of MoC for the Credit Conversion Factor (CCF) is specified in the ECB TRIM
Guide [2], Paragraph 100:

——————————————— quotation(start)

100. Institutions are expected to have in place a MoC framework in line with the EBA
CP on GLs 23 to 35. This principle is also applicable to the estimation of CCFs.

———————————————– quotation(end)

To clarify, all of the "Paragraphs" referenced in this section refer to the regulatory text in
the ECB TRIM Guide [2] document by default, unless specified otherwise.

Realised CCF

The estimation approach for CCF is detailed in Paragraphs 92 and 94:

——————————————— quotation(start)

92. Realised conversion factors should be calculated at facility level.

94. The EAD for undrawn commitments is calculated as the committed but undrawn
amount multiplied by a CCF. CCFs can also be derived from direct estimates of total
facility EAD.

———————————————– quotation(end)

Thus, the realised CCF from empirical data is calculated as:

$$rCCF_r = \frac{EAD^{undrawn}}{C_r^{undrawn}} \qquad (28)$$

where:

$EAD^{undrawn}$: the EAD for undrawn commitments, at the time of default

$r$: reference date $r$ for the realised CCF calculation

$C_r^{undrawn}$: committed but undrawn amount on reference date $r$; this is equal to the
total committed limit $Limit_r$ minus the utilized drawn amount, i.e.
$C_r^{undrawn} = Limit_r - Drawn_r$

$rCCF_r$: realised CCF calculated for reference date $r$

The above expression agrees with the widely known Loan Equivalent (LEQ) parameter, in which the
total EAD is defined as the drawn amount plus the undrawn amount multiplied by an LEQ factor.
Qi [5] defines this approach as follows:

$$EAD = Drawn_r + EAD^{undrawn} = Drawn_r + LEQ \cdot (Limit_r - Drawn_r)$$

Hence, the LEQ factor can be obtained as:

$$LEQ = \frac{EAD - Drawn_r}{Limit_r - Drawn_r} = \frac{EAD^{undrawn}}{C_r^{undrawn}} = rCCF_r \qquad (29)$$

Meanwhile, the CCF as defined in alternative estimation approaches may come in forms other than
Equations (28) and (29) above. For example, Moral [3] suggested the CCF as the ratio of EAD to
the total committed amount $Limit_r$:

$$CCF_r = \frac{EAD}{Limit_r}$$

while Jacobs [4] defined the CCF as EAD weighted by the drawn amount $Drawn_r$ at reference
time $r$:

$$CCF_r = \frac{EAD}{Drawn_r}$$
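The relations among Equations (28)-(29) and the alternative definitions can be illustrated on a hypothetical facility (all amounts below are invented for illustration):

```python
# Hypothetical facility illustrating Eqs (28)-(29) and the alternative
# definitions attributed to Moral [3] and Jacobs [4].
limit_r, drawn_r = 1_000.0, 600.0   # committed limit and drawn amount at r
ead = 900.0                         # observed EAD at default

c_undrawn = limit_r - drawn_r       # committed but undrawn amount, 400.0
ead_undrawn = ead - drawn_r         # EAD attributed to the undrawn part, 300.0

rccf = ead_undrawn / c_undrawn                  # Eq (28): 0.75
leq = (ead - drawn_r) / (limit_r - drawn_r)     # Eq (29): identical by construction
assert rccf == leq

ccf_moral = ead / limit_r           # Moral [3]: EAD / Limit_r = 0.9
ccf_jacobs = ead / drawn_r          # Jacobs [4]: EAD / Drawn_r = 1.5
```

Note how the three definitions give materially different factors for the same facility, which is why the MoC notation below only assumes that the chosen CCF is a ratio.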

Long-run Average CCF

The reference date and estimation approach of the long-run average could differ according to
the choice of reference date of risk drivers. As mentioned in Paragraph 97:

——————————————— quotation(start)

97. Institutions should analyse the risk drivers not only at twelve months prior to
default (the fixed horizon approach) but also within the year before default (the cohort
approach). When choosing the appropriate reference date for a risk driver, institutions
should take into account its volatility over time.

———————————————– quotation(end)



The reference date for the fixed horizon approach is exactly 12 months prior to default. The
estimated long-run average CCF for this approach is the average of the realised facility-level
CCFs in the historical observation period:

$$CCF_{fth}^{long\text{-}run} = \frac{rCCF_1 + \ldots + rCCF_i + \ldots + rCCF_M}{M} \qquad (30)$$

where:

$CCF_{fth}^{long\text{-}run}$: the fixed time horizon approach long-run average CCF for the historical
observation period

$M, i$: $M$ is the total number of defaulted facilities in the scope of CCF estimation during the
historical observation period, $i \in [1, M]$

The reference date for the cohort approach is the first day of the cohort year for all defaults
observed in the following 12 months. In this case the long-run average CCF is the average of the
annual average CCFs over the years in the historical observation period:

$$CCF_{cohort}^{long\text{-}run} = \frac{rCCF_1 + \ldots + rCCF_y + \ldots + rCCF_Y}{Y} \qquad (31)$$

$CCF_{cohort}^{long\text{-}run}$: the cohort approach long-run average CCF for the historical observation
period

$rCCF_y$: the cohort approach annual average CCF for year $y$

$Y, y$: $Y$ is the total number of cohort years in the historical observation period, $y \in [1, Y]$
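The two long-run averages of Equations (30) and (31) can be sketched side by side; the realised CCFs grouped by cohort year below are hypothetical:

```python
# Sketch of Eqs (30)-(31) on hypothetical realised CCFs grouped by cohort year.
rccf_by_cohort_year = {
    2016: [0.6, 0.8],
    2017: [0.5],
    2018: [0.7, 0.9, 0.4],
}

# fixed time horizon approach: average over all defaulted facilities, Eq (30)
all_rccf = [c for year in rccf_by_cohort_year.values() for c in year]
ccf_fth = sum(all_rccf) / len(all_rccf)           # = 0.65 for this sample

# cohort approach: average of the annual average CCFs, Eq (31)
annual_means = [sum(v) / len(v) for v in rccf_by_cohort_year.values()]
ccf_cohort = sum(annual_means) / len(annual_means)
```

The two averages differ whenever the number of defaults varies across cohort years, which is exactly the weighting difference between the two approaches.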

9 Notation Update and Application for CCF Estimation

Category A and B Deficiencies

As shown above, the CCF estimation approach defined in Paragraph 94, as well as several commonly
accepted CCF estimation approaches in the literature, all define the CCF factor in the form of a
ratio.

Hence, define $\beta^t$ as the relative uncertainty in the numerator of the CCF calculation, i.e.
$EAD$ or $EAD^{undrawn}$, and define $\alpha^t$ as the relative uncertainty in the denominator, i.e.
$C_r^{undrawn}$, $Limit_r$ or $Drawn_r$.

Recalling Equations (15) and (16) with the updated definition of notation for CCF estimation, we use
a subscript to highlight the application to CCF estimation in this instance:

$$AA_{rCCF} = \frac{1 + \sum_1^s \beta^t}{1 + \sum_1^s \alpha^t}, \quad \text{for } t \in [1, s], \text{ and } \beta^t, \alpha^t, \sum_1^s \beta^t, \sum_1^s \alpha^t > -1$$

$$\widetilde{rCCF} = rCCF \cdot AA_{rCCF} = rCCF \cdot \frac{1 + \sum_1^s \beta^t}{1 + \sum_1^s \alpha^t}, \quad \text{for } t \in [1, s], \text{ and } \beta^t, \alpha^t, \sum_1^s \beta^t, \sum_1^s \alpha^t > -1$$

Hence the general form of the appropriate adjustment and best estimate for identified deficiencies
in Categories A and B is applicable to CCF estimation.

Category C Deficiencies

Similar to the previous illustrations shown in Tables 3 and 4, with updated notations
Equation (18) applies to CCF estimation:

$$c_1^{CCF} = \frac{2 \cdot \sum^{n_{dis}} |\delta \widehat{ccf}|}{\sqrt{[n(n-1) - \sum_i t_i(t_i - 1)][n(n-1) - \sum_j u_j(u_j - 1)]}} \qquad (32)$$

Here $\delta \widehat{ccf}$ is the difference of calibrated CCF between a discordant pair.

The target function for the calibration exercise for CCF is comparable with that of the LGD
estimation:

$$\frac{\widehat{CCF}^{s_1} + \ldots + \widehat{CCF}^{s_j} + \ldots + \widehat{CCF}^{s_n}}{n} = CCF^{long\text{-}run} \qquad (33)$$

where $\widehat{CCF}^{s_j}$ is the CCF estimated and calibrated for the $j$th calibration sample.

The standard deviation for the sample mean $\hat{\mu}$ after calibration can be obtained by:

$$c_2^{CCF} = \sigma_{\hat{\mu}} = \sqrt{\sum_{j=1}^n \omega_j^2 \sigma_{\widehat{CCF}}^2} = \sigma_{\widehat{CCF}} \cdot \sqrt{\sum_{j=1}^n \left(\frac{1}{n}\right)^2} = \frac{\sigma_{\widehat{CCF}}}{\sqrt{n}} \qquad (34)$$

Part IV. Margin of Conservatism Framework

10 General Form Results for PD, LGD and CCF Estimation

For the sake of generalization, we use RP to denote the Risk Parameter in either PD, LGD or
CCF estimation, where $\widetilde{RP}$ is its best estimate and $\widehat{RP}$ is the calibrated result.

Appropriate Adjustment

$$AA = \frac{1 + \sum_1^s \beta^t}{1 + \sum_1^s \alpha^t}, \quad \text{for } t \in [1, s], \text{ and } \beta^t, \alpha^t, \sum_1^s \beta^t, \sum_1^s \alpha^t > -1 \qquad (35)$$

Best Estimate

$$\widetilde{RP} = RP \cdot AA = RP \cdot \frac{1 + \sum_1^s \beta^t}{1 + \sum_1^s \alpha^t}, \quad \text{for } t \in [1, s], \text{ and } \beta^t, \alpha^t, \sum_1^s \beta^t, \sum_1^s \alpha^t > -1 \qquad (36)$$



Category C: general estimation error

$$c_1 = \frac{2 \cdot \sum^{n_{dis}} |\delta \widehat{RP}|}{\sqrt{[n(n-1) - \sum_i t_i(t_i - 1)][n(n-1) - \sum_j u_j(u_j - 1)]}} \qquad (37)$$

$$c_2 = \sigma_{\hat{\mu}} = \sqrt{\sum_{b=1}^B \omega_b^2 \sigma_{\widehat{RP}}^2}, \quad B \text{ and } b \text{ for both pre-defined and continuous grades} \qquad (38)$$

11 Final Margin of Conservatism

MoC for Category A and B: $MoC_{A,B}$

We start by calculating the value of the appropriate adjustment per trigger level, for $\tau \in [1, s]$ and
$t \in [1, \tau]$:

$$\begin{cases} \delta \widetilde{RP}\big|_{s-1}^{s}: & RP \cdot \left(\dfrac{1 + \sum_1^s \beta^t}{1 + \sum_1^s \alpha^t} - \dfrac{1 + \sum_1^{s-1} \beta^t}{1 + \sum_1^{s-1} \alpha^t}\right) = RP \cdot (AA^s - AA^{s-1}) = RP \cdot \delta AA\big|_{s-1}^{s} \\ \cdots & \cdots \\ \delta \widetilde{RP}\big|_{\tau-1}^{\tau}: & RP \cdot \left(\dfrac{1 + \sum_1^{\tau} \beta^t}{1 + \sum_1^{\tau} \alpha^t} - \dfrac{1 + \sum_1^{\tau-1} \beta^t}{1 + \sum_1^{\tau-1} \alpha^t}\right) = RP \cdot (AA^{\tau} - AA^{\tau-1}) = RP \cdot \delta AA\big|_{\tau-1}^{\tau} \\ \cdots & \cdots \\ \delta \widetilde{RP}\big|_{0}^{1}: & RP \cdot \left(\dfrac{1 + \beta^1}{1 + \alpha^1} - \dfrac{1 + 0}{1 + 0}\right) = RP \cdot (AA^1 - AA^0) = RP \cdot \delta AA\big|_{0}^{1} \end{cases}$$

We see that $\delta \widetilde{RP}\big|_{\tau-1}^{\tau} \geq 0$ conditional on $\delta AA\big|_{\tau-1}^{\tau} \geq 0$.

Denote by $\delta AA^+$ the steps with $\delta AA \geq 0$, and by $\delta AA^-$ the steps with $\delta AA < 0$. Correspondingly:

$$\begin{cases} \delta \widetilde{RP}^+ & \text{if } \delta AA^+ \\ \delta \widetilde{RP}^- & \text{if } \delta AA^- \end{cases}$$

Hence we decompose the value change due to Category A and B uncertainties into the following
two parts:

- the quantity increased due to identified deficiencies:
  This type of increase is handled by the appropriate adjustment used to obtain the best
  estimate of the risk parameter, i.e. $\sum RP \cdot \delta AA^+$, and is therefore already embedded in $\widetilde{RP}$.

- the quantity that was not decreased, as a conservative choice of the bank:
  This is equal to the sum of all decreases arising from the appropriate adjustments applied, i.e.
  $\sum RP \cdot \delta AA^-$.

Hence the final MoC for Category A and B is:

$$MoC_{A,B} = -\sum RP \cdot \delta AA^- \qquad (39)$$
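The per-trigger decomposition above lends itself to a simple sequential implementation. The sketch below uses hypothetical trigger uncertainties and accumulates only the negative $\delta AA$ steps into $MoC_{A,B}$ per Equation (39):

```python
# Sketch of the sequential trigger processing above; the risk parameter value
# and per-trigger uncertainties are hypothetical.
def moc_a_b(rp, betas, alphas):
    """Collect the decreases in AA that were not taken, per Eq (39)."""
    aa_prev = 1.0                       # AA^0 = 1, no triggers applied yet
    beta_cum = alpha_cum = 0.0
    moc = 0.0
    for b, a in zip(betas, alphas):     # process triggers t = 1 .. s in order
        beta_cum += b
        alpha_cum += a
        aa = (1.0 + beta_cum) / (1.0 + alpha_cum)
        d_aa = aa - aa_prev             # delta AA contributed by this trigger
        if d_aa < 0:                    # a decrease not taken, conservatively
            moc += -rp * d_aa
        aa_prev = aa
    return moc

# trigger 1 raises AA (embedded in the best estimate); trigger 2 lowers it,
# and only that decrease enters MoC_{A,B}:
moc_demo = moc_a_b(0.04, betas=[0.10, 0.00], alphas=[0.00, 0.08])
```

Processing triggers one at a time in this way also gives the per-trigger change tracking discussed in the conclusions.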



MoC for Category C: $MoC_C$

As discussed in the previous section, the Category C uncertainties are measured by $c_1$ and $c_2$.
Meanwhile, as shown in Tables 3 and 4, the calibration/estimation error ($c_2$) acts within each
sample-vs-calibrated pair, while the rank ordering error ($c_1$) acts across sample pairs; the two
are therefore independent of each other.

Hence the final MoC for Category C is:

$$MoC_C = c_1 + c_2 \qquad (40)$$

Note that c1 and c2 estimated here are subject to the realised target risk parameter.

Final MoC

We quote here Paragraph 44, 45 and 46 regarding the final MoC:

——————————————— quotation(start)

44. For the purpose of paragraph 43(a) and for each of the categories A and B,
institutions may group all or selected deficiencies, where justified, for the purpose of
quantifying MoC.

45. Institutions should quantify the final MoC as the sum of:

1) the MoC under category A as referred to in paragraph 43(a);

2) the MoC under category B as referred to in paragraph 43(a);

3) the MoC for the general estimation error (category C) as referred to in paragraph
43(b).

46. Institutions should add the final MoC to the best estimate of the risk parameter.

———————————————– quotation(end)

Note that so far we have $MoC_{A,B}$ at the individual realised risk parameter level, e.g. the 1-year
default rate, single exposure LGD, or single facility CCF. Meanwhile, $MoC_C$ is subject to
the target long-run average risk parameter.

In order to obtain the final MoC, which can be used as the margin of conservatism for future
estimated risk parameters, we first calculate the best estimate long-run risk parameter as
follows:

$$\widetilde{RP}^{long\text{-}run} = \frac{\widetilde{RP}_1 + \widetilde{RP}_2 + \ldots + \widetilde{RP}_{LRO}}{LRO} = \frac{RP_1 \cdot AA_1 + RP_2 \cdot AA_2 + \ldots + RP_{LRO} \cdot AA_{LRO}}{LRO} \qquad (41)$$



Here LRO stands for "Long-Run Observations": in the case of the long-run default rate, this is the
number of observation years in the historical observation period; for the long-run LGD or CCF
calculation, this is the number of realised LGDs or CCFs.

Similar to Equation (7), we define the best estimate long-run average risk parameter as the raw
long-run average risk parameter multiplied by the long-run appropriate adjustment:

$$\widetilde{RP}^{long\text{-}run} = RP^{long\text{-}run} \cdot AA^{long\text{-}run} \quad \Rightarrow \quad AA_{A,B}^{long\text{-}run} = \frac{\widetilde{RP}^{long\text{-}run}}{RP^{long\text{-}run}} \qquad (42)$$

Note that the calculation using Equation (42) can be performed recursively through the list of
uncertainty triggers to obtain the trigger-level long-run appropriate adjustment for each identified
deficiency.

Next, we calculate the best estimate long-run risk parameter with individual MoC:

$$\widetilde{RP}_{MoC}^{long\text{-}run} = \frac{(\widetilde{RP}_1 + MoC_{A,B}^1) + (\widetilde{RP}_2 + MoC_{A,B}^2) + \ldots + (\widetilde{RP}_{LRO} + MoC_{A,B}^{LRO})}{LRO}$$
$$= \frac{(RP_1 \cdot AA_1 + MoC_{A,B}^1) + \ldots + (RP_{LRO} \cdot AA_{LRO} + MoC_{A,B}^{LRO})}{LRO}$$
$$= \widetilde{RP}^{long\text{-}run} + \left[\frac{MoC_{A,B}^1 + \ldots + MoC_{A,B}^{LRO}}{LRO}\right] \qquad (43)$$

Note that we arrange Equation (43) in this way because it is important to highlight that the
long-run MoC for Categories A and B is NOT to be expressed as the second term in the last row
of the equation:

$$MoC_{A,B}^{long\text{-}run} \neq \frac{MoC_{A,B}^1 + \ldots + MoC_{A,B}^{LRO}}{LRO} \qquad (44)$$

Recall that we define the margin of conservatism in this framework as the trigger-level negative
adjustment that was not decreased. Hence the long-run MoC for Category A and B, subject
to the raw estimate historical risk parameter $RP^{long\text{-}run}$, is:

$$MoC_{A,B}^{long\text{-}run} = -\sum_{t=1}^{s} \left[\delta \widetilde{RP}^{long\text{-}run}\right]^-, \quad \text{for triggers } t \in [1, s] \qquad (45)$$

The relative final MoC for Categories A and B, $RMoC_{A,B}^{long\text{-}run}$, is calculated as:

$$RMoC_{A,B}^{long\text{-}run} = \frac{MoC_{A,B}^{long\text{-}run}}{RP^{long\text{-}run}} \qquad (46)$$

Now we calculate the MoC for Category C, also relative to the raw estimate historical risk
parameter $RP^{long\text{-}run}$, as follows:

$$RMoC_C^{long\text{-}run} = \frac{MoC_C^{long\text{-}run}}{RP^{long\text{-}run}} = \frac{c_1 + c_2}{RP^{long\text{-}run}} \qquad (47)$$



With Equations (42), (46) and (47), the model estimated risk parameter mRP, with
appropriate adjustment and final MoC, denoted $\widetilde{mRP}_{MoC}$, can be calculated as:

$$\widetilde{mRP}_{MoC} = mRP \cdot AA_{A,B}^{long\text{-}run} + mRP \cdot \left(RMoC_{A,B}^{long\text{-}run} + RMoC_C^{long\text{-}run}\right) \qquad (48)$$
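The final assembly of Equation (48) can be sketched end to end; every input value below is hypothetical and stands in for the outputs of the earlier steps:

```python
# End-to-end sketch of Eq (48); every input value below is hypothetical.
rp_long_run = 0.040        # raw long-run average risk parameter RP^{long-run}
rp_best_long_run = 0.044   # best estimate after trigger-level AAs, Eq (41)
moc_ab_long_run = 0.002    # Category A/B long-run MoC, Eq (45)
c1, c2 = 0.001, 0.0005     # Category C errors on the long-run target

aa_long_run = rp_best_long_run / rp_long_run     # AA^{long-run}_{A,B}, Eq (42)
rmoc_ab = moc_ab_long_run / rp_long_run          # relative A/B MoC, Eq (46)
rmoc_c = (c1 + c2) / rp_long_run                 # relative Category C MoC, Eq (47)

m_rp = 0.035               # model estimated risk parameter for an obligor
m_rp_moc = m_rp * aa_long_run + m_rp * (rmoc_ab + rmoc_c)   # Eq (48)
```

Because both the adjustment and the MoC terms are expressed relative to the raw long-run parameter, the same multipliers can be applied to any future model estimate mRP.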

12 Conclusions

In this paper, we follow the EBA documents regarding the guidelines that apply from 1
January 2021 and propose a framework to quantify, document and monitor the impact of
uncertainties relevant to IRB PD, LGD and CCF estimation. Following the categorization
of deficiency types, we derived a general-form methodology for the appropriate adjustment, the
best estimate and the final MoC that is intuitive, flexible and transparent to the institution.

The framework comes with mathematical properties that are compliant with the guideline
requirements, not only in the MoC-related sections of the EBA Guidelines [1] but also in the
sections related to PD, LGD and CCF estimation and calibration.

When used as a measure of impact and to track the effect of remediation actions on the deficiency
triggers, the modularized quantification approach allows changes to be tracked per trigger, and
therefore allows timely monitoring of the uncertainty. Meanwhile, the framework supports processing
each deficiency indicator sequentially in implementation, which allows any changes observed in the
system to be captured and quantified at the desired frequency, for either reporting or modelling
purposes.

References
[1] EBA Guidelines. "Guidelines on PD estimation, LGD estimation and the treatment of
defaulted exposures". European Banking Authority, 23 April 2018.

[2] ECB TRIM Guide. "Guide for the Targeted Review of Internal Models (TRIM)".
European Central Bank, February 2017.

[3] G. Moral. "EAD Estimates for Facilities with Explicit Limits". In Engelmann, B. and
Rauhmeier, R., eds., The Basel II Risk Parameters: Estimation, Validation, and Stress
Testing, New York: Springer, pages 197-242.

[4] M. Jacobs Jr. "An Empirical Study of Exposure at Default". Office of the Comptroller of
the Currency, U.S. Department of the Treasury.

[5] Min Qi. "Exposure at default of unsecured credit cards". Office of the Comptroller of the
Currency, U.S. Department of the Treasury.

[6] Colin Mallows. "Another comment on O'Cinneide". The American Statistician, 45(3): 257.
