MoC Risk Parameters
Abstract. The EBA Guidelines on PD and LGD estimation are due to apply from 1 January 2021, under which banks are expected to have a framework in place, as part of the risk rating and reporting process, to adjust and correct the uncertainties identified from deficiencies in data, systems and methodology. The ECB Guide on the TRIM meanwhile states that the requirement of a Margin of Conservatism (MoC) also applies to CCF estimation. In this paper, we develop and present a consistent framework to quantify the identified uncertainties for the purpose of IRB risk parameter estimation.
Keywords: Advanced IRB, Long-run Default Rate, Long-run LGD, Central Default Tendency,
Risk Weighted Assets (RWA), Margin of Conservatism (MoC), Probability of Default (PD), Loss
Given Default (LGD), Credit Conversion Factor (CCF), Exposure at Default (EAD)
Yang Liu is a quantitative specialist at an international bank. Yang holds a doctorate in quantitative finance from Cass Business School, City University of London. He has published a number of papers on quantitative methods in risk and finance and has served as a reviewer for journals in the field.
The opinions expressed in this paper are those of the author only.
E-mail: [email protected]
——————————————— quotation(start)
36. Institutions should identify all deficiencies related to the estimation of risk
parameters that lead to a bias in the quantification of those parameters or to an increased
uncertainty that is not fully captured by the general estimation error, and classify each
deficiency into one of the following categories:
———————————————– quotation(end)
To identify the deficiencies as mentioned in Paragraph 36, banks are required to review a list
of potential sources of additional uncertainty as a minimum requirement, as stated in
Paragraph 37:
——————————————— quotation(start)
37. For the purposes of identifying and classifying all deficiencies referred to in
paragraph 36 institutions should take into account all relevant deficiencies in methods,
processes, controls, data or IT systems that have been identified by the credit risk control
1) under category A:
i) potential bias stemming from the choice of the approach to calculating the
average of observed one year default rates in accordance with paragraph 80;
k) missing information for the purpose of estimating loss rates or for the purpose
of reflecting economic downturn in LGD estimates;
2) under category B:
———————————————– quotation(end)
——————————————— quotation(start)
42. The final MoC on a risk parameter estimate should reflect the uncertainty of the
estimation in all of the following categories:
———————————————– quotation(end)
July 12, 2019: In this chapter, we briefly outline some design concepts and key properties of the presented framework. The purpose of this chapter, as a post-publication update, is to aid understanding of the framework and to address some commonly misunderstood concepts. There is no change in the quantification methodology between this update and the previously published version.
The risk parameters in scope are defined in the form of ratios of counting results: for example, number of defaults vs. total number of observations, currency units of value lost vs. units of value at risk, etc.
Often, such data are stored in different sources, or even groups of different sources, each with a different level of aggregation. Counting the units that make up the numerator and the denominator, respectively, indicates the data deficiencies in a transparent manner, and so does the quantification of their joint impact on the risk parameter.
In general, impact analysis of future market, economic or strategic changes on the risk parameter is challenging because of the different levels of influence an institution has over the numerator and the denominator. For example, in the case of the loss rate, while the lending institution has more control over the level of total exposure, it is less likely to have such influence over the amount of loss. Therefore, expert judgmental opinion on individual elements of the fraction should be considered alongside numerical evidence for the quantification of uncertainties.
The presented framework suggests and requires identifying and counting deficiencies by combing through individual data records. The follow-up quantification process starts with each identified deficiency trigger.
Appropriate Adjustment
The Appropriate Adjustment is an important term for the quantification of the MoC.
Paragraph 38 of the Guideline states that:
——————————————— quotation(start)
38. In order to overcome biases in risk parameter estimates stemming from the
identified deficiencies referred to in paragraphs 36 and 37, institutions should apply
adequate methodologies to correct the identified deficiencies to the extent possible. The
impact of these methodologies on the risk parameter (‘appropriate adjustment’), which
should result in a more accurate estimate of the risk parameter (‘best estimate’),
represents either an increase or a decrease in the value of the risk parameter. Institutions
should ensure and provide evidence that the application of an appropriate adjustment
results in a best estimate.
———————————————– quotation(end)
For example, in the case of a missing value in the data records, practitioners may choose to replace the missing value with the mean or median of the actual observed data. This is a methodology to correct the deficiency; the appropriate adjustment in this case, however, is the impact of this methodology on the risk parameter. The post-adjustment expectation of this impact is that the practitioner obtains a more accurate estimate of the risk parameter, the 'best estimate', by correcting the missing-data deficiency.
While the mean/median replacement methodology is vital for the deficiency correction, and should be documented precisely, it is a methodology of correction rather than the 'appropriate adjustment'.
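As a minimal illustration of this distinction, the sketch below (hypothetical data and column names, assuming pandas is available) applies a median-replacement correction and then measures its impact on the observed default rate; the measured impact, not the imputation rule, plays the role of the appropriate adjustment:

```python
import numpy as np
import pandas as pd

# Hypothetical dataset: a default flag and one risk driver with a missing value.
df = pd.DataFrame({
    "default": [0, 1, 0, 1, 0],
    "var_1":   [0.2, 0.5, np.nan, 0.9, 0.1],
})

# Default rate if deficient records are implicitly dropped (complete cases only).
dr_complete = df.dropna(subset=["var_1"])["default"].mean()

# Methodology of correction: replace the missing value with the observed median.
df_corrected = df.assign(var_1=df["var_1"].fillna(df["var_1"].median()))

# After correction all records are usable; the default rate uses the full sample.
dr_corrected = df_corrected["default"].mean()

# The 'appropriate adjustment' is the impact of the correction on the risk
# parameter, not the imputation rule itself.
appropriate_adjustment = dr_corrected - dr_complete
print(dr_complete, dr_corrected, appropriate_adjustment)
```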
We see that an appropriate adjustment regarding the default rate is expected in this case, to reflect the bias, with a MoC then considered accordingly. It is obvious that the methodology of correction is unlikely to be able to reflect its own post-implementation impact on the default rate.
The presented quantification process focuses on the risk parameter, and on the appropriate adjustment as well as the best estimate, before arriving at the MoC. We do not discuss which deficiency is best corrected by which methodology, nor measure the effectiveness of these methodologies, in this paper.
By applying the methodology of correction, the institution corrects the identified deficiencies
to the extent possible. The impact on the risk parameter is then calculated as discussed in this
paper, and the presented analysis and quantification framework is adopted accordingly.
The observed risk parameters are calculated directly from the available dataset. For example, the default rate DR is the number of defaults $D$ divided by the total number of observations $N$. This is defined as the unconditional observed default rate.
The conditional observed default rate is the same fraction calculated under a pre-defined condition. Imagine a dataset of global customers, with default and non-default records, that has a 'region' label flagging EU and non-EU. The unconditional observed default rate is calculated as $D/N$, using all records in the dataset. To isolate the observed impact of EU data from non-EU data and report the default rate for regions outside the EU, one needs to first exclude all EU records from the dataset, then calculate $D_{non\text{-}EU}/N_{non\text{-}EU}$. Defined on dataset $\mathcal{D}$ as $DR(\mathcal{D}|non\text{-}EU)$, this is the conditional observed default rate where the condition is: non-EU.
We are specifically interested in the difference between $DR(\mathcal{D})$ and $DR(\mathcal{D}|non\text{-}EU)$; this difference is caused by, and only by, the condition applied. Note that this is NOT to say that the numerical difference is the impact of the condition; rather, one may wish to consider that the impact of the condition, from a default rate perspective, is completely embedded in the comparison of the two ratios. Hence, loosely speaking, one may conclude that the impact of the 'EU' region on the default rate, subject to dataset $\mathcal{D}$, is captured between $DR(\mathcal{D})$ and $DR(\mathcal{D}|non\text{-}EU)$.
Similarly, with a 'deficiency trigger' label instead of a 'region' label in the simple example above, one is able to calculate the default rates conditional on each and all of the deficiency triggers, and thus capture the impact of deficiencies using $DR(\mathcal{D})$ and $DR(\mathcal{D}|non\text{-}deficient)$, conditioning on each of the identified deficiencies.
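A brief sketch of this conditioning logic (hypothetical records and flag names) follows:

```python
import pandas as pd

# Hypothetical obligor-level records with a deficiency-trigger flag.
data = pd.DataFrame({
    "default":   [1, 0, 0, 1, 0, 0, 1, 0],
    "deficient": [0, 0, 1, 0, 0, 1, 1, 0],  # 1 = record affected by the trigger
})

# Unconditional observed default rate DR(D): all records.
dr_all = data["default"].mean()

# Conditional observed default rate DR(D | non-deficient): deficient records excluded.
dr_clean = data.loc[data["deficient"] == 0, "default"].mean()

# The impact of the deficiency on the default rate is embedded in the comparison.
print(f"DR(D) = {dr_all:.3f}, DR(D|non-deficient) = {dr_clean:.3f}")
```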
(Note: any mathematical notation used in this chapter is only for illustration of the rationale behind the proposed framework; detailed definitions of the notation employed start from the next chapter onwards.)
Additional details to elaborate the focus on the conditional default rate follow: take a simple 5-record dataset for example, where the records are numbered only for discussion purposes. Let us attempt a logistic regression model with the dataset. The dependent variable is 'Default', indicated by 0 and 1. The independent variables are labeled 'Var 1', 'Var 2' and 'Var 3', and we see that 'Var 1' has a missing value for record no. 3 and 'Var 2' has a missing value for record no. 4, as shown in Table 1.
Having no methodology of correction for the identified missing-value deficiency and passing the dataset directly to any statistical software, one either gets an error for including the missing values, or gets a model in which the software by default implicitly removes the records affected by the missing-value deficiency. Thus the effective records are only records 1, 2 and 5, rather than the whole set of 5 records. As a result, one could conclude that while the observed default rate with the deficiency in place is 2 out of 5, the implicit default rate modelled in the logistic model is only 1 out of 3.
We consider this the raw impact of the missing-data deficiency because, once a methodology of correction is introduced, the uncertainty of the missing values is shadowed by the uncertainty of the methodology of correction. As seen in the example above, the difference between 2/5 and 1/3 will only change with the observed missing values. However, the impact assessment after introducing a methodology of correction is likely to change alongside the choice of methodology.
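The raw-impact calculation can be sketched as follows, with a hypothetical allocation of the two defaults that is consistent with the example (records 3 and 4 carry the missing values):

```python
import numpy as np
import pandas as pd

# The 5-record example: 2 defaults out of 5; 'Var 1' missing for record 3,
# 'Var 2' missing for record 4. Which records default is illustrative only.
records = pd.DataFrame({
    "default": [0, 1, 0, 1, 0],            # hypothetical allocation of 2 defaults
    "var_1":   [0.3, 0.7, np.nan, 0.6, 0.2],
    "var_2":   [1.1, 0.4, 0.8, np.nan, 0.9],
})

# Observed default rate with the deficiency in place: 2 out of 5.
dr_observed = records["default"].mean()

# Listwise deletion, as statistical software applies implicitly:
# records 3 and 4 dropped, leaving records 1, 2 and 5.
dr_implicit = records.dropna()["default"].mean()   # 1 out of 3

# The raw impact of the missing-data deficiency, before any correction.
print(f"observed {dr_observed:.2f} vs implicit {dr_implicit:.2f}")
```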
The presented framework assumes that the institution is able to clearly identify deficiencies in its own data and business environment. Methodologies for further aggregation and calculation of the risk parameters are subject to the relevant sections of the guideline.
As discussed above, it is counter-intuitive to assume that Category A and B deficiencies are the results of a statistical model, e.g. to assume that a missing default date or a market condition change is caused by the use of a particular statistical model. Meantime, it is obvious that the general estimation error is model dependent, as one needs a model to get an estimate, but Category A and B deficiencies are not.
The figure below illustrates how the presented MoC quantification process integrates with the model development process.
The modular design of the quantification process echoes the independence of trigger assessment and helps to avoid double counting. An independently defined MoC process flow is efficient in terms of framework implementation; it also allows straightforward integration into the institution's existing monitoring and review processes.
The proposed framework works well with, but does not rely on, specific model or distributional assumptions, nor on sampling and re-sampling methods. As the institution acts according to the remediation plan to fix the identified deficiencies, the impact is directly reflected in the updated MoC results, and thus the direction of change in the MoC is in line with the mitigation progress.
The presented framework performs explicit MoC quantification on top of the risk parameter estimation. The calculation is easily reversible: from the MoC and the pure statistical model estimate all the way back to individual deficiencies and each correction and adjustment involved per deficiency.
Paragraph 43 of the Guideline states that the MoC quantification of the General Estimation Error is expected at the level of every calibration segment. Also, this MoC is expected to reflect the dispersion of the distribution of the statistical estimator.
Further, Paragraphs 92 and 161, on the calibration of PD and LGD respectively, state that the calibration is expected to meet the long-run average of the risk parameter at the level of grade or pool, or at the level of the calibration segment.
In practice, it is possible that the segments used to calculate the observed risk parameter do not match the calibration segments, due to strategic or criteria changes in the data history. The proposed counting and aggregation assessment supports quantification according to segment changes.
While the presented framework is capable of working with simulation- or distribution-based approaches, it does not depend on specific assumptions and hence avoids introducing additional bias and uncertainty into the calibration segments. The feasibility of employing such approaches should be tested separately on a per-portfolio basis.
The presented framework supports appropriate adjustment and MoC quantification at both overall and segment level.
At this point, the differences between the 'methodology of correction', the 'appropriate adjustment' and the MoC should be clear.
——————————————— quotation(start)
50. Institutions should regularly monitor the levels of the MoC. The adoption of a MoC
by institutions should not replace the need to address the causes of errors or
uncertainties, or to correct the models to ensure their full compliance with the
requirements of Regulation (EU) No 575/2013. Following an assessment of the
deficiencies or the sources of uncertainty, institutions should develop a plan to rectify the
data and methodological deficiencies as well as any other potential source of additional
uncertainty and reduce the estimation errors within a reasonable timeframe, taking into
consideration the materiality of the estimation error and the materiality of the rating
system.
———————————————– quotation(end)
Our understanding of Paragraph 50 is that the plan to rectify the data and methodological deficiencies refers to a plan aiming to address the cause of the deficiency, rather than simply adding a margin.
It is clear that the fixing of individual deficiencies should be done by remediating the cause of the deficiency. The same applies to the statistical models; in the meantime, we acknowledge that models will eventually fail at some stage and a rebuild is inevitable. It is clear that modelling issues can only be addressed by a modelling approach, rather than by using the MoC as a patch.
The presented framework does not serve as an approach to address the cause of deficiencies and uncertainties, nor does it serve to correct model failures; a remediation plan as described in the regulatory text should be sought to address the deficiency.
We assume that the statistical model development, together with the model calibration, is compliant and meets the technical requirements before we consider the Category C uncertainty. In particular, we highlight here that the estimation of the risk parameter is expected to be compliant with the regulatory text as mentioned in Chapters 5 and 6 of the EBA Guidelines [1].
Aside from self-identified deficiencies in this category, the proposed forms of general estimation error start with the following:
Denote sample 1 by $s_1$ and present a ranking pair in parentheses, like $(s_1, s_2)$. Under the Target Ranking order, the list of unique ranking pairs considered is: $(s_1, s_2)$, $(s_1, s_3)$, $(s_1, s_4)$, $(s_2, s_3)$, $(s_2, s_4)$, $(s_3, s_4)$.
The Calibrated Ranking is concordant for pairs $(s_1, s_2)$, $(s_1, s_3)$, $(s_1, s_4)$, while discordant for pair $(s_2, s_3)$. Pair $(s_2, s_4)$ is a tie in the Calibrated Ranking and pair $(s_3, s_4)$ is a tie in the Target Ranking.
Note that the term Calibrated Ranking is only used in the example to indicate that the ranking is post-calibration; the rank ordering error could happen in both the model estimation and the calibration stages. Here we only focus on the rank of the final model output to measure the uncertainty in rank ordering.
The error is quantified using the concordant/discordant pairs and the difference in the post-calibration risk parameter of the discordant pairs. Detailed definitions can be found in Section 5.2, Section 7 and Section 9.
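A small sketch enumerating the pairs of the example above (the rank values are hypothetical but consistent with the concordant, discordant and tied pairs listed in the text):

```python
from itertools import combinations

# Target and calibrated ranks per sample: s3/s4 tied in target,
# s2/s4 tied in calibrated, (s2, s3) discordant, the rest concordant.
target     = {"s1": 1, "s2": 2, "s3": 3, "s4": 3}
calibrated = {"s1": 1, "s2": 3, "s3": 2, "s4": 3}

for a, b in combinations(["s1", "s2", "s3", "s4"], 2):
    dt = target[b] - target[a]          # rank difference under the target order
    dc = calibrated[b] - calibrated[a]  # rank difference post-calibration
    if dt == 0 or dc == 0:
        kind = "tie"
    elif dt * dc > 0:
        kind = "concordant"
    else:
        kind = "discordant"
    print((a, b), kind)
```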
However, in Scenario 1, the calibration simply assigns all calibration samples the target value. In Scenario 2, a specific symmetrical distribution is chosen, and in Scenario 3, a general form of linear transform function is applied to each sample in the calibration dataset.
Focusing on Scenario 3 in the figure above, we see that the distribution is skewed with a long right tail, which leads to the next question.
2) Is the arithmetic mean a good statistical estimator of central tendency for the calibration sample?
The answer is negative for a significantly skewed distribution. Often, the median is preferred in this case. However, in this example, the calibration has already been performed using the mean as the estimator of central tendency. One potential approach to address this uncertainty is derived using the known property that, for a distribution with finite variance, the absolute difference between the mean and the median has an upper limit equal to the standard deviation of the distribution. Denoting the mean by $\mu$ and the median by $\mathrm{Med}$, Mallows [6] has shown that:

$$|\mu - \mathrm{Med}| = |E(X - \mathrm{Med})| \le E(|X - \mathrm{Med}|) \le E(|X - \mu|) \le \sqrt{E\left[(X - \mu)^2\right]} = \sigma$$
More details on the proposed assessment of the calibration error can be found in
Section 5.3, Section 7 and Section 9.
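The bound can be checked numerically; the sketch below uses a lognormal sample as a stand-in for a right-skewed calibration distribution (the choice of distribution is an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(42)

# A significantly right-skewed sample, standing in for the post-calibration
# distribution in Scenario 3.
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

mu, med, sigma = x.mean(), np.median(x), x.std()

# Mallows' bound: |mean - median| <= standard deviation.
assert abs(mu - med) <= sigma
print(f"|mu - med| = {abs(mu - med):.4f} <= sigma = {sigma:.4f}")
```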
In summary of the above-mentioned basic concepts and building blocks, one is able to achieve the following with the presented framework:
3) Non-modelling updates of the MoC do not require re-estimating the model.
4) Flexibility in incorporating expert opinion, without depending on assumptions such as a statistical distribution or a choice of confidence interval.
5) Straightforward removal of the MoC from the final risk parameter when the pure model estimate is required for purposes such as model review or stress testing.
Part I. PD Estimation
——————————————— quotation(start)
73. For the purpose of calculating the one-year default rate referred to in point (78) of
Article 4(1) of Regulation (EU) No 575/2013, institutions should ensure both of the
following:
1) that the denominator consists of the number of non-defaulted obligors with any
credit obligation observed at the beginning of the one-year observation period; in this
context a credit obligation refers to both of the following:
a) any on balance sheet item, including any amount of principal, interest and fees;
2) that the numerator includes all those obligors considered in the denominator that
had at least one default event during the one-year observation period.
———————————————– quotation(end)
The one-year default rate for year $y$ is then calculated as:

$$DR^{1\text{-}year}_y = \frac{D_y}{N_y} \tag{1}$$

where:
$DR^{1\text{-}year}_y$: the one-year DR for year $y$
$D_y$: the number of obligors in the denominator that had at least one default event during the one-year observation period of year $y$
$N_y$: the number of non-defaulted obligors with any credit obligation observed at the beginning of the one-year observation period of year $y$

EBA Guidelines [1] Sections 5.3.3 and 5.3.4 cover the calculation of observed average default rates and the long-run average default rate, where the long-run average default rate is the observed average default rate spanning the historical observation period. Hence, the calculation of the long-run average DR over $Y$ years is expressed as:

$$DR^{long\text{-}run} = \frac{DR^{1\text{-}year}_1 + DR^{1\text{-}year}_2 + \dots + DR^{1\text{-}year}_y + \dots}{Y} = \frac{\frac{D_1}{N_1} + \frac{D_2}{N_2} + \dots + \frac{D_y}{N_y} + \dots}{Y} = \frac{1}{Y}\sum_{y=1}^{Y}\frac{D_y}{N_y}, \quad \text{for } y \in [1, Y] \tag{2}$$

where:
$y$, $DR^{1\text{-}year}_y$, $D_y$ and $N_y$: as defined above
Let $t_A \in (1, \dots, k)$ be the triggers in Category A as defined in Paragraph 42, covering all but not limited to the Category A triggers listed in Paragraph 37, where $k$ is the total number of triggers. Denote the number of defaulted records affected by each of the triggers by $d^{t_A}$, and the number of non-defaulted records affected by each trigger by $m^{t_A}$.
Omitting the year indicator $y$ in this general form, we adjust Equation (1) to obtain the best estimate of the one-year default rate as follows:

$$\widetilde{DR}^{1\text{-}year} = \frac{D + \sum_k d^{t_A}}{N + \sum_k m^{t_A}}, \quad \text{for } d^{t_A} \ge 0 \text{ and } m^{t_A} \ge 0 \tag{3}$$
Updating Equation (2), we obtain the best estimate of the long-run default rate as:

$$\widetilde{DR}^{long\text{-}run} = \frac{1}{Y}\sum_{y=1}^{Y}\widetilde{DR}^{1\text{-}year}_y, \quad \text{for } y \in [1, Y] \tag{4}$$
Proof. Denote $\sum_k d^{t_A} = d$ and $\sum_k m^{t_A} = m$ in this proof for simplicity; we calculate the difference between the raw estimate and the best estimate of the one-year default rate $DR^{1\text{-}year}$ as follows:

$$\widetilde{DR}^{1\text{-}year}_{t_A} - DR^{1\text{-}year} = \frac{D + \sum_k d^{t_A}}{N + \sum_k m^{t_A}} - \frac{D}{N} = \frac{D + d}{N + m} - \frac{D}{N} = \frac{(D + d)\cdot N - D\cdot(N + m)}{(N + m)\cdot N} = \frac{d\cdot N - D\cdot m}{N\cdot(N + m)}$$
$$= \frac{d\cdot N - D\cdot m}{d\cdot N}\cdot\frac{d\cdot N}{N\cdot(N + m)} = \left(1 - \frac{D\cdot m}{d\cdot N}\right)\cdot\frac{d}{N + m} = \left(1 - \frac{m}{N}\Big/\frac{d}{D}\right)\cdot\frac{d}{N + m} \tag{8}$$

Substituting the $\beta^{t_A}$ and $\alpha^{t_A}$ quantities, Equation (8) can be further extended as:

$$\widetilde{DR}^{1\text{-}year}_{t_A} - DR^{1\text{-}year} = \left(1 - \frac{\alpha^{t_A}}{\beta^{t_A}}\right)\cdot\frac{\beta^{t_A}\cdot D}{N + \alpha^{t_A}\cdot N} = \frac{D}{N}\cdot\frac{\beta^{t_A} - \alpha^{t_A}}{1 + \alpha^{t_A}} = DR^{1\text{-}year}\cdot\frac{\beta^{t_A} - \alpha^{t_A}}{1 + \alpha^{t_A}} \tag{9}$$
With interchangeable notations, it is obvious that Proposition (1) holds for Category B with triggers denoted $t_B$, where $t_B \in (k+1, \dots, s)$ are the triggers in Category B as defined in Paragraph 42, covering all but not limited to the Category B triggers listed in Paragraph 37, where $k$ is the total number of triggers in Category A and $s - k$ is the total number of triggers in Category B.
The methodology for the appropriate adjustment and the best estimate for any additional categories must take into account the impact of the deficiencies already identified in the estimation of the risk parameter: in this case, the best estimate of the one-year default rate with a single-category adjustment.
With updated notation, the Relative Uncertainty in Defaults over all triggers is:

$$\sum_{t=1}^{s}\beta^t = \frac{\sum_{t=1}^{s} d^t}{D}, \quad \text{for } t \in [1, s] \text{ and } d^t \ge 0$$

and the Relative Uncertainty in Non-default Observations is:

$$\sum_{t=1}^{s}\alpha^t = \frac{\sum_{t=1}^{s} m^t}{N}, \quad \text{for } t \in [1, s] \text{ and } m^t \ge 0$$

The appropriate adjustment for the one-year average default rate, considering the uncertainty in all identified deficiency triggers, can be expressed as:

$$AA = \frac{1 + \sum_{t=1}^{s}\beta^t}{1 + \sum_{t=1}^{s}\alpha^t}, \quad \text{for } t \in [1, s], \quad \beta^t, \alpha^t, \sum\beta^t, \sum\alpha^t > -1 \tag{15}$$

Hence the best estimate one-year default rate, considering the uncertainty in all identified deficiency triggers, is:

$$\widetilde{DR}^{1\text{-}year} = DR^{1\text{-}year}\cdot AA = DR^{1\text{-}year}\cdot\frac{1 + \sum_{t=1}^{s}\beta^t}{1 + \sum_{t=1}^{s}\alpha^t}, \quad \text{for } t \in [1, s], \quad \beta^t, \alpha^t, \sum\beta^t, \sum\alpha^t > -1 \tag{16}$$

The uncertainty in each identified deficiency is defined as $\beta^t$ and $\alpha^t$, relative to $D$ and $N$ respectively. Note that the condition $\beta^t, \alpha^t > -1$ is a logical condition rather than a mathematical one, as $\beta^t = -1$ or $\alpha^t = -1$ would mean one does not trust the entirety of the default or non-default observations.
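A minimal sketch of Equations (15) and (16) with hypothetical trigger counts:

```python
def appropriate_adjustment(d_t, m_t, D, N):
    """Equation (15): AA = (1 + sum(beta_t)) / (1 + sum(alpha_t)),
    with beta_t = d_t / D and alpha_t = m_t / N per deficiency trigger."""
    beta_sum = sum(d / D for d in d_t)
    alpha_sum = sum(m / N for m in m_t)
    # Logical condition: no aggregate may wipe out the entire default /
    # non-default population (sums must stay above -1).
    assert beta_sum > -1 and alpha_sum > -1
    return (1 + beta_sum) / (1 + alpha_sum)

# Hypothetical counts: 200 defaults, 10,000 observations, two triggers.
D, N = 200, 10_000
d_t = [4, 2]      # defaulted records affected per trigger
m_t = [150, 50]   # non-defaulted records affected per trigger

dr = D / N
aa = appropriate_adjustment(d_t, m_t, D, N)
dr_best = dr * aa  # Equation (16): best estimate one-year default rate
print(f"DR = {dr:.4%}, AA = {aa:.4f}, best estimate = {dr_best:.4%}")
```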
Proof. Equation (16) is obtained by re-writing Equation (13) with trigger-level notations.
Section 5.3.5 of the EBA Guidelines [1] details the requirements of the calibration and sets the calibration target to the long-run average default rate. Paragraphs 89 and 90 clarify that the calibration is to be conducted before the application of MoC or floors, and that master scale mapping may be performed during the calibration. In Paragraph 91, Article 169(3) of Regulation (EU) No 575/2013 is referenced, assuming a continuous rating scale in the case of direct use of the estimated risk parameter.
In this paper, we explore the common approach of calibrating to a pre-defined master scale. The analysis presented here can be altered to cover the other types of calibration mentioned in Paragraph 91.
Targeting the long-run average default rate, before application of MoC, we can express the target function mathematically as follows:

$$\frac{\hat{n}_1\cdot p_1 + \dots + \hat{n}_b\cdot p_b + \dots + \hat{n}_B\cdot p_B}{n} = DR^{long\text{-}run} \tag{17}$$

where:
$n$: size of the calibration sample
$b, B$: $B$ is the total number of master scale grades/ranks; $b \in [1, B]$ indicates the $b$th grade/rank
$p_b$: the probability of default assigned to the $b$th master scale grade/rank
$\hat{n}_b$: calibrated population of the $b$th master scale grade/rank, with $\sum_B \hat{n}_b = n$
To avoid influencing the rank ordering of obligors in the calibration sample, we consider only the estimated and calibrated results of the calibration exercise. Table 3 conceptually illustrates the calibrated results compared to the calibration target and highlights the types of uncertainty on which we focus for the rest of this section.
For the purpose of the MoC framework, we propose to classify the general estimation error into two types:
$c_1$: error in rank order estimation; $c_1$ denotes the quantified measure of this type of error;
$c_2$: error in the calibrated estimate against the sample target; $c_2$ denotes the quantified measure of this type of error.

Table 3: Illustration of calibrated results vs. the sample target

            Target PD     Calibrated PD
sample 1    $p_{s_1}$     $\hat{p}_{s_1}$
sample 2    $p_{s_2}$     $\hat{p}_{s_2}$
...

($c_2$ relates the calibrated PD to the target PD within each sample pair; $c_1$ relates the ordering across sample pairs.)

The rank ordering error is quantified using the concordant/discordant pairs, in a form analogous to Kendall's $\tau$:

$$c_1 = \frac{2\cdot\sum^{n_{dis}}|\delta\hat{p}|}{\sqrt{\left[n(n-1) - \sum_i t_i(t_i-1)\right]\left[n(n-1) - \sum_j u_j(u_j-1)\right]}} \tag{18}$$

where:
$\delta\hat{p}$: the difference of calibrated PD between the discordant pair, e.g. $\hat{p}_{s_n} - \hat{p}_{s_1}$ in Table 3 if the rank ordering is discordant for the pair;
$n$: size of portfolio;
$t_i$: number of tied values in the $i$th group of ties for the target rank;
$u_j$: number of tied values in the $j$th group of ties for the calibrated rank.

Detailed definitions of discordant pairs and the counting of ties can be found in the literature introducing Kendall's $\tau$, hence they are not discussed here. Note that the constant 2 in the numerator is sometimes included as part of the calculation in the denominator, depending on the choice of implementation.
Proposition 5. The estimation error considered here is the dispersion of the calibrated mean PD:

$$c_2 = \sqrt{\frac{1}{n}\cdot\left[\sum_{1}^{B}\omega_b\cdot p_b^2 - \left(DR^{long\text{-}run}\right)^2\right]} \tag{19}$$

Proof. The quantified dispersion, $c_2$, is the standard deviation of the calibrated mean PD. For a successful calibration, supported by the quantitative and qualitative tests mentioned in Paragraph 87, the calibrated mean PD is expected to be equal to the calibration target, $DR^{long\text{-}run}$.
Here we briefly explain the rationale of this proposal. Recall the target function of the calibration exercise as shown in Equation (17):

$$\frac{\hat{n}_1\cdot p_1 + \dots + \hat{n}_b\cdot p_b + \dots + \hat{n}_B\cdot p_B}{n} = DR^{long\text{-}run}$$

Simplifying the notation of $DR^{long\text{-}run}$ to $T$ only for this proof, we can re-write the above equation as:

$$\frac{\hat{n}_1\cdot(p_1 - T) + \dots + \hat{n}_b\cdot(p_b - T) + \dots + \hat{n}_B\cdot(p_B - T)}{n} = 0 \tag{20}$$

Denote the variables $x_b = p_b - T$; given that $\omega_b = \hat{n}_b/n$ for $b \in [1, B]$, the left-hand side of Equation (20) is simply the weighted average of the variable $x_b$: $\sum_B \omega_b x_b$. We can calculate the standard deviation of this weighted average as:

$$\sigma = \sqrt{\sum_{B}\omega_b^2\,\sigma_b^2} = \sqrt{\sum_{B}\omega_b^2\cdot\frac{(x_b - \bar{x})^2}{\hat{n}_b}} = \sqrt{\frac{1}{n}\cdot\left[\sum_{1}^{B}\omega_b\cdot p_b^2 - T^2\right]} \tag{21}$$

Equation (19), q.e.d.

Therefore, the mean of the calibrated PD lies in the range $DR^{long\text{-}run} \pm \sigma$.
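A short numerical sketch of Equation (19), with a hypothetical master scale and calibrated weights:

```python
import numpy as np

# Hypothetical master scale: grade PDs p_b and calibrated weights w_b = n_hat_b / n.
p_b = np.array([0.001, 0.005, 0.02, 0.08, 0.25])
w_b = np.array([0.30, 0.30, 0.20, 0.15, 0.05])   # sums to 1
n = 5_000                                         # size of the calibration sample

# Calibration target hit by construction: the weighted mean equals DR_long_run.
dr_long_run = float(w_b @ p_b)

# Equation (19): dispersion (standard deviation) of the calibrated mean PD.
c2 = np.sqrt((w_b @ p_b**2 - dr_long_run**2) / n)

print(f"target = {dr_long_run:.4%}, c2 = {c2:.6f}")
print(f"calibrated mean PD within [{dr_long_run - c2:.4%}, {dr_long_run + c2:.4%}]")
```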
The calculation of economic loss and realised LGD is defined in Section 6.3.1 of the EBA Guidelines [1].
Section 6.3.2.2, and in particular Paragraph 150, defines the calculation of the long-run average LGD. Following the requirement that the long-run average LGD is calculated over all defaults in the historical observation period, it can be expressed as:

$$LGD^{long\text{-}run} = \frac{1}{M}\sum_{i=1}^{M} rLGD_i$$

where:
$LGD^{long\text{-}run}$: the long-run average LGD for the historical observation period
$rLGD_i$: the realised LGD of the $i$th default
$M, i$: the total number of defaults in the scope of LGD estimation during the historical observation period, $i \in [1, M]$
Recalling Equations (15) and (16) with updated definitions of notation for LGD estimation (subscripts highlight the application to LGD estimation only in this instance):

$$AA_{rLGD} = \frac{1 + \sum_{t=1}^{s}\beta^t}{1 + \sum_{t=1}^{s}\alpha^t}, \quad \text{for } t \in [1, s], \quad \beta^t, \alpha^t, \sum\beta^t, \sum\alpha^t > -1$$

$$\widetilde{rLGD} = rLGD\cdot AA_{rLGD} = rLGD\cdot\frac{1 + \sum_{t=1}^{s}\beta^t}{1 + \sum_{t=1}^{s}\alpha^t}, \quad \text{for } t \in [1, s], \quad \beta^t, \alpha^t, \sum\beta^t, \sum\alpha^t > -1$$

Hence, we find that the general form of the appropriate adjustment and the best estimate for identified deficiencies in Categories A and B is applicable to LGD estimation.
Category C Deficiencies
With minimal updates to Table 3, we illustrate the Category C uncertainty of LGD estimation in Table 4.
Table 4: Illustration of calibrated results vs. the sample target for LGD estimation
We find that Equation (18) applies for LGD estimation, using the difference of LGD in the discordant ranks instead of PD:

$$c_1^{LGD} = \frac{2\cdot\sum^{n_{dis}}|\delta\widehat{lgd}|}{\sqrt{\left[n(n-1) - \sum_i t_i(t_i-1)\right]\left[n(n-1) - \sum_j u_j(u_j-1)\right]}} \tag{25}$$

Here $\delta\widehat{lgd}$ is the difference of calibrated LGD between the discordant pair, e.g. $\widehat{LGD}_{s_n} - \widehat{LGD}_{s_1}$ in Table 4 if the rank ordering is discordant for the pair.
With consideration of the requirements stated in Section 6.3.2.2, the target function for the calibration exercise for LGD is expressed as follows:

$$\frac{\widehat{LGD}_{s_1} + \dots + \widehat{LGD}_{s_j} + \dots + \widehat{LGD}_{s_n}}{n} = LGD^{long\text{-}run} \tag{26}$$
The target function of the calibration, Equation (26), is the size-$n$ equally weighted average of the calibrated LGD. Hence the standard deviation of the sample mean $\hat{\mu}$ after calibration can be obtained by:

$$c_2^{LGD} = \sigma_{\hat{\mu}} = \sqrt{\sum_{j=1}^{n}\omega_j^2\,\sigma^2_{\widehat{LGD}}} = \sigma_{\widehat{LGD}}\cdot\sqrt{\sum_{j=1}^{n}\frac{1}{n^2}} = \frac{\sigma_{\widehat{LGD}}}{\sqrt{n}} \tag{27}$$
The requirement of MoC for the Credit Conversion Factor (CCF) is specified in the ECB TRIM Guide [2], Paragraph 100:
——————————————— quotation(start)
100. Institutions are expected to have in place a MoC framework in line with the EBA
CP on GLs 23 to 35. This principle is also applicable to the estimation of CCFs.
———————————————– quotation(end)
To clarify, all of the "Paragraphs" referenced in this section refer to the regulatory text in the ECB TRIM Guide [2] document by default, unless specified otherwise.
Realised CCF
——————————————— quotation(start)
94. The EAD for undrawn commitments is calculated as the committed but undrawn
amount multiplied by a CCF. CCFs can also be derived from direct estimates of total
facility EAD.
———————————————– quotation(end)
$C^{undrawn}_r$: the committed but undrawn amount on reference date $r$; here this is equal to the total committed limit $Limit_r$ less the utilised drawn amount, i.e. $C^{undrawn}_r = Limit_r - Drawn_r$

This expression is in agreement with the widely known Loan Equivalent parameter (LEQ), in which the total EAD is defined as the drawn amount plus the undrawn amount multiplied by an LEQ factor; Qi [5] defines this approach as:

$$EAD = Drawn_r + LEQ\cdot(Limit_r - Drawn_r)$$
Meantime, the CCF as defined in alternative estimation approaches may come in forms other than Equations (28) and (29) above. For example, Moral [3] suggested the CCF as the ratio of EAD to the total committed amount $Limit_r$:

$$CCF_r = \frac{EAD}{Limit_r}$$

while Jacobs [4] defined the CCF as the EAD weighted by the drawn amount $Drawn_r$ at reference time $r$:

$$CCF_r = \frac{EAD}{Drawn_r}$$
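The ratio forms can be compared side by side; in the sketch below the first variant, attributed to Paragraph 94, is our inference from the LEQ discussion (undrawn EAD over the committed but undrawn amount), and the facility figures are hypothetical:

```python
def ccf_variants(ead, limit, drawn):
    """Three ratio forms of the realised CCF discussed in the text, per facility
    at reference date r. All input figures are hypothetical."""
    undrawn = limit - drawn
    return {
        # Inferred from Paragraph 94 / the LEQ form: undrawn EAD over the
        # committed but undrawn amount (an assumption for illustration).
        "undrawn": (ead - drawn) / undrawn,
        # Moral: EAD over the total committed amount.
        "moral": ead / limit,
        # Jacobs: EAD weighted by the drawn amount.
        "jacobs": ead / drawn,
    }

print(ccf_variants(ead=90.0, limit=100.0, drawn=60.0))
```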
The reference date and estimation approach of the long-run average could differ according to
the choice of reference date of risk drivers. As mentioned in Paragraph 97:
——————————————— quotation(start)
97. Institutions should analyse the risk drivers not only at twelve months prior to
default (the fixed horizon approach) but also within the year before default (the cohort
approach). When choosing the appropriate reference date for a risk driver, institutions
should take into account its volatility over time.
———————————————– quotation(end)
Under the fixed time horizon approach, the reference date is set at twelve months prior to default, and the long-run average CCF is calculated over all defaulted facilities in the historical observation period:

$$CCF^{long\text{-}run}_{fth} = \frac{1}{M}\sum_{i=1}^{M} rCCF_i$$

where:
$CCF^{long\text{-}run}_{fth}$: the fixed time horizon approach long-run average CCF for the historical observation period
$rCCF_i$: the realised CCF of the $i$th defaulted facility
$M, i$: the total number of defaulted facilities in the scope of CCF estimation during the historical observation period, $i \in [1, M]$
The reference date for the cohort approach is the first day of the cohort year for all defaults
observed in the following 12 months. In this case the long-run average CCF is the annual
average CCF weighted by the number of years in the historical observation period.
$$CCF^{long\text{-}run}_{cohort} = \frac{1}{Y}\sum_{y=1}^{Y} rCCF_y$$

where:
$CCF^{long\text{-}run}_{cohort}$: the cohort approach long-run average CCF for the historical observation period
$rCCF_y$: the cohort approach annual average CCF for year $y$
$Y, y$: the total number of cohort years $Y$ in the historical observation period, $y \in [1, Y]$
As shown above, we find that the CCF estimation approach defined in Paragraph 94, as well as some of the commonly accepted CCF estimation approaches in the literature, all have the CCF factor defined in the form of a ratio.
Hence, define $\beta^t$ as the relative uncertainty in the numerator of the CCF calculation, i.e. $EAD$ or $EAD^{undrawn}$, and define $\alpha^t$ as the relative uncertainty in the denominator, i.e. $C^{undrawn}_r$, $Limit_r$ or $Drawn_r$.
Recalling Equations (15) and (16) with updated definitions of notation for CCF estimation (subscripts highlight the application to CCF estimation only in this instance):

$$AA_{rCCF} = \frac{1 + \sum_{t=1}^{s}\beta^t}{1 + \sum_{t=1}^{s}\alpha^t}, \quad \text{for } t \in [1, s], \quad \beta^t, \alpha^t, \sum\beta^t, \sum\alpha^t > -1$$

$$\widetilde{rCCF} = rCCF\cdot AA_{rCCF} = rCCF\cdot\frac{1 + \sum_{t=1}^{s}\beta^t}{1 + \sum_{t=1}^{s}\alpha^t}, \quad \text{for } t \in [1, s], \quad \beta^t, \alpha^t, \sum\beta^t, \sum\alpha^t > -1$$
Category C Deficiencies
Similar to the previous illustrations shown in Tables 3 and 4, with updated notations Equation (18) applies for CCF estimation:

$$c_1^{CCF} = \frac{2\cdot\sum^{n_{dis}}|\delta\widehat{ccf}|}{\sqrt{\left[n(n-1) - \sum_i t_i(t_i-1)\right]\left[n(n-1) - \sum_j u_j(u_j-1)\right]}} \tag{32}$$

Here $\delta\widehat{ccf}$ is the difference of calibrated CCF parameter between the discordant pair.
The target function for the calibration exercise for CCF is comparable with that of the LGD estimation:

$$\frac{\widehat{CCF}_{s_1} + \dots + \widehat{CCF}_{s_j} + \dots + \widehat{CCF}_{s_n}}{n} = CCF^{long\text{-}run} \tag{33}$$

where $\widehat{CCF}_{s_j}$ is the CCF estimated and calibrated for the $j$th calibration sample.
The standard deviation of the sample mean $\hat{\mu}$ after calibration can be obtained by:

$$c_2^{CCF} = \sigma_{\hat{\mu}} = \sqrt{\sum_{j=1}^{n}\omega_j^2\,\sigma^2_{\widehat{CCF}}} = \sigma_{\widehat{CCF}}\cdot\sqrt{\sum_{j=1}^{n}\frac{1}{n^2}} = \frac{\sigma_{\widehat{CCF}}}{\sqrt{n}} \tag{34}$$
For the sake of generalisation, we use $RP$ to denote the Risk Parameter in either PD, LGD or CCF estimation, where $\widetilde{RP}$ is its best estimate and $\widehat{RP}$ is the calibrated result.
Appropriate Adjustment

$$AA = \frac{1 + \sum_{t=1}^{s}\beta^t}{1 + \sum_{t=1}^{s}\alpha^t}, \quad \text{for } t \in [1, s], \text{ and } \beta^t, \alpha^t, \sum\beta^t, \sum\alpha^t > -1 \tag{35}$$

Best Estimate

$$\widetilde{RP} = RP\cdot AA = RP\cdot\frac{1 + \sum_{t=1}^{s}\beta^t}{1 + \sum_{t=1}^{s}\alpha^t}, \quad \text{for } t \in [1, s], \text{ and } \beta^t, \alpha^t, \sum\beta^t, \sum\alpha^t > -1 \tag{36}$$
For Category C, the general form of the dispersion of the calibrated risk parameter is:

$$c_2 = \sigma_{\hat{\mu}} = \sqrt{\sum_{b=1}^{B}\omega_b^2\,\sigma^2_{\widehat{RP}}}, \quad B \text{ and } b \text{ for both pre-defined and continuous grades} \tag{38}$$
We start by calculating the value of the appropriate adjustment per trigger level, for $\tau \in [1, s]$ and $t \in [1, \tau]$:

$$\delta\widetilde{RP}\big|_{s-1}^{s}: \quad RP\cdot\left(\frac{1 + \sum^{s}\beta^t}{1 + \sum^{s}\alpha^t} - \frac{1 + \sum^{s-1}\beta^t}{1 + \sum^{s-1}\alpha^t}\right) = RP\cdot(AA_s - AA_{s-1}) = RP\cdot\delta AA\big|_{s-1}^{s}$$
$$\cdots$$
$$\delta\widetilde{RP}\big|_{\tau-1}^{\tau}: \quad RP\cdot\left(\frac{1 + \sum^{\tau}\beta^t}{1 + \sum^{\tau}\alpha^t} - \frac{1 + \sum^{\tau-1}\beta^t}{1 + \sum^{\tau-1}\alpha^t}\right) = RP\cdot(AA_\tau - AA_{\tau-1}) = RP\cdot\delta AA\big|_{\tau-1}^{\tau}$$
$$\cdots$$
$$\delta\widetilde{RP}\big|_{0}^{1}: \quad RP\cdot\left(\frac{1 + \beta^t}{1 + \alpha^t} - \frac{1 + 0}{1 + 0}\right) = RP\cdot(AA_1 - AA_0) = RP\cdot\delta AA\big|_{0}^{1}$$

We see that $\delta\widetilde{RP}\big|_{\tau-1}^{\tau} \ge 0$ conditional on $\delta AA\big|_{\tau-1}^{\tau} \ge 0$.
Denote by $\delta AA^+$ the non-negative increments $\max(0, \delta AA)$, and by $\delta AA^-$ the increments with $\delta AA < 0$. Therefore:

$$\begin{cases}\delta\widetilde{RP}^+ & \text{if } \delta AA^+ \\ \delta\widetilde{RP}^- & \text{if } \delta AA^-\end{cases}$$

Hence we propose the value increase because of Category A and B uncertainties in the following two parts:
- the quantity that was not decreased as a conservative choice of the bank: this is equal to all of the decrease due to the appropriate adjustment applied, i.e. $\sum RP\cdot\delta AA^-$.
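A sketch of this sequential, trigger-level decomposition (hypothetical $\beta^t$, $\alpha^t$ values; the retained negative increments accumulate into the Category A and B MoC, in the spirit of Equation (45)):

```python
def trigger_level_moc(RP, betas, alphas):
    """Sequentially apply triggers and decompose the appropriate adjustment into
    per-trigger increments delta_AA; the MoC for Categories A and B is the part
    of the adjustment that was conservatively not decreased: -sum(delta_AA^-)."""
    aa_prev, beta_sum, alpha_sum = 1.0, 0.0, 0.0   # AA_0 = 1
    moc = 0.0
    for beta, alpha in zip(betas, alphas):
        beta_sum += beta
        alpha_sum += alpha
        aa_tau = (1 + beta_sum) / (1 + alpha_sum)  # AA after trigger tau
        delta_aa = aa_tau - aa_prev
        if delta_aa < 0:                           # negative increment: retained as MoC
            moc += -RP * delta_aa
        aa_prev = aa_tau
    best_estimate = RP * aa_prev                   # equals RP * AA_s
    return best_estimate, moc

# Hypothetical trigger-level relative uncertainties (beta_t, alpha_t).
best, moc_ab = trigger_level_moc(RP=0.02, betas=[0.03, -0.01], alphas=[0.02, 0.01])
print(f"best estimate = {best:.4%}, MoC_A,B = {moc_ab:.6f}")
```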
As discussed in the previous section, the Category C uncertainties are measured by $c_1$ and $c_2$. Meantime, as shown in Tables 3 and 4, the calibration/estimation error ($c_2$) lies within the sample-vs-calibrated pair while the rank ordering error ($c_1$) lies across sample pairs; they are therefore independent of each other.
Note that $c_1$ and $c_2$ estimated here are subject to the realised target risk parameter.
Final MoC
——————————————— quotation(start)
44. For the purpose of paragraph 43(a) and for each of the categories A and B,
institutions may group all or selected deficiencies, where justified, for the purpose of
quantifying MoC.
45. Institutions should quantify the final MoC as the sum of:
3) the MoC for the general estimation error (category C) as referred to in paragraph
43(b).
46. Institutions should add the final MoC to the best estimate of the risk parameter.
———————————————– quotation(end)
Note that so far we have the $MoC_{A,B}$ at the individual realised risk parameter level, e.g. the one-year default rate, a single-exposure LGD, or a single-facility CCF. Meantime, the $MoC_C$ is subject to the target long-run average risk parameter.
In order to obtain the final MoC, which can be used as the margin of conservatism for future estimated risk parameters, we first calculate the best estimate long-run risk parameter as follows:

$$\widetilde{RP}^{long\text{-}run} = \frac{\widetilde{RP}_1 + \widetilde{RP}_2 + \dots + \widetilde{RP}_{LRO}}{LRO} = \frac{RP_1\cdot AA_1 + RP_2\cdot AA_2 + \dots + RP_{LRO}\cdot AA_{LRO}}{LRO} \tag{41}$$
Similar to Equation (7), we define the best estimate long-run average risk parameter as the raw long-run average risk parameter multiplied by the long-run appropriate adjustment:

$$\widetilde{RP}^{long\text{-}run} = RP^{long\text{-}run}\cdot AA^{long\text{-}run} \;\;\Rightarrow\;\; AA^{long\text{-}run}_{A,B} = \frac{\widetilde{RP}^{long\text{-}run}}{RP^{long\text{-}run}} \tag{42}$$

Note that the calculation using Equation (42) can be performed recursively through the list of uncertainty triggers to obtain the trigger-level long-run appropriate adjustment for each identified deficiency.
Next, we calculate the best estimate long-run risk parameter with individual MoC:
Note here we arrange Equation (43) as is because it is imporant to highlight here that the
long-run MoC for Categories A and B is NOT to be express as the second term in the last row
of equation.
MoC1A,B + . . . + MoCLRO
A,B
MoClong−run
A,B 6
= (44)
LRO
Recall that we define the margin of conservatism in this framework as the trigger-level negative adjustment which was not decreased. Hence the long-run MoC for Categories A and B, subject to the raw estimate historical risk parameter $RP^{long\text{-}run}$, is as follows:

$$MoC^{long\text{-}run}_{A,B} = -\sum_{t=1}^{s}\left[\delta\widetilde{RP}^{long\text{-}run}\right]^{-}, \quad \text{for triggers } t \in [1, s] \tag{45}$$

$$RMoC^{long\text{-}run}_{A,B} = \frac{MoC^{long\text{-}run}_{A,B}}{RP^{long\text{-}run}} \tag{46}$$
Now we calculate the MoC for Category C, also relative to the raw estimate historical risk parameter $RP^{long\text{-}run}$, as follows:
12 Conclusions
In this paper, we follow the EBA documents regarding the guidelines that apply from 1 January 2021 and propose a framework to quantify, document and monitor the impact of uncertainties relevant to IRB PD, LGD and CCF estimation. Following the categorization of deficiency types, we derive a general-form methodology of appropriate adjustment, best estimate and final MoC that is intuitive, flexible and transparent to the institution.
The framework comes with mathematical properties that are compliant with the guideline requirements, not only in the MoC-related sections of the EBA Guidelines [1] document, but also in the sections related to PD, LGD and CCF estimation and calibration.
When used as a measure of impact to track the effect of remediation actions on the deficiency triggers, the modularized quantification approach allows changes to be tracked per trigger and therefore allows timely monitoring of the uncertainty. Meanwhile, the framework supports processing each deficiency indicator sequentially in implementation, which allows any changes observed in the system to be captured and quantified at the desired frequency for either reporting or modelling purposes.
References
[1] EBA Guidelines. "Guidelines on PD estimation, LGD estimation and the treatment of defaulted exposures". European Banking Authority, 23 April 2018.
[2] ECB TRIM Guide. "Guide for the Targeted Review of Internal Models (TRIM)". European Central Bank, February 2017.
[3] G. Moral. "EAD Estimates for Facilities with Explicit Limits". In B. Engelmann and R. Rauhmeier (eds.), The Basel II Risk Parameters: Estimation, Validation, and Stress Testing. New York: Springer, pages 197-242.
[4] M. Jacobs Jr. "An Empirical Study of Exposure at Default". Office of the Comptroller of the Currency, U.S. Department of the Treasury.
[5] M. Qi. "Exposure at Default of Unsecured Credit Cards". Office of the Comptroller of the Currency, U.S. Department of the Treasury.
[6] C. Mallows. "Another Comment on O'Cinneide". The American Statistician, 45(3): 257.