
Analyzing the Effects of Trade Compression on Risk Propagation in Over-the-Counter Derivative Markets

Stephen Kosmo

April 25, 2018

Abstract
Following the 2008 financial crisis, a new and largely unstudied technique has become a central feature of today's financial markets: portfolio trade compression. Trade compression is a service offered by third-party vendors that lowers a bank's gross notional exposures while keeping its net exposures the same. However, the effects of compression on systemic risk are unknown. To test the effectiveness of trade compression in risk mitigation, we compare the loss after default in markets with a variety of structures.

Contents

1 Background
2 Market Structures
   2.1 Bilateral Market Model
   2.2 Single Central Clearing Party Model
   2.3 Multiple Central Clearing Party Market
3 Trade Compression Overview
4 Modeling Overview
   4.1 Compression Models
   4.2 Risk Propagation Model
5 Results
6 Conclusions
7 References
A Proofs
   A.1 Lemma 1
   A.2 Theorem 2
B Pseudocode for Algorithms
   B.1 Network Simplex
   B.2 Non-conservative Compression (L1 Minimization)
   B.3 Risk Propagation
List of Figures

1 A graphical example of compression (from D'Errico and Roukny)
2 Claimed reductions in counterparty risk exposures after the use of triReduce on uncleared trades [Source: TriOptima (2017)]
3 The weighted adjacency matrix for a given market with 10 counterparties
4 The graph representation of the given adjacency matrix (edge darkness indicates weight)
5 A centrally cleared version of the market in figure 4
6 Bilateral IRS market loss after triggering default, and the distribution of losses
7 Centrally cleared IRS market loss after triggering default, and the distribution of losses
8 Bilateral Forex market loss after triggering default, and the distribution of losses
9 Centrally cleared Forex market loss after triggering default, and the distribution of losses
10 Bilateral Credit market loss after triggering default, and the distribution of losses
11 Centrally cleared Credit market loss after triggering default, and the distribution of losses

List of Tables

1 Value lost calculated from the risk propagation model in each market
2 The difference in average value lost compared to the base (bilateral) market
1 Background
In financial markets, participants can take on additional risk by writing over-the-counter (OTC) derivatives, which come in the form of swaps, forwards, and options. These contracts amplify profit or loss by betting on changes in an underlying asset. Many experts agree that it was the use of OTC derivatives that led to the financial crisis of 2007-2008: large institutions wrote OTC derivative contracts to bet against mortgage defaults. The most notable of these institutions was Lehman Brothers, which leveraged its assets 44:1 trading credit default swaps. However, the risk associated with these contracts was not well understood; these institutions thought default was highly unlikely, and thus regarded the OTC derivative market as virtually risk free. Unfortunately, this was not the case. In the first quarters of 2008, many people began to default on mortgage payments, causing Lehman, and many others, to lose on their positions. Lehman approached other lenders, such as Bank of America and the London-based Barclays, looking for a buyout, but no offer was made. Given its high leverage and the lack of a buyout, Lehman did not have the capital to pay its losses. As a result, Lehman defaulted, creating huge losses for institutions that held contracts with it. AIG was one such institution: the missed payments from Lehman would have bankrupted AIG had it not been bailed out, and without the bailout, losses would have propagated even further through the system, causing more institutions to default.
In response to the crisis, the US passed the Dodd-Frank Act, which, alongside many other regulations, mandated the clearing of certain OTC derivatives. Institutions trading these OTC derivatives now have to go through a central counterparty (CCP), which keeps various default safety funds to protect against the kind of leveraging that led to the Lehman Brothers default. International policy changed as well; for example, the EU passed EMIR, mandating the clearing of various classes of OTC derivatives.
In addition to clearing, banks started applying other risk management practices in the form of portfolio trade compression, a key tool in handling the fallout from the financial crisis. Due to the use of portfolio compression, Lehman's trade positions were considerably smaller than their gross totals: while cleaning up trades in October 2008, after the default, CLS Group, a third-party middleman for interbank transactions, processed $5.2 billion in net settlements, corresponding to a $72 billion notional amount (London Clearing House, 2012). Trade compression has also gained traction since the financial crisis: TriOptima and LCH.Clearnet Limited (LCH.Clearnet) "compressed out $110 trillion in total notional volume in EUR, JPY, GBP and USD interest rate swaps... using TriOptima's triReduce since 2008" (TriOptima, 2017). However, while it is clear that trade compression can significantly reduce market exposure levels, its effect on systemic risk is unclear.
Theoretically, trade compression looks to eliminate chains of trades in a
network.

Figure 1: A graphical example of compression (from D'Errico and Roukny): (a) a cycle of trades among counterparties A, B, C, and D before compression, and (b) the same market after compression, with the cycle netted down.

Ideally, compression would eliminate all such cycles from a network. However, the complexity of markets often prevents this (D'Errico, 2017). Therefore, firms offering compression services often use a conservative approach to compression, wherein trades are only removed to a given extent.
Currently, the main provider of trade compression is TriOptima, with over 260 clients globally (TriOptima, 2017). LCH, SwapClear, and CLS all have deals with TriOptima to use its service on cleared and settled trades, respectively. TriOptima's compression service, triReduce, uses a hybrid of conservative and non-conservative compression, cycling through dealer and client trades and compressing trades based on TriOptima's own constraints, as well as constraints set by customers detailing the exposures they are open to taking on (TriOptima, 2017).
According to TriOptima, triReduce has greatly reduced counterparty exposure.
Figure 2 shows the exposure levels (z-axis) between two counterparties (x-axis
and y-axis intersection) before (left) and after (right) applying compression.

Figure 2: Claimed reductions in counterparty risk exposures after the use of triRe-
duce on uncleared trades [Source: TriOptima (2017)]

While figure 2 demonstrates a reduction in gross exposures, it is unclear whether systemic risk is lowered; a reduction in exposure may not correspond to a decrease in default risk. The goal of this paper is to provide a framework for understanding default risk in trade-compressed OTC derivative markets.

2 Market Structures
In order to test trade compression, we must model various asset classes of OTC derivatives. First, we model the bilateral case; to this we can apply each form of compression, and then central clearing. In a discussion with Roukny, he stressed that "the interaction between central clearing and compression is not well understood," so the models presented in this paper are a simplified case where compression is only applied to markets prior to clearing (personal communication, Oct 30, 2017).
To define the base structure for the markets presented in this paper, we consider the weighted adjacency matrix of counterparty exposures $E$. In this matrix, a given counterparty $i$ has exposures given by the corresponding row $i$, where an expected inflow of capital is a positive position and an outflow is negative. Note that this adjacency matrix is skew-symmetric, as can be seen in the example matrix given in figure 3.

Figure 3: The weighted adjacency matrix for a given market with 10 counterparties

This adjacency matrix can then be represented as a directed market graph, where directions denote the flow of capital; i.e., an arrow from counterparty $i$ to counterparty $j$ denotes a trade where $i$ is expected to pay $j$ an amount equal to the weight of the edge between the two parties. Figure 4 is an example of the directed graph defined by the matrix in figure 3.

Figure 4: The graph representation of the given adjacency matrix (edge darkness
indicates weight)
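To make the edge convention concrete, the following minimal sketch (Python with networkx; the helper name is ours) builds such a directed market graph from a skew-symmetric exposure matrix:

    import numpy as np
    import networkx as nx

    def market_graph(E):
        # Sketch: an edge i -> j with weight w means i is expected to pay
        # j the amount w; under the sign convention above this corresponds
        # to e_ij = -w (an outflow for i) and e_ji = +w (an inflow for j).
        G = nx.DiGraph()
        G.add_nodes_from(range(E.shape[0]))
        for (i, j), e in np.ndenumerate(E):
            if e < 0:
                G.add_edge(i, j, weight=-e)
        return G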

Note that, as each row in the adjacency matrix corresponds to a different counterparty's positions, the net assets $a_i$ and net liabilities $l_i$ for a given institution $i$ are simply the sums of row $i$ and column $i$, respectively. Thus we define for counterparty $i$ the assets $a_i$ and liabilities $l_i$ as

$$a_i = \sum_{j=1}^{n} e_{ij}, \qquad l_i = \sum_{j=1}^{n} e_{ji},$$

where $e_{ij}$ is the entry in row $i$, column $j$ of the matrix $E$.


In other words, net assets are row sums, and net liabilities are column sums. Note that, as the matrix $E$ is skew-symmetric, we have $a_i = -l_i$ for each counterparty $i$. However, in real-world markets, very little data is available to the public, and the only consistently available data are gross notional amounts for each counterparty. Thus we must impose further structure to calculate $a$, $l$, and the counterparty exposures; this structure is the basis of our bilateral market model.
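As a concrete illustration, these bookkeeping identities can be checked on a toy market (the numbers are illustrative only):

    import numpy as np

    # A toy skew-symmetric exposure matrix for three counterparties
    E = np.array([[ 0.0,  5.0, -2.0],
                  [-5.0,  0.0,  3.0],
                  [ 2.0, -3.0,  0.0]])

    a = E.sum(axis=1)            # net assets a_i: row sums
    l = E.sum(axis=0)            # net liabilities l_i: column sums
    assert np.allclose(a, -l)    # skew-symmetry gives a_i = -l_i
    g = np.abs(E).sum(axis=1)    # gross notional g_i: absolute row sums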
With the bilateral model, we calculate net positions given the gross notional for each counterparty. This is accomplished using the methodology outlined by Gandy and Veraart: first, we sample net notional from a normal distribution; then we find an initial feasible network; finally, we use Gibbs sampling to converge to our target distribution (Gandy, 2016). To this bilateral market, we can apply central clearing. For central clearing, we define two models: a market with a single central clearing party, and a market with multiple central clearing parties. The following sections outline the exact methodologies used.

2.1 Bilateral Market Model

As we often only have gross notional amounts for the counterparties in a given market, we will model the entire market from this data alone. While net notional is equivalent to net assets $a_i$ as defined above, gross notional is equivalent to gross assets; i.e., for counterparty $i$, the gross notional $g_i$ is

$$g_i = \sum_{j=1}^{n} |e_{ij}|.$$

While we do not have data on each counterparty's individual assets and liabilities (i.e., each entry in a given row of matrix $E$), the ratio of net to gross notional amounts is known. For a given counterparty $i$, we define the asset and liability values $e_i^+$ and $e_i^-$ as

$$e_i^+ = \frac{g_i + a_i}{2}, \qquad e_i^- = \frac{g_i - a_i}{2}.$$

Note that $e_i^+$ and $e_i^-$ are the sums of the positive entries and of the absolute values of the negative entries of row $i$, respectively; thus $a_i = e_i^+ - e_i^-$ and $g_i = e_i^+ + e_i^-$. However, to compute these values, we need to estimate $a_i$, as the net asset data is not available. To do this, we must define an initial feasible network. First, we define an Erdős-Rényi graph, using assets and liabilities to constrain the market. We then apply the Edmonds-Karp max-flow algorithm to this graph to obtain our initial feasible network. Finally, we use Gibbs sampling, a form of Markov chain Monte Carlo, on the initial network to build a chain of networks that converges to the assumed distribution of our market.
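A minimal sketch of the initial-feasible-network step, assuming gross asset and liability totals with equal sums (function and parameter names are ours; Gandy and Veraart's actual construction differs in its details):

    import numpy as np
    import networkx as nx
    from networkx.algorithms.flow import edmonds_karp

    def initial_feasible_network(e_plus, e_minus, p=0.5, seed=0):
        # Sketch: find nonnegative bilateral amounts X[i, j] (what j owes
        # i) with row sums e_plus and column sums e_minus, supported on an
        # Erdos-Renyi edge set, by solving a max-flow problem with
        # Edmonds-Karp. E = X - X.T then gives a skew-symmetric exposure
        # matrix consistent with the sampled assets and liabilities.
        rng = np.random.default_rng(seed)
        n = len(e_plus)
        G = nx.DiGraph()
        for i in range(n):
            G.add_edge("s", ("out", i), capacity=float(e_plus[i]))
            G.add_edge(("in", i), "t", capacity=float(e_minus[i]))
            for j in range(n):
                if i != j and rng.random() < p:
                    G.add_edge(("out", i), ("in", j))  # no capacity = unbounded
        value, flow = nx.maximum_flow(G, "s", "t", flow_func=edmonds_karp)
        if not np.isclose(value, float(np.sum(e_plus))):
            raise ValueError("sampled edge set infeasible; resample the graph")
        X = np.zeros((n, n))
        for i in range(n):
            for (_, j), f in flow[("out", i)].items():
                X[i, j] = f
        return X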

2.2 Single Central Clearing Party Model

CCPs are simply middlemen in a financial market, so modeling a network with a single CCP is just a restructuring of the market's adjacency matrix. Figure 5 demonstrates the case of adding a single CCP to the market in figure 4.

Figure 5: A centrally cleared version of the market in figure 4

To restructure an arbitrary market, we do the following. First, we define an $(n+1) \times (n+1)$ matrix, where $n$ is the number of counterparties in the original market. For each entry $i$ in the first row of the new matrix, we take the sum of column $i-1$ in the original matrix, with the first entry being zero. Similarly, for the first column, we take the sum of row $i-1$ in the original matrix. All other entries of the new matrix are zero.

Notice that the restructured market simply records the net trades into or out of each counterparty $i-1$. Thus the exposures for each counterparty remain the same in both markets.
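A minimal sketch of this restructuring (the helper name is ours):

    import numpy as np

    def clear_single_ccp(E):
        # Sketch: the CCP is node 0 of the new (n+1) x (n+1) matrix. Its
        # row holds the column sums of E (net liabilities) and its column
        # holds the row sums (net assets), so the result stays
        # skew-symmetric and every counterparty's net exposure is unchanged.
        n = E.shape[0]
        C = np.zeros((n + 1, n + 1))
        C[0, 1:] = E.sum(axis=0)   # entry i of row 0: sum of column i-1 of E
        C[1:, 0] = E.sum(axis=1)   # entry i of column 0: sum of row i-1 of E
        return C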

2.3 Multiple Central Clearing Party Market

In real-world markets, there is often more than one CCP for a single derivative asset class. Thus, we will look into adding multiple CCPs to a market with and without the various types of compression.

Similar to the single-CCP case, the multi-CCP case is a restructuring of the original market. In this case, we create an $(n+c) \times (n+c)$ matrix, where $c$ is the number of CCPs in the market. We then populate the entries in the first $c$ rows. To get the first $c$ entries in column $i$, we sum the entries of column $(i-c)$ in the original matrix, with the first $c$ entries in the new matrix being zero. Note that this sum equals the exposure between $i$ and a single CCP. We take this exposure and scale it by the proportion of $i$'s exposure to each CCP; i.e., we distribute the net exposure of $i$ among the CCPs. Each entry is then the given proportion of the total sum for column $i-c$. We do likewise for the rows: the first $c$ entries of a given row $i$ are the sum of row $(i-c)$ in the original matrix; a normal distribution is sampled, and the $c$ entries are populated with the given proportions of the row sum. The rest of the matrix is populated with zeros.
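A sketch of the multi-CCP restructuring; normalizing absolute normal draws into proportions is our assumption about the sampling step:

    import numpy as np

    def clear_multi_ccp(E, c, seed=0):
        # Sketch: as in the single-CCP case, but each counterparty's net
        # position is split across c CCPs (nodes 0..c-1). Using the same
        # proportions for the row and column split keeps M skew-symmetric.
        rng = np.random.default_rng(seed)
        n = E.shape[0]
        M = np.zeros((n + c, n + c))
        for i in range(n):
            w = np.abs(rng.normal(size=c))
            w /= w.sum()                      # assumed: proportions sum to 1
            M[:c, c + i] = w * E[:, i].sum()  # first c rows: split column sum
            M[c + i, :c] = w * E[i, :].sum()  # first c cols: split row sum
        return M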

3 Trade Compression Overview


There are many competing models of trade compression; this paper is based on the model proposed by D'Errico and Roukny. Here we define a market as a graph $G = (N, E)$, where $N$ is the set of counterparties and $E$ is the set of trades, and compression is an operation $c : G \to G^*$, where $G^* = (N, E^*) := c(N, E)$ satisfies

$$a_i^* = a_i \quad \text{and} \quad g_i^* \le g_i \quad \forall i \in N$$

(with at least one inequality strict). Thus compression keeps net positions, or assets, constant, and reconfigures edges such that the gross position is decreased for at least one counterparty. In order to optimally apply compression to a market, we further classify its participants: a counterparty is defined as a dealer if it is both buying and selling, and as a customer otherwise. A market can then be partitioned into two subgraphs $G^d = (N, E^d)$ and $G^c = (N, E^c)$, with $E^d \cap E^c = \emptyset$ and $E^d \cup E^c = E$.
In order to analyze the efficiency of various methods of compression, we compare the decrease in the value of positive trades. Note that this decrease is bounded by the net positions for each counterparty. The difference between the value of positive trades and the net value for each counterparty is defined as the excess. Thus, for a given market $G$, the excess $\Delta(G)$ is defined as

$$\Delta(G) = \frac{\sum_{i \in N}\Bigl(\sum_{j \in N} |e_{ij}| - \bigl|\sum_{j \in N} e_{ij}\bigr|\Bigr)}{2} = \frac{\sum_{i \in N}\bigl(g_i - |a_i|\bigr)}{2}.$$

Trade compression always reduces excess in a market (D'Errico, 2017).
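A one-function sketch of this quantity:

    import numpy as np

    def excess(E):
        # Excess of a market: half the gap between each counterparty's
        # gross position and the absolute value of its net position.
        gross = np.abs(E).sum(axis=1)          # g_i
        net = np.abs(E.sum(axis=1))            # |a_i|
        return float((gross - net).sum() / 2)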


We now define four types of compression that can be applied to the above system. The methods are differentiated by $a_{ij}$ and $b_{ij}$, the lower and upper bounds, respectively, on the trade volume between institutions $i$ and $j$.

Definition 1. We define the following types of loop compression algorithms for use on a market:

- Conservative: $a_{ij} = 0$ and $b_{ij} = e_{ij}$
- Non-conservative: $a_{ij} = 0$ and $b_{ij} = \infty$
- Hybrid: $a_{ij} = 0$ and $b_{ij} = e_{ij}$ for all $(i,j) \in E^c$; $a_{ij} = 0$ and $b_{ij} = \infty$ for all $(i,j) \in E^d$
- Bilateral: $a_{ij} = b_{ij} = \max\{e_{ij} - e_{ji}, 0\}$

As non-conservative compression has no upper bound on edge weights, we can always find a solution that results in no excess (D'Errico, 2017); thus the network is maximally compressed. Conversely, conservative compression is bounded by the initial trade amounts, and thus cannot always remove all excess. Hybrid compression combines the two methods by being conservative on customers and non-conservative on dealers. Finally, bilateral compression looks at each bilateral trade and conservatively compresses the loop between the two counterparties. Therefore, the efficiency of each operation, measured by the excess remaining after compression, satisfies

$$\Delta(G)_{\text{bilateral}} \ge \Delta(G)_{\text{conservative}} \ge \Delta(G)_{\text{hybrid}} \ge \Delta(G)_{\text{non-conservative}}.$$

While it would be optimal to apply non-conservative compression, the complexity of real-world markets often prevents this; thus the standard for compression services is a more conservative approach.

Note that non-conservative compression contains conservative compression as a special case, and thus it is possible for both methods to result in the same compressed market.

4 Modeling Overview
In this section, we outline the exact models for compression and risk analysis in the aforementioned market structures. For our trade compression algorithms, we implement non-conservative, hybrid, and conservative compression. For conservative and hybrid compression, we use the network simplex method, as outlined in D'Errico and Roukny; the network simplex is simply a minimum-cost flow algorithm. In this case, we define node positions and trade bounds to constrain the network, then apply the network simplex to find the minimum flow that keeps the network feasible. Non-conservative compression uses L1 matrix minimization as an equivalent algorithm to network compression.
To measure risk levels in each market structure, we apply the interbank contagion model proposed by Eisenberg and Noe. As this model simulates default at a single counterparty, the model is applied to each counterparty in the market, and the average loss is calculated over all cases. We simulate a trigger at each node individually with a shock; for simplicity, the shock completely wipes out the triggering counterparty. As the relation between clearing and compression is not well understood, any CCPs in the market are ignored in the triggering step and are regarded as unable to default.

To check whether a bank defaults, we look to its reserve levels to see if there is sufficient capital to avoid a default. Reserve levels are taken from Federal Reserve data on the top 25 U.S. banks: we take each bank's consolidated assets, normalize to find its share of assets, and then multiply by the total level of net assets for all U.S. banks.

In the CCP case, we assume a "cover two" default model, where each CCP holds enough in reserve to cover the larger of the single largest exposure or the sum of the second and third largest exposures. In addition, we add a buffer to account for surplus reserve levels.
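A sketch of this reserve rule, following the wording above; the 1.13 multiplier anticipates the ESMA surplus figure used in section 5, and the rule assumes at least three exposures:

    def ccp_reserves(exposures, surplus=1.13):
        # "Cover two" with a surplus buffer: cover the larger of the single
        # largest exposure or the sum of the second and third largest.
        x = sorted(exposures, reverse=True)
        return surplus * max(x[0], x[1] + x[2])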

4.1 Compression Models

According to Roukny, conservative compression is the most commonly implemented form of compression, with most real-world algorithms based on conservative models (personal communication, Oct 30, 2017). Conservative compression seeks to minimize excess while using current trade levels as an upper bound for changes to the network. We can therefore reformulate conservative compression as finding a minimum-cost flow in the network, so an optimal solution can be found using the network simplex algorithm (Appendix B.1). In the conservative case, we define each node's demand to be its net position, then use trade levels to bound the maximum flow an edge can support; the problem is then equivalent to conservative compression. For details on the network simplex, and how it provides an optimal solution to conservative compression, we refer to D'Errico and Roukny, Appendix E.2.
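A minimal sketch of this reduction, assuming integer trade sizes (networkx's network_simplex expects integral data for exact arithmetic; the function name is ours):

    import numpy as np
    import networkx as nx

    def conservative_compression(E):
        # Sketch: conservative compression as a min-cost flow. P[i, j] is
        # the payment i owes j (the negative entries of the skew-symmetric
        # E); node demands are net positions; each edge is capped at its
        # current size and each unit of flow costs 1, so the solver
        # preserves net positions while minimizing gross notional.
        n = E.shape[0]
        P = np.rint(np.maximum(-E, 0)).astype(int)   # payments i -> j
        a = P.sum(axis=0) - P.sum(axis=1)            # net inflow per node
        G = nx.DiGraph()
        for i in range(n):
            G.add_node(i, demand=int(a[i]))          # inflow - outflow = a_i
        for i in range(n):
            for j in range(n):
                if P[i, j] > 0:
                    G.add_edge(i, j, capacity=int(P[i, j]), weight=1)
        cost, flow = nx.network_simplex(G)           # cost = compressed gross
        E_star = np.zeros_like(E, dtype=float)
        for i, out in flow.items():
            for j, f in out.items():
                E_star[i, j] -= f                    # i pays j: negative entry
                E_star[j, i] += f                    # j receives from i
        return E_star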
In addition to conservative compression, the network simplex method can be applied to hybrid compression. As hybrid compression is conservative over customers and non-conservative over dealers, we bound the flow to and from customers by the trade level, while placing no bound on inter-dealer flow; i.e., the maximal flow, or trade, between two dealers is unconstrained.

Non-conservative compression, in contrast to conservative and hybrid compression, is unbounded over the entire market, and many algorithms exist for it. In this paper, we propose the use of L1 matrix minimization as a form of non-conservative compression.

Definition 2. L1 minimization is a constrained optimization problem that minimizes the sum

$$\sum_{i=1}^{n}\sum_{j=1}^{n} |e_{ij}| = \sum_{i=1}^{n}\sum_{j=1}^{n} \left(e_{ij}^+ + e_{ij}^-\right),$$

where $E^+ - E^- = E$, subject to

$$\sum_{j=1}^{n}\left(e_{ij}^+ - e_{ij}^-\right) = a_i, \qquad e_{ij} = -e_{ji} \quad \forall i, j \in N.$$

Note that $E$ is skew-symmetric, so we have

$$a_i = \sum_{j=1}^{n} e_{ij} = -\sum_{j=1}^{n} e_{ji} = -l_i.$$

In order to determine whether a given matrix is L1 minimal, we define the following optimality check.

Lemma 1. If, for all $i, j \in N$, either $e_{ij} = 0$, or $\mathrm{sgn}(e_{ij}) = \mathrm{sgn}(a_i)$ and $\mathrm{sgn}(e_{ij}) = \mathrm{sgn}(l_j)$, then $E$ is L1 minimal.

Proof. See Appendix A.1.

Now that we have defined an optimality check for L1 minimization, we prove that any matrix can be compressed to be L1 minimal.

Theorem 2. Any market can be compressed to be L1 minimal.

Proof. See Appendix A.2.

In addition to always having a feasible solution, we now show that the result of L1 minimization is a form of non-conservative compression.

Corollary. After applying L1 minimization to a market, every participant in the market becomes a customer.

Proof. If every institution has either only positive or only negative trades, it is either only buying or only selling. By definition, this makes every institution a customer.

Corollary. Minimizing the L1 norm of the adjacency matrix $E$ of a market eliminates all excess in the underlying market.

Proof. Any market in which all participants are customers has zero excess (D'Errico, 2017). Thus, as L1 minimization results in a market with only customers, it eliminates all excess.

We therefore use L1 minimization because it is equivalent to non-conservative compression and a feasible solution can always be found.
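A sketch of L1 minimization posed as a linear program; the variable layout and solver choice are ours:

    import numpy as np
    from scipy.optimize import linprog

    def l1_min_market(a):
        # Sketch: only the entries above the diagonal are free
        # (skew-symmetry fixes the rest); each is split into positive and
        # negative parts p, q >= 0, so the objective sum(p + q) equals the
        # total |e_ij| over i < j. Requires sum(a) == 0.
        a = np.asarray(a, dtype=float)
        n = len(a)
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
        m = len(pairs)
        c = np.ones(2 * m)                           # minimize sum of |e_ij|
        A_eq = np.zeros((n, 2 * m))
        for k, (i, j) in enumerate(pairs):
            A_eq[i, k], A_eq[i, m + k] = 1.0, -1.0   # e_ij in row i's sum
            A_eq[j, k], A_eq[j, m + k] = -1.0, 1.0   # e_ji = -e_ij in row j
        res = linprog(c, A_eq=A_eq, b_eq=a, bounds=(0, None))
        E = np.zeros((n, n))
        for k, (i, j) in enumerate(pairs):
            E[i, j] = res.x[k] - res.x[m + k]
            E[j, i] = -E[i, j]
        return E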

4.2 Risk Propagation Model

To get an accurate measure of the effects of default on a given financial market, we simulate the default of each counterparty in the market and take the average result over all cases. The following is a model for the market value lost after the default of an arbitrary counterparty.

Given a financial market $G = (N, E)$, we define the vector $r$ of 'capital reserves' of each counterparty in the network (this reflects the counterparty's absorption capacity). Now we simulate the default of an arbitrary $i \in N$. First, define $\Gamma$ to be the set of all counterparties that have defaulted; initially, $\Gamma^1 = \{i\}$. The default is triggered by wiping out all assets of $i$: the reserves of $i$ are depleted, and all incoming trades are needed to pay off additional debts, so each $j$ adjacent to $i$ receives nothing on any assets from $i$. We now define the updated matrix of trades $E^1$ as follows:

$$e_{ij}^1 = \begin{cases} 0 & e_{ij} > 0 \\ e_{ij} & e_{ij} \le 0 \end{cases} \qquad \forall j \in N,\ i \text{ fixed.}$$

Thus $E^1$ now represents the total amounts traded after taking into account the default of $i$. We also define $r^1$ as

$$r_i^1 = \begin{cases} r_i & i \notin \Gamma^1 \\ 0 & i \in \Gamma^1. \end{cases}$$
Now, to determine whether this causes $j$ to default, we calculate $p_j^1$, the net position of $j$:

$$p_j^1 = r_j^1 + \sum_{k \in N} e_{kj}^1.$$

Thus $j$ defaults if $p_j^1 < 0$. We now define $\Gamma^2 = \Gamma^1 \cup \{j \in N \mid p_j^1 < 0\}$ and, for all $j \in \Gamma^2$,

$$e_{jk}^2 = \begin{cases} \dfrac{l_j^1 + p_j^1}{l_j^1}\, e_{jk}^1 & e_{jk}^1 > 0,\ k \in \Gamma^2 \\ e_{jk}^1 & \text{otherwise.} \end{cases}$$

Note that if a trade exists between two defaulted counterparties, say from $j$ to $k$, then $k$ will still collect on the trade, despite the default of $j$, to mitigate its losses. We also define

$$r_i^2 = \begin{cases} r_i^1 & i \notin \Gamma^2 \\ 0 & i \in \Gamma^2. \end{cases}$$

Next, we look at all counterparties $k$ adjacent to the newly defaulted $j \in \Gamma^2$, calculate each $p_k^2$, and define $\Gamma^3 = \Gamma^2 \cup \{k \in N \mid p_k^2 < 0\}$; $E^3$ is then calculated analogously to $E^2$. We continue the above until no new counterparties default, and call this terminal step $*$. At this point either all counterparties have defaulted, or the parties remaining are resistant to default.

Finally, we calculate the loss of value due to the default of the triggering counterparty $i$:

$$VL_i = \sum_{j \in N}\left(r_j - r_j^*\right) + \sum_{i,j}\left(e_{ij} - e_{ij}^*\right).$$

This takes into account the loss of capital reserves as well as the value lost on traded assets. After repeating the above algorithm for all $i$, we compute the average value lost as

$$L = \frac{1}{N}\sum_{i=1}^{N} VL_i.$$
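A compact sketch of this cascade; the sign conventions and the pro-rata recovery step reflect our reading of the model above rather than a verbatim transcription:

    import numpy as np

    def value_lost(E, r, trigger):
        # Sketch: P[i, j] >= 0 is the payment i owes j, read off the
        # negative entries of the skew-symmetric E. The trigger's reserves
        # and outgoing payments are wiped; any node whose reserves plus net
        # receipts go negative defaults and pays creditors pro rata at rate
        # (l_j + p_j) / l_j.
        n = E.shape[0]
        P0 = np.maximum(-E, 0.0)         # original obligations i -> j
        P = P0.copy()
        r0 = np.asarray(r, dtype=float)
        rr = r0.copy()
        defaulted = {trigger}
        rr[trigger] = 0.0
        P[trigger, :] = 0.0              # the defaulter pays nothing
        while True:
            receipts = P.sum(axis=0)     # what each node actually receives
            owed = P0.sum(axis=1)        # what each node originally owed
            p = rr + receipts - owed     # net position after the shock
            new = {j for j in range(n) if p[j] < 0 and j not in defaulted}
            if not new:
                break
            for j in new:
                l_j = P0[j, :].sum()
                rate = max(0.0, (l_j + p[j]) / l_j) if l_j > 0 else 0.0
                P[j, :] = rate * P0[j, :]    # pro-rata partial payment
                rr[j] = 0.0
            defaulted |= new
        return float((r0 - rr).sum() + (P0 - P).sum())

Averaging value_lost over every choice of trigger then gives the quantity $L$ above.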

5 Results
The data used in the base bilateral market model are the gross notional amounts from the fourth quarter of 2016 for the top 25 commercial banks, savings associations, and trust companies in the United States (Comptroller, 2017). This data contains the gross notional for three asset classes: Interest Rate Swaps, Foreign Exchange, and Credit Derivatives. In addition, the surplus reserves for the CCP case were calculated using EU stress test data; the average CCP held a 13% surplus on required capital, so each CCP's reserve levels have a multiplier of 1.13 (ESMA, 2018).
Using the aforementioned gross notional data with the bilateral market model, we create a chain of 10,000 bilateral trading networks for each asset class. For the Gibbs sampling, we want the output to start after the data has already converged, so we define a burn-in period of 1,000,000 samples. Furthermore, we do not want successive markets to be correlated, so we thin the data by sampling between each market; the thinning step used was 10,000 samples, to ensure little correlation between subsequent markets in the chain.
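Schematically, the chain construction looks as follows, where sampler_step is a hypothetical hook standing in for one Gibbs update of the exposure matrix (see Gandy and Veraart for the actual update):

    def sample_market_chain(sampler_step, E0, n_markets=10_000,
                            burn_in=1_000_000, thin=10_000):
        # Sketch: discard burn_in Gibbs updates, then keep every thin-th
        # state so successive kept markets are nearly uncorrelated.
        E = E0
        for _ in range(burn_in):
            E = sampler_step(E)
        chain = []
        for _ in range(n_markets):
            for _ in range(thin):
                E = sampler_step(E)
            chain.append(E.copy())
        return chain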
For each market in the aforementioned chain, we apply the conservative, hybrid, and non-conservative compression algorithms, resulting in three additional chains, one for each compression method. Then, for each market in each chain, we restructure the market to account for all CCPs in the given asset class, resulting in four additional chains. Thus, for each of the three asset classes used, we are left with eight chains: four bilateral and four cleared. We apply the risk propagation model to each network in all 24 chains. Table 1 shows the average result for each chain.

Table 1: Value lost calculated from the risk propagation model in each market. Average of results over each 10,000-market chain (loss in millions of $).

                      IRS                    FX                     Credit
Compression Method    Bilateral   Cleared    Bilateral   Cleared    Bilateral   Cleared
Base Market           3,441.92    1,638.32   1,365.75    496.49     675.77      278.57
Conservative          3,282.71    1,638.32   1,222.33    496.49     614.13      278.57
Hybrid                1,719.35    1,638.32     592.06    496.49     360.48      278.57
Non-conservative      1,687.42    1,638.32     588.31    496.49     341.85      278.57

The graphs in figures 6 through 11 show the risk results for each of the 10,000
networks in a given chain. The averages shown in table 1 were calculated using
the data in figures 6 through 11.

Figure 6: Bilateral IRS market loss after triggering default (title market in blue, other markets in gray), and the distribution of losses

Figure 7: Centrally cleared IRS market loss after triggering default (title market in blue, other markets in gray), and the distribution of losses

Figure 8: Bilateral Forex market loss after triggering default (title market in blue, other markets in gray), and the distribution of losses

Figure 9: Centrally cleared Forex market loss after triggering default (title market in blue, other markets in gray), and the distribution of losses

Figure 10: Bilateral Credit market loss after triggering default (title market in blue, other markets in gray), and the distribution of losses

Figure 11: Centrally cleared Credit market loss after triggering default (title market in blue, other markets in gray), and the distribution of losses
6 Conclusions
For each asset class, compression had no effect on the loss in the cleared market; it is therefore apparent that compression makes no difference in cleared markets. A possible explanation is that central clearing restructures the market based on net positions; since compression does not change net positions, it cannot change the structure of a market after clearing. In addition, the model used did not take into account cross-asset-class netting, and the data is a "snapshot" of the market in which time to maturity is disregarded. The effect of either of these factors on the results is unknown.
In all three asset classes, we see similar levels of reduction in loss across bilateral, compressed, and cleared markets. Furthermore, between the different types of compression, we again see similar levels of reduction for the conservative, hybrid, and non-conservative methods. Table 2 shows these differences relative to the level of loss seen in the bilateral market. Note that central clearing is generalized to a single case for each asset class, as compression did not affect the value lost.

Table 2: The difference in average value lost compared to the base (bilateral) market

The reason that conservative compression resulted in only a marginal reduction is that conservative compression only allows the elimination of closed chains of intermediation, which are not always present; thus conservative compression is not always possible. Non-conservative compression, on the other hand, is always possible and reduces gross positions as much as possible; thus we see more of a difference between non-conservative compression and the bilateral market.
Interestingly, hybrid compression offered a reduction similar to non-conservative compression. There are several possible explanations. First, it is possible that the simulated markets did not have many customers, so hybrid compression mostly amounted to non-conservative compression over many dealers. Another possibility is that dealer exposures matter more than client exposures, so the non-conservative compression over dealers was key to preventing default. The data do not strongly support either case, as most markets average a 50/50 split between dealers and customers.
As all forms of compression reduced value lost compared to the base market, with non-conservative compression yielding a reduction comparable to clearing, we conclude that a reduction in exposure may correlate with a reduction in default risk. Additionally, as cleared markets had equal loss under all forms of compression, it would seem that compression is not necessary in those markets. However, the interaction between compression and clearing is not well understood. Real-world markets are far more complex than the models used, and adding model complexity in the form of cross-asset netting or time to maturity might affect value lost, but this is beyond the scope of this project.

7 References

1. Culp, Christopher L. (2010). OTC-Cleared Derivatives: Benefits, Costs, and Implications of the Dodd-Frank Wall Street Reform and Consumer Protection Act. Journal of Applied Finance, 1(2). Available at http://www.rmcsinc.com/articles/OTCCleared.pdf

2. D'Errico, Marco and Roukny, Tarik (May 23, 2017). Compressing Over-the-Counter Markets. Preprint. Available at https://arxiv.org/pdf/1705.07155.pdf

3. Eisenberg, Larry and Noe, Thomas H. (February 2001). "Systemic Risk in Financial Systems." Management Science, 47(2), pp. 236-249.

4. European Securities and Markets Authority (February 2, 2018). "EU-wide CCP Stress Test 2017." Available at http://firds.esma.europa.eu/webst/ESMA70-151-1154%20EU-wide%20CCP%20Stress%20Test%202017%20Report.pdf

5. Gandy, A. and Veraart, L.A.M. (May 2016). A Bayesian Methodology for Systemic Risk Assessment in Financial Networks. Available at http://ssrn.com/abstract=2580869

6. London Clearing House (February 23, 2012). "TriOptima and LCH.Clearnet Compression of Cleared Interest Rate Swaps Exceeds $100 Trillion in Notional; $20.4 Trillion Compressed in 2012 Alone." Available at http://www.swapclear.com/knowledge/news/press-releases/2012-02-23.html

7. Office of the Comptroller of the Currency (2017). "Quarterly Report on Bank Trading Activities: Fourth Quarter 2016." March 2017.

8. O'Kane, Dominic (March 2014). Optimising the Compression Cycle: Algorithms for Multilateral Netting in OTC Derivatives Markets. EDHEC Business School.

9. TriOptima (2017). triReduce Portfolio Compression Fact Sheet. Available at https://www.trioptima.com/media/filer_public/31/f1/31f14682-5137-4ef1-80af-e8bb09835dce/trireduce_general_factsheet.pdf

10. Wellen, Natalie (May 1, 2017). Modeling Over-the-Counter Derivative Trading with and without Central Clearing Parties. MQP Report, Worcester Polytechnic Institute. Available at https://web.wpi.edu/Pubs/E-project/Available/E-project-050117-032155/

A Proofs

A.1 Lemma 1

Proof. Suppose $E$ is an L1-minimal matrix such that some row $r$ has entries $e_{ra} > 0$ and $e_{rb} < 0$. Then

$$|E|_1 = \sum_{i,j \in N} |e_{ij}| \ge |e_{ra}| + |e_{rb}|,$$

but for row $r$, $a_r = e_{ra} + e_{rb}$, so $|a_r| < |e_{ra}| + |e_{rb}|$. Thus we have

$$|E|_1 > \sum_{i \in N} |a_i|.$$

However, $|a_i|$ is a lower bound on the L1 norm of each row $i$, so $|E|_1$ exceeds the minimum bound, meaning $E$ is not L1 minimal, and we have a contradiction. Thus, if $E$ is L1 minimal, then for all $i, j \in N$, either $e_{ij} = 0$, or $\mathrm{sgn}(e_{ij}) = \mathrm{sgn}(a_i)$ and $\mathrm{sgn}(e_{ij}) = \mathrm{sgn}(l_j)$.

A.2 Theorem 2

Proof. Let $E$ be the adjacency matrix associated with a given market, and $a$ the vector of net assets. If $a = \vec{0}$, then $E$ can be replaced by the zero matrix and we are done. If $a \neq \vec{0}$, then there exists an $a_i \in a$ with $a_i \neq 0$. We know that

$$\sum_{i \in N} a_i = 0,$$

so there exist negative entries $a_j$ whose total is $\le -a_i$. Now we partition the vector $a$ into $a^+ = \{a_i \in a \mid a_i > 0\}$ and $a^- = \{a_i \in a \mid a_i < 0\}$; similarly, we partition $l$ into $l^+ = \{l_i \in l \mid l_i > 0\}$ and $l^- = \{l_i \in l \mid l_i < 0\}$. Note that, since $a_i = -l_i$, the counterparties indexed by $a^+$ are exactly those indexed by $l^-$, and those indexed by $a^-$ are exactly those indexed by $l^+$.

Let $r \in a^+$ be given, and choose $c \in a^-$. We construct an optimal, skew-symmetric matrix $E^*$ by populating row $r$ of $E^*$ with values such that $E^*$ is L1 minimal. Clearly $r \neq c$, and $\mathrm{sgn}(e^*_{rc}) = 1 = \mathrm{sgn}(l_c)$, so we set $e^*_{rc} \le a_r$, and, as $E^*$ is skew-symmetric, $e^*_{cr} = -e^*_{rc} \ge -a_r = l_r$. If $a_r \le l_c$, then we set $e^*_{rc} = a_r$ and the row is optimal. Otherwise, we set $e^*_{rc} = l_c$ and, since the net assets sum to zero, we can find other entries in $l^+$ to populate the rest of the row similarly. Now let another row in $a^+$ be given; it can be populated in the same way, except that the remaining capacity of $c$ is now bounded by $l_c - a_r$. Continue this for all rows in $a^+$. As no negative values have been used yet, the same operation works for all rows indexed by $a^-$. Finally, for any row not indexed by $a^+$ or $a^-$, we have $a_i = 0$, and the row can be populated with zeros. By construction, $E^*$ contains only values that pass the optimality check in Lemma 1. As $E^*$ was constructed from an arbitrary matrix $E$, it is always possible to compress a network to be L1 minimal.

B Pseudocode for Algorithms
B.1 Network Simplex

Algorithm 1: Network simplex for trade compression

Input: Original market G = (N, E), set of risk tolerances
Output: G* such that x_0 is minimized

begin
    start with an initial tree structure E^T = (T, L, U)
    compute the total notional x_0, reduced costs, and node potentials
    while there exists some arc not in E^T that violates the optimality conditions do
        choose an edge (i, j) that violates the conditions
        add (i, j) to E^T and select the leaving edge (k, l)
        update E^T, x_0, and the node potentials
    end
end

B.2 Non-conservative Compression (L1 Minimization)

Algorithm 2: A deterministic non-conservative compression (L1 minimization) algorithm

Input: Original market G = (N, E)
Output: G* such that Delta(G*) <= Delta(G)

begin
    calculate assets a_i = sum_{j=1}^{n} e_ij and liabilities l_i = -a_i
    for each row i in E do
        for each entry j in row i do
            if sgn(l_j) = sgn(a_i) and sgn(l_j) = 1 then
                e*_ij = min(l_j, a_i - sum_{j=1}^{n} e*_ij)
            else if sgn(l_j) = sgn(a_i) and sgn(l_j) = -1 then
                e*_ij = max(l_j, a_i - sum_{j=1}^{n} e*_ij)
            end
        end
    end
end

B.3 Risk Propagation

Algorithm 3: Algorithm for applying and analyzing a shock to a given market

Input: Original market G = (N, E), reserve vector r
Output: The average loss of value L

begin
    for i in N do
        shock i
        add i to Gamma
        update e_ij for all j, and r_i
        while new counterparties are added to Gamma do
            for j adjacent to some i in Gamma do
                calculate p_j
                if p_j < 0 then
                    add j to Gamma
                end
            end
            update E and r for all i in Gamma
        end
        VL_i = sum_{i in Gamma} (r_i - r_i*) + sum_{i,j} (e_ij - e_ij*)
        Gamma = {}
    end
    L = (1/N) sum_{i=1}^{N} VL_i
end
