
4 Robust Estimates of the VCV Matrix

The Curse of Dimensionality

How many parameters do we need to estimate for a portfolio of 100 stocks?
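A worked answer, using the count of distinct entries in a symmetric N x N covariance matrix: N(N+1)/2 = (100 × 101)/2 = 5,050 parameters, i.e., 100 variances and 4,950 covariances. This is far more than the number of daily observations in a typical estimation sample.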
Challenges in Estimating the VCV Matrix
• Increased sample size requirement: to estimate all pairwise covariances reliably, the number of observations must grow rapidly with the number of variables; with fewer observations than assets, the sample covariance matrix is not even invertible.
• Estimation error: as dimensionality increases, sample covariance matrices become less stable, leading to larger estimation errors.

Extreme Example 1: No model risk, high sample risk
Extreme Example 2: High model risk, no sample risk

The Curse of Dimensionality

 In the presence of large portfolios: the number of parameters is often larger than the sample size.
 Increasing frequency: increasing the sampling frequency is necessary but not always sufficient.
 Introducing structure: introducing structure helps deal with sample risk, but this comes at the cost of model risk.

Factor-Based VCV Estimate

 With a two-factor model:
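A hedged sketch of a standard two-factor specification and the covariance matrix it implies (the exact notation on the original slides may differ):

r_{i,t} = \alpha_i + \beta_{i,1} F_{1,t} + \beta_{i,2} F_{2,t} + \varepsilon_{i,t},
\qquad
\Sigma = B \, \Omega_F \, B' + D

where B is the N x 2 matrix of factor loadings, \Omega_F the 2 x 2 factor covariance matrix, and D the diagonal matrix of residual variances. This reduces the number of parameters from N(N+1)/2 to 2N + 3 + N = 3N + 3; for N = 100, that is 303 instead of 5,050.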

 Using a factor model is a convenient way
to reduce the number of risk parameters to
estimate while introducing a reasonable amount
of model risk.
 An implicit factor model is often preferred since it lets the data tell us what the relevant factors are, thus alleviating model risk.
 PCA (principal component analysis) is the standard way to extract such implicit factors.
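A minimal Python sketch of an implicit-factor (PCA) covariance estimate, assuming a (T x N) NumPy array `returns` and a chosen number of factors `k` (both hypothetical names):

import numpy as np

def pca_vcv(returns, k):
    # Implicit-factor VCV: keep the k leading principal components as factors,
    # treat the remaining variation as diagonal idiosyncratic risk.
    X = returns - returns.mean(axis=0)            # demean (T x N)
    S = np.cov(X, rowvar=False)                   # sample covariance (N x N)
    eigval, eigvec = np.linalg.eigh(S)            # eigenvalues in ascending order
    idx = np.argsort(eigval)[::-1][:k]            # indices of the k largest
    B = eigvec[:, idx] * np.sqrt(eigval[idx])     # implied factor loadings (N x k)
    common = B @ B.T                              # systematic (factor) part
    resid = np.diag(np.clip(np.diag(S) - np.diag(common), 1e-12, None))
    return common + resid                         # structured VCV estimate

With 100 assets and, say, k = 5 factors, the structured estimate carries 500 loadings plus 100 residual variances, far fewer than the 5,050 free entries of the raw sample matrix.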

Honey I Shrunk the Covariance Matrix!

Shrinkage Approach

 Trade-off between sample risk and model risk.
 Shrinkage: do not choose between sample risk and model risk; instead, mix the two by combining the sample estimate with a structured target.
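The shrinkage estimator combines the sample matrix S with a structured target F through an intensity \delta \in [0, 1]:

\hat{\Sigma} = \delta F + (1 - \delta) S

A minimal sketch using scikit-learn's Ledoit-Wolf estimator, which shrinks toward a scaled-identity target; the array name `returns` is a hypothetical (T x N) input:

from sklearn.covariance import LedoitWolf

lw = LedoitWolf().fit(returns)        # estimates the optimal shrinkage intensity from the data
sigma_shrunk = lw.covariance_         # shrunk VCV estimate (N x N)
delta = lw.shrinkage_                 # estimated intensity, between 0 and 1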
Portfolio Construction with Time-Varying Risk Parameters

Rolling Windows
 The ‘historical’ covariance matrix is calculated on a T-day window that is rolled through time, each day adding the new return and taking off the oldest return:

H_t = \frac{1}{T} \sum_{i=1}^{T} r_{t-i} \, r_{t-i}'
 The sophistication of this model lies in the choice of the window length T. If the window is short, the estimate may be noisy. The longer the window, the less noisy the estimate, but the more biased it becomes when more distant observations, which may no longer be relevant today, are included in the calculation. Hence, the window length T directly determines the trade-off between the sampling error and the bias of the estimate.
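A minimal pandas sketch of the rolling-window estimate, assuming a (T x N) DataFrame `returns` of daily returns (a hypothetical name) and a 250-day window; note that pandas demeans and divides by T - 1, a slight departure from the uncentred 1/T formula above:

import pandas as pd

T = 250                                        # window length (an assumed choice)
rolling_vcv = returns.rolling(window=T).cov()  # covariance of the last T observations at each date
H_t = rolling_vcv.loc[returns.index[-1]]       # most recent N x N estimate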
The Curse of Non-Stationarity

 Increasing frequency is better than increasing the sample period in the case of non-stationary return distributions.
 In this context, using rolling windows is better than expanding windows.

The EWMA Model

 The EWMA covariance matrix has the following specification:
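A standard formulation of the EWMA (RiskMetrics-style) recursion for the covariance matrix, with decay parameter \lambda (the exact notation on the slide may differ):

H_t = \lambda H_{t-1} + (1 - \lambda) \, r_{t-1} r_{t-1}', \qquad 0 < \lambda < 1

so the weight on the observation lagged i periods decays as (1 - \lambda)\lambda^{i-1}; \lambda = 0.94 is the value commonly cited for daily data in RiskMetrics.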
The ARCH and GARCH Models

 Observing that squared residuals are often autocorrelated even though the residuals themselves are not, Engle (1982) set the stage for a new class of time-varying conditional volatility models with the Autoregressive Conditional Heteroskedasticity (ARCH) model. The model has inspired a huge amount of related research on its development, generalisation and application, and earned Engle the Nobel Prize in Economics in 2003.

The ARCH Model
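The standard ARCH(p) specification, with r_t = \mu + \varepsilon_t, \varepsilon_t = \sigma_t z_t and z_t \sim \text{iid}(0, 1) (the slide's own notation may differ):

\sigma_t^2 = \omega + \sum_{i=1}^{p} \alpha_i \, \varepsilon_{t-i}^2, \qquad \omega > 0, \ \alpha_i \ge 0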


 The ARCH model is thus able to capture the volatility clustering observed in asset returns. One advantage of the ARCH model is that the weights α_i can be estimated from historical data, based on, e.g., the Maximum Likelihood procedure, even though the ‘true’ volatility is never observed.
 The ARCH model captures the two most common features of real high-frequency financial asset returns, i.e., volatility clustering and heavy-tailed unconditional distributions.

The GARCH Model

 In the ARCH(p) model, past shocks from more than p periods ago have no effect on current volatility, so the order p determines how long a shock persists in volatility. For financial time series, a very high order p is typically required to capture the dependence. Bollerslev (1986) proposes a parsimonious way to handle this problem, introducing the Generalised Autoregressive Conditional Heteroskedasticity (GARCH) model.
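The standard GARCH(1,1) specification (the slide's own equations may be written differently):

\sigma_t^2 = \omega + \alpha \, \varepsilon_{t-1}^2 + \beta \, \sigma_{t-1}^2, \qquad \omega > 0, \ \alpha, \beta \ge 0, \ \alpha + \beta < 1

with long-run (unconditional) variance \omega / (1 - \alpha - \beta).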
 It follows that the GARCH model is also an exponentially weighted moving average process. However, there are two major differences between the GARCH and the EWMA models. First, while the parameter λ of the EWMA process is often set ad hoc, the parameters of the GARCH process have to be estimated by rigorous statistical methods, normally using the Maximum Likelihood procedure. Second, the GARCH model allows the volatility process to eventually revert to its long-run level.
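As a hedged sketch, a GARCH(1,1) fitted by Maximum Likelihood with the third-party arch package; the series name `returns` and the scaling to percent are assumptions:

from arch import arch_model            # third-party package: pip install arch

am = arch_model(100 * returns, mean='Constant', vol='GARCH', p=1, q=1, dist='normal')
res = am.fit(disp='off')               # Maximum Likelihood estimation
print(res.params)                      # mu, omega, alpha[1], beta[1]
# long-run variance, in percent^2 units because returns were scaled by 100
long_run_var = res.params['omega'] / (1 - res.params['alpha[1]'] - res.params['beta[1]'])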

The Multivariate GARCH model


 Orthogonal GARCH
 The DCC model
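A minimal sketch of the Orthogonal GARCH idea (PCA factors combined with a univariate GARCH(1,1) per component), assuming the arch package, a demeaned (T x N) array `X` of returns and `k` retained components (all hypothetical names):

import numpy as np
from arch import arch_model

def orthogonal_garch_vcv(X, k):
    # One-step-ahead VCV from Orthogonal GARCH: PCA, then univariate GARCH per component.
    S = np.cov(X, rowvar=False)                     # sample covariance (N x N)
    eigval, eigvec = np.linalg.eigh(S)
    W = eigvec[:, np.argsort(eigval)[::-1][:k]]     # loadings of the k leading PCs (N x k)
    F = X @ W                                       # principal-component series (T x k)

    h_next = []
    for j in range(k):                              # univariate GARCH(1,1) for each component
        res = arch_model(F[:, j], mean='Zero', vol='GARCH', p=1, q=1).fit(disp='off')
        h_next.append(res.forecast(horizon=1).variance.values[-1, 0])

    return W @ np.diag(h_next) @ W.T                # recombined N x N forecast

Keeping the estimation univariate is what makes Orthogonal GARCH tractable for large N; the DCC model instead combines univariate GARCH volatilities per asset with a separate dynamic process for the correlation matrix.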
LAB Section:
 Use the file: lab_22_ShrinkageVCV.ipynb

