"The Prayer"

Ten-Step Checklist for


Advanced Risk and Portfolio Management
Attilio Meucci1
[email protected]
published2 : April 1 2011
this revision: May 12 2011
latest revision available at http://symmys.com/node/63
Abstract

We present "the Prayer", a recipe of ten sequential steps for all portfolio managers, risk managers, and algorithmic traders, across all asset classes and all investment horizons, to model and manage the P&L distribution of their positions. For each of the ten steps of the Prayer, we introduce the key concepts with precise notation; we illustrate the key concepts by means of a simple case study that can be handled with analytical formulas; we point the reader toward multiple advanced approaches to address the non-trivial practical problems of real-life risk modeling; and we highlight a non-exhaustive list of common pitfalls.

JEL Classification: C1, G11


Keywords: Quest for Invariance, conservation law of money, estimation, projection, pricing, aggregation, attribution, evaluation, optimization, invariants, risk drivers, random walk, Lévy process, autocorrelation, long memory, volatility clustering, non-parametric, Monte Carlo, panic copula, elliptical maximum likelihood, robustness, influence function, breakdown point, Bayesian, Fourier transform, full repricing, theta, delta, gamma, vega, carry, duration, convexity, liquidity, holding, portfolio weight, return, leverage, attribution, Factors on Demand, satisfaction, VaR, CVaR, Sharpe ratio, diversification, effective number of bets, utility, homogeneous risk, quadratic programming, cone programming, grid search, integer programming, combinatorial heuristics, Black-Litterman, entropy pooling, drawdown, dynamic programming, algorithmic trading, optimal execution, performance analysis, slippage, market impact, geometric linkage.

1 The author is grateful to Garli Beibi, Arlen Khodadadi, Luca Spampinato, and an anonymous referee.
2 This article appears as Meucci, A. (2011), The Prayer: Ten-Step Checklist for Advanced
Risk and Portfolio Management - The Quant Classroom by Attilio Meucci - GARP Risk
Professional, April/June 2011, p. 54-60/34-41

Contents

Introduction
P1 Quest for Invariance
P2 Estimation
P3 Projection
P4 Pricing
P5 Aggregation
P6 Attribution
P7 Evaluation
P8 Optimization
P9 Execution
P10 Ex-Post Analysis
References
Appendix

Introduction

The quantitative investment arena is populated by different players: portfolio managers, risk managers, algorithmic traders, etc. These players are further differentiated by the asset classes they cover, the different time horizons of their activities and a variety of other distinguishing features. Despite the many differences, all the above "quants" are united by the common goal of correctly modeling and managing the probability distribution of the prospective P&L of their positions.

Here we present "the Prayer", a blueprint of ten sequential steps for quants across the board to achieve their common goal, see Figure 1. By following the Prayer, quants can avoid common pitfalls and ensure that they are not missing important points in their models. Furthermore, quants are directed to areas of advanced research that extend beyond the traditional quant literature. We use the letter "P" to signify the true probability space of the buy-side P&L, which stands in contrast to the risk-neutral probability space "Q" used on the sell-side to price derivatives, see Meucci (2011b).
[Figure 1: The "Prayer": ten-step blueprint for risk and portfolio management. The flowchart chains the steps, each mapping its inputs to its outputs: 1. Quest for Invariance (risk drivers → invariants); 2. Estimation (invariants time series → invariants distribution); 3. Projection (invariants distribution → risk drivers distribution); 4. Pricing (risk drivers distribution → securities P&L distribution); 5. Aggregation (securities P&L / portfolio positions → portfolio P&L distribution); 6. Attribution (portfolio P&L distribution → factors distribution / exposures); 7. Evaluation (factors distribution / exposures → summary statistics / decomposition); 8. Optimization (satisfaction / constraints → optimal positions); 9. Execution (target / market info / book info → execution prices); 10. Ex-Post Analysis (realized P&L → allocation, selection, curve). Estimation risk enters with Step 2 and liquidity risk with Step 4.]
Below we discuss the ten steps of the Prayer. Each step is concisely encapsulated into a definition with the required rigorous notation. Then a simple case study with a portfolio of only stocks and call options illustrates the steps with analytical solutions. Within each step, we prepare the ground for, and point to, advanced research that fine-tunes the models, enhances the models' flexibility, or captures more realistic and nuanced empirical features. Each of these steps is deceptively simple at first glance. Hence, we highlight a few common pitfalls to further clarify the conceptual framework.

P1 Quest for Invariance

The "quest for invariance" is the first step of the Prayer, and the foundation of risk modeling. The quest for invariance is necessary for practitioners to learn about the future by observing the past in a stochastic environment.
Key concept. The Quest for Invariance step is the process of extracting
from the available market data the "invariants", i.e. those patterns that
repeat themselves identically and independently (i.i.d.) across time. The
quest for invariance consists of two sub-steps: identification of the risk
drivers and extraction of the invariants from the risk drivers.
The first step of the quest for invariance is to identify, for each security, the risk drivers among the market variables.

Key concept. The risk drivers of a given security are a set of random variables

$$ Y_t \equiv (Y_{t,1}, \ldots, Y_{t,D})' \tag{1} $$

that satisfy the following two properties: a) the risk drivers $Y_t$, together with the security terms and conditions, completely specify the security price at any given time t; b) the risk drivers $Y_t$, although not i.i.d., follow a stochastic process that is homogeneous across time, in that it is impossible to ascertain the sequential order of the realizations of the risk drivers from the study of the risk drivers' past time series $\{y_t\}_{t=1,\ldots,T}$.
The risk drivers are variables that fully determine the price of a security, but in general they are not the price itself, because the price can be non-homogeneous across time: think for instance of a zero-coupon bond, whose price converges to the face value as the maturity approaches. Homogeneity ensures that we can apply statistical techniques to the observed time series of the risk drivers $\{y_t\}_{t=1,\ldots,T}$ and project future distributions. Note that we use the standard convention where lower-case letters such as $y_t$ denote realized variables, whereas upper-case letters such as $Y_t$ denote random variables.

Illustration. Consider first the asset class of stocks. Denote by $S_t$ the random price of one stock at the generic time t. The log-price of the stock $Y_t \equiv \ln S_t$, possibly adjusted by reinvesting the dividends, is not i.i.d. across time. However, the dynamics of the stock log-price is homogeneous across time: it is not possible to isolate any special period in the stock's future evolution that distinguishes its price pattern from a nearby period. Hence, to project into the future, the random variable $Y_t \equiv \ln S_t$ is a suitable candidate risk driver for the stock price $S_t$.
Next, consider a second asset class, namely stock options. Denote by $C_{t,k,e}$ the random price of a European call option on the stock, where k is a given strike and e is the given expiry date, or time of expiry. The call price, or its log-price, is not a risk driver, because the presence of the expiry date breaks the time homogeneity in the statistical behavior of the call option price.

In order to identify the risk drivers behind the call option, we transform the price into an equivalent, but homogeneous, variable, namely the implied volatility at a given time to expiry. More precisely, consider the Black-Scholes pricing formula

$$ C_{t,k,e} \equiv c^{BS}(\ln S_t - \ln k,\ \sigma_t,\ \tau_t), \tag{2} $$

where $\tau_t \equiv e - t$ is the time to expiry, $\sigma_t$ is the yet-to-be-defined implied volatility for that time to expiry, and $c^{BS}$ is the Black-Scholes formula

$$ c^{BS}(m, \sigma, \tau) \equiv k\, e^{m}\, \Phi\Big(\frac{m + (r + \sigma^2/2)\,\tau}{\sigma\sqrt{\tau}}\Big) - k\, e^{-r\tau}\, \Phi\Big(\frac{m + (r - \sigma^2/2)\,\tau}{\sigma\sqrt{\tau}}\Big), \tag{3} $$

with $\Phi$ the standard normal cdf and r the risk-free rate. At each time t, the price $C_{t,k,e}$ in (2) is observable, and so are $S_t$ and $\tau_t$. Therefore, the option formula (2) implies a value for $\sigma_t$, which for this reason is called implied volatility.

The implied volatility for a given time to expiry, or better, the logarithm of the implied volatility $\ln \sigma_t$, displays a homogeneous behavior through time and thus is a good candidate risk driver for the option. From the option formula (2) we observe that the implied volatility alone is not sufficient to determine the call price in the future: in addition, the log-price $\ln S_t$ and the time to expiry $\tau_t$ are needed. Since the time to expiry is deterministic, the call option requires two risk drivers to fully determine its price

$$ Y_t \equiv \begin{pmatrix} Y_{s,t} \\ Y_{\sigma,t} \end{pmatrix} \equiv \begin{pmatrix} \ln S_t \\ \ln \sigma_t \end{pmatrix}. \tag{4} $$
The second step of the quest for invariance is the extraction of the invariants,
i.e. the repeated patterns, from the homogeneous series of the risk drivers.

Key concept. The invariants are the shocks that steer the stochastic process of the risk drivers $Y_t$ over a given time step $t \to t+1$:

$$ \epsilon_{t\to t+1} \equiv (\epsilon_{1,t\to t+1}, \ldots, \epsilon_{Q,t\to t+1})' \tag{5} $$

The invariants satisfy the following two properties: a) they are identically and independently distributed (i.i.d.) across different time steps; b) they become known at the end of the step, i.e. at time t+1.

Note that each of the D risk drivers (1) can be steered by one or more invariants, therefore $Q \geq D$.
To determine whether a variable is i.i.d. across time, the easiest test is to
scatter-plot the series of the variable versus its own lags. If the scatter-plot,
or better, its location-dispersion ellipsoid, is a circle, then the variable is a
good candidate for an invariant. For more on this and related tests see Meucci
(2005a).
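A rough Python sketch of this test follows (illustrative, not from the original article; function and variable names are our own assumptions). The lag-one correlation summarizes the orientation of the location-dispersion ellipsoid of the series plotted against its own lag: near zero suggests a candidate invariant.

```python
import numpy as np

def invariance_check(x, lag=1):
    """Crude i.i.d. diagnostic: correlation of a series with its own lag.
    For an invariant the lag-scatter should be circular, i.e. this
    correlation should be close to zero."""
    a, b = x[:-lag], x[lag:]             # series vs. its own lag
    return np.corrcoef(a, b)[0, 1]       # off-diagonal of the ellipsoid

# toy check: i.i.d. shocks pass, their cumulative sum (a random walk) fails
rng = np.random.default_rng(0)
eps = rng.normal(0.0, 0.01, 1000)        # i.i.d. shocks
log_price = np.cumsum(eps)               # random walk: not an invariant
print(invariance_check(eps))             # ~0: candidate invariant
print(invariance_check(log_price))       # ~1: not an invariant
```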
Being able to identify the invariants that steer the dynamics of the risk
drivers is of crucial importance because it allows us to project the market randomness to the desired investment horizon. Often, practitioners make the mistake of projecting variables they have on hand, most notably returns, instead
of the invariants. This, of course, leads to incorrect measurement of risk at the
horizon, and thus to suboptimal trading decisions.
The stochastic process for the risk drivers $Y_t$ is steered by the randomness of the invariants $\epsilon_{t\to t+1}$. The most basic dynamics, yet the most statistically robust, connecting the invariants $\epsilon_{t\to t+1}$ with the risk drivers $Y_t$ is the random walk

$$ Y_{t+1} = Y_t + \epsilon_{t\to t+1}. \tag{6} $$

More advanced processes for the risk drivers account for such features as autocorrelation, stochastic volatility, and long memory. We refer to Meucci (2009b) for a review of these more general processes and how they relate to random walks and invariants, both in discrete and in continuous time, with theory, case studies, and code. We refer to Meucci (2009c) for the multivariate case, and how it relates to cointegration and statistical arbitrage.
Illustration. Consider our first asset class example, the stock. As discussed, the only risk driver is the log-price $Y_t \equiv \ln S_t$. The above scatter-plot generally indicates that the compounded returns $\ln(S_{t+1}/S_t)$ are approximately invariants

$$ \epsilon_{t\to t+1} \equiv \ln S_{t+1} - \ln S_t. \tag{7} $$

Therefore the risk driver $Y_t \equiv \ln S_t$ follows a random walk, as in (6).
Now, consider our second asset class, the call option example. The empirical scatter-plot shows that the changes of the log-implied volatility are approximately i.i.d. across time. Furthermore, our analysis of the stock example (7) implies that the changes of the log-price are invariants. Therefore, using notation similar to (4), we obtain

$$ \epsilon_{t\to t+1} \equiv \begin{pmatrix} \epsilon_{s,t\to t+1} \\ \epsilon_{\sigma,t\to t+1} \end{pmatrix} \equiv \begin{pmatrix} \ln S_{t+1} - \ln S_t \\ \ln \sigma_{t+1} - \ln \sigma_t \end{pmatrix}. \tag{8} $$

This is also a random walk as in (6). Notice that this is a multivariate random walk.
The outcome of the quest for invariance, i.e. the set of risk drivers and their corresponding invariants, depends on the asset class and on the time scale of our analysis. For instance, for interest rates a simple random walk assumption (6) can be viable for time steps of one day, but for time steps of the order of one year mean-reversion becomes important. Similarly, for stocks at high frequency, i.e. at steps of the order of fractions of a second, the very time step becomes a random variable, which calls for its own invariant. We refer to Meucci (2009b) for a review.
Pitfalls. "...The random walk is a stationary process...". A random walk,
such as Yt in (6) is not stationary. The steps of the random walk tt+1 are
stationary, and actually they are the most stationary of processes, namely invariants.
"...The random walk is too crude an assumption...". Once the data is suitably transformed into risk drivers, the random walk assumption is very hard to
beat in practice, see Meucci (2009b).
"...Returns are invariants ...". Returns are not invariants in general. In our
call option example, the past returns of the call option price do not teach us
anything about the future returns of the option.

P2 Estimation

As highlighted in the Quest for Invariance Step P1, the stochastic behavior of the risk drivers is steered by the "invariants". Once the invariants have been identified, their distribution can be estimated from empirical analysis and from other sources of information.
Because of the invariance property, the distribution of the invariants does not depend on the specific time t. We represent this distribution in terms of its probability density function (pdf) $f_\epsilon$. Note that, although the invariants are distributed independently across time, multiple invariants can be correlated with each other over the same time step. Therefore $f_\epsilon$ needs to be modeled as a multivariate distribution.
Key concept. The Estimation Step is the process of fitting a distribution $f_\epsilon$ to the observed past realizations $\{\epsilon_{t\to t+1}\}$ of the invariants and, optionally, additional information $i_T$ that is available at the current time T:

$$ \{\epsilon_{t\to t+1}\}_{t=1,\ldots,T},\ i_T \ \mapsto\ f_\epsilon. \tag{9} $$

Simple estimation approaches only process the time series of the invariants $\{\epsilon_{t\to t+1}\}$, but various advanced techniques also process information $i_T$ such as market-implied forward-looking quantities, subjective Bayesian priors, etc.
The simplest of all estimators of the invariants' distribution is the non-parametric empirical distribution, justified by the law of large numbers, i.e. "i.i.d. history repeats itself". The empirical distribution assigns an equal probability 1/T to each of the past observations in the series $\{\epsilon_t\}_{t=1,\ldots,T}$ of the historical scenarios. Alternatively, for the distribution of the invariants one can make parametric assumptions, such as multivariate normal, elliptical, etc.
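In scenario terms, the empirical estimator is just the data matrix paired with flat probabilities. A minimal sketch (illustrative; it assumes a T x Q NumPy array of observed invariants):

```python
import numpy as np

def empirical_distribution(invariants):
    """Non-parametric representation of f_eps: each historical scenario
    (a row of the T x Q matrix of observed invariants) is assigned the
    equal probability 1/T."""
    T = invariants.shape[0]
    return invariants, np.full(T, 1.0 / T)
```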
Illustration. To illustrate the parametric approach, we consider our example (8), where the invariants are the changes in moneyness and the changes in log-implied volatility from t to t+1. We can assume that the distribution $f_\epsilon$ is bivariate normal, with $2 \times 1$ expectation vector $\mu \equiv (\mu_s, \mu_\sigma)'$ and $2 \times 2$ covariance matrix $\sigma^2$, as below

$$ \begin{pmatrix} \epsilon_{s,t\to t+1} \\ \epsilon_{\sigma,t\to t+1} \end{pmatrix} \sim \mathrm{N}\left( \begin{pmatrix} \mu_s \\ \mu_\sigma \end{pmatrix},\ \begin{pmatrix} \sigma_s^2 & \rho\,\sigma_s\sigma_\sigma \\ \rho\,\sigma_s\sigma_\sigma & \sigma_\sigma^2 \end{pmatrix} \right). \tag{10} $$

The expectation can be estimated with the sample mean $\hat{\mu} \equiv \frac{1}{T}\sum_t \epsilon_t$, and the covariance with the sample covariance $\hat{\sigma}^2 \equiv \frac{1}{T}\sum_t (\epsilon_t - \hat{\mu})(\epsilon_t - \hat{\mu})'$, where $'$ denotes the transpose.
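A minimal sketch of these two estimators (illustrative Python; `eps` is assumed to be the T x 2 matrix of invariant scenarios):

```python
import numpy as np

def fit_invariants(eps):
    """Sample mean and sample covariance of the T x Q invariant scenarios,
    i.e. the maximum-likelihood fit under the normal assumption (10)."""
    mu_hat = eps.mean(axis=0)
    centered = eps - mu_hat
    sigma2_hat = centered.T @ centered / eps.shape[0]   # 1/T, not 1/(T-1)
    return mu_hat, sigma2_hat
```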
In large multivariate markets it is important to impose structure on the correlations of the distribution of the invariants $f_\epsilon$. This is often achieved in practice by means of linear factor models. Linear factor models are an essential tool of risk and portfolio management, as they play a key role in the Estimation Step P2, as well as, among others, in the Attribution Step P6 and the Optimization Step P8. We refer to Meucci (2010h) for a thorough review of theory, code, empirical results, and pitfalls of linear factor models in these and other contexts.
A highly flexible approach to estimating and modeling distributions is provided by the copula-marginal decomposition, see e.g. Cherubini, Luciano, and Vecchiato (2004). In order to use this decomposition in practice, as with any alternative outcome of the estimation process, the only feasible option is to represent distributions in terms of historical scenarios, as above, or of Monte Carlo-generated scenarios, see Meucci (2011a).
An important advanced topic is estimation risk. It is important to emphasize that, regardless of how advanced an estimation technique is applied to model the joint distribution of the invariants, the final outcome will be an estimate, i.e. only an approximation, of the true, unknown distribution of the invariants $f_\epsilon$. Estimation risk is the risk stemming from using an estimate of the invariants' distribution in the process of managing the portfolio's positions, instead of the true, unknown distribution of the invariants.
Estimation risk, which first appears here in the context of the Estimation Step P2, affects the cornerstones of risk and portfolio management, namely the Evaluation Step P7, the Optimization Step P8, and the Execution Step P9. We will explore in those steps techniques that attempt to address estimation risk, which include scenario analysis, Fully Flexible Probabilities, robust estimation and optimization, multivariate Bayesian statistics, etc.
Pitfall. "...In order to estimate the return of a bond I can analyze the time
series of the past bond returns...". The price of bonds with short maturity will
soon converge to its face value. As a result, the returns are not invariants, and
thus their past history is not representative of their future behavior. Estimation
must always be linked to the quest for invariance.
"...In markets with a large number Q of invariants I use a cross-sectional
linear factor model on returns with K Q factors. This reduces the covariance
parameters to be estimated from Q2 /2 to K 2 /2 + Q.". A cross-sectional
factor model has the same number of unknown quantities as a time-series model.
The cross-sectional factors are typically autocorrelated. The residuals in both
cross-sectional and time-series models are not truly idiosyncratic, as they display
non-zero correlation with each other. For more on these and related pitfalls for
linear factor models, see Meucci (2010h).

P3 Projection

Ultimately we are interested in the value of our positions at the investment horizon. In order to determine the distribution of our positions, we must first determine the distribution of the risk drivers at the investment horizon. This distribution, in turn, is obtained by projecting to the horizon the invariants' distribution, obtained in the Estimation Step P2.
We denote the current time as $t \equiv T$ and the generic investment horizon as $t \equiv T + \tau$, where $\tau$ is the distance to the horizon, say, one week.

Key concept. The Projection Step is the process of obtaining the distribution at the investment horizon $T + \tau$ of the relevant risk drivers $Y_{T+\tau}$ from the distribution of the invariants and additional information $i_T$ available at the current time T:

$$ f_\epsilon,\ i_T \ \mapsto\ f_{Y_{T+\tau}}. \tag{11} $$

In order to project the risk drivers we must go back to their connection with the invariants, analyzed in the Quest for Invariance Step P1.

If the drivers evolve as a random walk (6), then by recursion of the random walk definition, $Y_{t+2} = Y_{t+1} + \epsilon_{t+1\to t+2} = Y_t + \epsilon_{t\to t+1} + \epsilon_{t+1\to t+2}$, we obtain that the risk drivers at the horizon $Y_{T+\tau}$ are the current observable value $y_T$ plus the sum of all the intermediate invariants

$$ Y_{T+\tau} = y_T + \epsilon_{T\to T+1} + \cdots + \epsilon_{T+\tau-1\to T+\tau}. \tag{12} $$

Using the independence of the invariants, (12) yields for the variance

$$ \mathrm{V}\{Y_{T+\tau}\} = \mathrm{V}\{\epsilon_{T\to T+1}\} + \cdots + \mathrm{V}\{\epsilon_{T+\tau-1\to T+\tau}\}. \tag{13} $$

Since all the $\epsilon$'s in (12) are i.i.d., all the variances in (13) are equal, and thus we obtain the well-known "square-root rule" for the projection of the standard deviation, $\mathrm{Sd}\{Y_{T+\tau}\} = \sqrt{\tau}\,\mathrm{Sd}\{\epsilon\}$. Note that we did not make any distributional assumption, such as normality, to derive the square-root rule.
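The rule is easy to check numerically. A small Monte Carlo sketch follows (illustrative, not from the original article; the Student-t shocks are chosen deliberately non-normal to make the point):

```python
import numpy as np

rng = np.random.default_rng(1)
tau, n_paths = 20, 100_000
# deliberately non-normal invariants: the rule needs only finite variance
eps = rng.standard_t(df=5, size=(n_paths, tau))
horizon = eps.sum(axis=1)                 # Y_{T+tau} - y_T as in (12)
print(horizon.std(), np.sqrt(tau) * eps[:, 0].std())   # nearly equal
```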
Simple results to project other moments under the random walk assumption
(6), such as skewness and kurtosis, can be found in Meucci (2010a) and Meucci
(2010i). Projecting the whole distribution is more challenging, but can still be
accomplished using Fourier transform techniques, see Albanese, Jackson, and
Wiberg (2004).
In the more general case where the drivers do not evolve as a random walk
(6), the projection can be obtained by redrawing scenarios according to the given
dynamics, see e.g. Meucci (2010b) for the parametric case and Paparoditis and
Politis (2009) for the empirical distribution.
Illustration. In our oversimplified normal example the projection can be performed analytically. Indeed, from the normal distribution of the invariants (10) it follows, from the preservation of normality under sums of independent normal variables, that the sum of the invariants is normal, $\epsilon_{T\to T+\tau} \sim \mathrm{N}(\tau\mu, \tau\sigma^2)$. Thus we obtain for the distribution of the two risk drivers at the horizon

$$ \begin{pmatrix} \ln S_{T+\tau} \\ \ln \sigma_{T+\tau} \end{pmatrix} \sim \mathrm{N}\left( \begin{pmatrix} \ln s_T \\ \ln \sigma_T \end{pmatrix} + \tau \begin{pmatrix} \mu_s \\ \mu_\sigma \end{pmatrix},\ \tau \begin{pmatrix} \sigma_s^2 & \rho\,\sigma_s\sigma_\sigma \\ \rho\,\sigma_s\sigma_\sigma & \sigma_\sigma^2 \end{pmatrix} \right). \tag{14} $$

Pitfall. "...To project the market I assume normality and therefore I multiply the standard deviation by the square root of the horizon...". The square
root rule is true for all random walks with finite-variance invariants, regardless
of their distribution. However, the square-root rule only applies to the standard deviation and does not allow to determine all the other moments of the
distribution, unless the distribution is normal.

P4 Pricing

Now that we have the distribution of the risk drivers at the horizon $Y_{T+\tau}$ from the Projection Step P3, we are ready to compute the distribution of the security prices in our book. Recall that the value of the securities at the investment horizon, by design, is fully determined by a) the risk drivers at the horizon $Y_{T+\tau}$ and b) non-random information $i_T$ known at the current time, such as terms and conditions:

$$ P_{T+\tau} = p(Y_{T+\tau}; i_T). \tag{15} $$

Then, given the security price at the horizon $P_{T+\tau}$, the security P&L from the current date to the horizon, $\Pi_{T\to T+\tau}$, is the difference between the horizon value (15), which is a random variable, and the current value, which is observable and thus part of the available information set $i_T$. Thus the horizon profit function reads

$$ \Pi_{T\to T+\tau} = p(Y_{T+\tau}; i_T) - p_T. \tag{16} $$

Note that the P&L must be adjusted for coupons and dividends, either by reinvesting them in the pricing function (15), or by an additional cash-flow term in (16).
Key concept. The Pricing Step is the process of obtaining the distribution of the securities' P&Ls over the investment horizon from the distribution of the risk drivers at the horizon and current information, such as terms and conditions, by means of the pricing function:

$$ f_{Y_{T+\tau}},\ i_T \ \mapsto\ f_{\Pi_{T\to T+\tau}}. \tag{17} $$

At times it is convenient to approximate the pricing function (15) by its Taylor expansion

$$ p(y; i_T) = p(\bar{y}; i_T) + (y - \bar{y})'\,\partial_y p(\bar{y}; i_T) + \tfrac{1}{2}\,(y - \bar{y})'\,\partial_{yy} p(\bar{y}; i_T)\,(y - \bar{y}) + \cdots \tag{18} $$

where $\bar{y}$ is a significant value of the risk drivers, often the current value $\bar{y} \equiv y_T$; $\partial_y p(\bar{y}; i_T)$ denotes the vector of the first derivatives; and $\partial_{yy} p(\bar{y}; i_T)$ denotes the matrix of the second cross-derivatives.

Depending on its end users, the coefficients in the Taylor approximation (18) are known under different names. In the derivatives world, they are called the "Greeks": theta, delta, gamma, vega, etc. In the fixed-income world the coefficients are called carry, duration and convexity.

Illustration. In our stock example, the single risk driver is the log-price $Y_t \equiv \ln S_t$. Therefore the horizon pricing function (15) reads $p(y) = e^y$. Its Taylor approximation reads $p(y) \approx e^{y_T}(1 + y - y_T)$. Then the P&L of the stock (16) reads

$$ \Pi_{s,T\to T+\tau} \approx s_T(\ln S_{T+\tau} - \ln s_T). \tag{19} $$

Hence, from the distribution of the risk drivers (14), it follows that the distribution of the stock P&L is approximately normal

$$ \Pi_{s,T\to T+\tau} \sim \mathrm{N}(\tau\, s_T \mu_s,\ \tau\, s_T^2 \sigma_s^2). \tag{20} $$

For our call option with strike k and expiry e, the risk drivers are the log-price $Y_{s,t} \equiv \ln S_t$ and the log-implied volatility $Y_{\sigma,t} \equiv \ln \sigma_t$, as in (4). The horizon pricing function (15) follows from the Black-Scholes formula (2) and reads

$$ p^{BS}(y_s, y_\sigma; i_T) = c^{BS}(y_s - \ln k,\ e^{y_\sigma},\ e - T - \tau). \tag{21} $$

When the investment horizon is much shorter than the time to expiry of the option, i.e. $\tau \ll e - T$, the following first-order Taylor approximation suffices to proxy the price: $p^{BS}(y_s, y_\sigma; i_T) \approx p^{BS}(y_{s,T}, y_{\sigma,T}; i_T) + \delta_{BS,T}\,(y_s - y_{s,T}) + v_{BS,T}\,(y_\sigma - y_{\sigma,T})$, where $\delta_{BS,T} \equiv \partial p^{BS}(y_{s,T}, y_{\sigma,T})/\partial y_s$ is the option's current Black-Scholes "delta" and $v_{BS,T} \equiv \partial p^{BS}(y_{s,T}, y_{\sigma,T})/\partial y_\sigma$ is the option's current Black-Scholes "vega". Then the P&L of the call option (16) reads

$$ \Pi_{c,T\to T+\tau} \approx (\ln S_{T+\tau} - \ln s_T)\,\delta_{BS,T} + (\ln \sigma_{T+\tau} - \ln \sigma_T)\,v_{BS,T}. \tag{22} $$

We stated in the distribution of the risk drivers (14) that the log-changes in (22) are jointly normal. Therefore the distribution of the P&L is normal, because a linear combination of jointly normal variables is normal:

$$ \Pi_{c,T\to T+\tau} \sim \mathrm{N}(\tau\mu_c,\ \tau\sigma_c^2), \tag{23} $$

where

$$ \mu_c \equiv \delta_{BS,T}\,\mu_s + v_{BS,T}\,\mu_\sigma \tag{24} $$

$$ \sigma_c^2 \equiv \delta_{BS,T}^2\,\sigma_s^2 + v_{BS,T}^2\,\sigma_\sigma^2 + 2\rho\,\delta_{BS,T}\, v_{BS,T}\,\sigma_s\sigma_\sigma. \tag{25} $$

Notice how the expectation of the call option's P&L depends on the expectations of the stock compounded return and of the log-changes in implied volatility, multiplied by the horizon $\tau$. A similar relationship holds for the standard deviation of the call's P&L.
It is worth noticing that pricing becomes a surprisingly easy task when the distribution of the risk drivers is represented in terms of scenarios, as (16) is simply applied scenario-by-scenario by inputting the discrete realized risk driver values.
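A sketch of scenario-by-scenario full repricing for the call follows (illustrative Python assuming NumPy and SciPy; it implements the Black-Scholes formula (3) and the horizon pricing (21), with hypothetical variable names):

```python
import numpy as np
from scipy.stats import norm

def bs_call(m, sigma, tau, k, r=0.0):
    """Black-Scholes call price as in (3): m = ln S - ln k, time to expiry tau."""
    d1 = (m + (r + sigma**2 / 2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return k * (np.exp(m) * norm.cdf(d1) - np.exp(-r * tau) * norm.cdf(d2))

def call_pnl_scenarios(y_s, y_sigma, k, e_minus_T, tau, c_now):
    """Pricing step (16): horizon price minus current price, evaluated
    on each joint scenario (y_s, y_sigma) of the risk drivers at T+tau."""
    horizon_price = bs_call(y_s - np.log(k), np.exp(y_sigma),
                            e_minus_T - tau, k)
    return horizon_price - c_now
```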
We conclude the Pricing Step by highlighting two problems. First, a data and analytics problem: in many companies, pricing functions with all the terms and conditions might not be readily available.

Second, the problem of liquidity risk. The Pricing Step assumes the existence of one single price, which is fully determined by the risk drivers, $P_t = p(Y_t; i_T)$, as in (15). In reality, for any security there exist multiple possible prices, which represent supply and demand. The actual execution price depends on supply and demand, on the size and style of the transaction, and on other factors. As we will see, liquidity risk, which first comes to the surface here in the Pricing Step, affects with increasing intensity the Evaluation Step P7, the Optimization Step P8, and the Execution Step P9. We will discuss in those steps methodologies to address liquidity risk.
Pitfall. "...The delta approximation gives rise to parametric risk models
that assume normality...". The Taylor approximation of the pricing function
can be paired with any distributional assumption, not necessarily normal, on
the risk drivers.
"...The goodness of the Taylor approximation depends on the specific security...". The goodness of the Taylor approximation depends on the security and
on the investment horizon: due to the square-root propagation of the standard
deviation (13), the longer the horizon, the wider the distribution of the risk
drivers. Therefore the approximation worsens with longer horizons.

P5 Aggregation

The Pricing Step P4 yields the projected P&Ls of the single securities. The Aggregation Step generates the P&L distribution for a portfolio of multiple securities.

Consider a market of N securities, whose P&Ls $\Pi \equiv (\Pi_1, \ldots, \Pi_N)'$ are obtained from the horizon pricing function (16). Notice that for simplicity we drop the subscript $T\to T+\tau$, because it is understood that from now on the Prayer focuses on the projected P&L between now and the future investment horizon.

Consider a portfolio, which is defined by the holdings in each position at the beginning of the period, $h \equiv (h_1, \ldots, h_N)'$. The holdings are the number of shares for stocks, the number of standardized-notional contracts for swaps, the number of standardized-face-value bonds for bonds, etc.
The portfolio P&L is determined by the "conservation law of money": the
total portfolio P&L is the sum of the holding in each security times the P&L
generated by each security
P
h = N
(26)
n=1 hn n ,
where we have assumed no rebalancing during the period.

Key concept. The Aggregation Step is the process of computing the distribution of the portfolio P&L $\Pi_h$ by aggregating the joint distribution of the securities' P&Ls with the given holdings:

$$ f_\Pi,\ h \ \mapsto\ f_{\Pi_h}. \tag{27} $$

Given one single scenario for the risk drivers $Y_{T+\tau}$, and thus for the securities' P&Ls in (16), the portfolio P&L $\Pi_h$ is immediately determined by the conservation law of money (26) as the sum of the single-security P&Ls in that scenario.

However, to arrive at the whole continuous distribution of the portfolio P&L $f_{\Pi_h}$ we must compute multiple integrals

$$ f_{\Pi_h}(x)\,dx = \int_{h'\pi \in dx} f_\Pi(\pi_1, \ldots, \pi_N)\, d\pi_1 \cdots d\pi_N, \tag{28} $$

which is in general a very challenging operation. On the other hand, the computation of the aggregate P&L distribution becomes trivial when the market is represented in terms of scenarios, as the conservation law of money (26) is simply applied scenario-by-scenario.
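In code, scenario-based aggregation is a single matrix-vector product. A minimal sketch (illustrative; `pnl_scenarios` is an assumed J x N matrix of joint P&L scenarios from the Pricing Step):

```python
import numpy as np

def aggregate(pnl_scenarios, holdings):
    """Conservation law of money (26) applied scenario-by-scenario:
    each of the J joint scenarios of the N securities' P&Ls maps to
    one scenario of the portfolio P&L."""
    return pnl_scenarios @ holdings      # (J, N) @ (N,) -> (J,)
```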
Illustration. In our example with a stock and a call option, whose P&Ls are normally distributed, suppose we hold a positive or negative number $h_s$ of shares of the stock and a positive or negative number $h_c$ of the call. Then the total P&L follows from applying the aggregation rule (26) to the stock P&L (19) and the option P&L (22) and thus reads

$$ \Pi_h \approx h_s s_T \ln\frac{S_{T+\tau}}{s_T} + h_c\Big(\delta_{BS,T}\,\ln\frac{S_{T+\tau}}{s_T} + v_{BS,T}\,\ln\frac{\sigma_{T+\tau}}{\sigma_T}\Big) = (h_s s_T + h_c\,\delta_{BS,T})\,\ln\frac{S_{T+\tau}}{s_T} + h_c\, v_{BS,T}\,\ln\frac{\sigma_{T+\tau}}{\sigma_T}. \tag{29} $$

Thus, from the joint normal assumption (14) and the fact that sums of jointly normal variables are normal, the total portfolio P&L is normally distributed. Isolating the horizon $\tau$ we obtain

$$ \Pi_h \sim \mathrm{N}(\tau\mu_h,\ \tau\sigma_h^2), \tag{30} $$

where

$$ \mu_h \equiv (h_s s_T + h_c\,\delta_{BS,T})\,\mu_s + h_c\, v_{BS,T}\,\mu_\sigma \tag{31} $$

$$ \sigma_h^2 \equiv (h_s s_T + h_c\,\delta_{BS,T})^2\,\sigma_s^2 + h_c^2\, v_{BS,T}^2\,\sigma_\sigma^2 + 2\rho\,(h_s s_T + h_c\,\delta_{BS,T})\, h_c\, v_{BS,T}\,\sigma_s\sigma_\sigma. \tag{32} $$

Notice how both expectation and variance follow from the projection to the horizon of the invariants' distribution (10).
Above we described the Aggregation Step in full. However, this topic is not complete without comparing the aggregation of the P&L with an equivalent, more popular, yet more error-prone formulation in terms of returns.

The reader is probably familiar with the notion of returns, which allow for performance comparisons across different securities, and with portfolio weights. The return is the ratio of the P&L over the current price, $R_{T\to T+\tau} \equiv \Pi_{T\to T+\tau}/p_T$. The weight of a security is its relative market value within the portfolio, $w_n \equiv h_n p_{n,T} / \sum_m h_m p_{m,T}$, and the weights satisfy the "pie-chart" rule $\sum_n w_n = 1$.

The conservation law of money (26) becomes easier to interpret in terms of returns and weights, as the total portfolio return is the weighted average of the single-security returns

$$ R_h = \sum_{n=1}^{N} w_n R_n, \tag{33} $$
where we dropped the horizon subscript for simplicity.


In the Prayer, we refrain from conceptualizing the aggregation and the subsequent steps in terms of returns, and we rely on returns only for interpretation
purposes, for the following reasons.
First, P&L and holdings are always unequivocal, whereas returns and weights
are subjective. Indeed, for leveraged securities, such as swaps and futures, the
definition of returns and weights is not straightforward. In these circumstances
we need to introduce a subjective "basis" denominator d known at the beginning of the return period, such that the return R /d is always defined, and
so is the weight, see Meucci (2010f).
Second, returns are often confused with the invariants, and thus incorrectly
used for estimation.
Third, the linear returns (pT + pT ) /pT which appear in the aggregation
rule (33) are often confused with the compounded returns ln (pT + /pT ), which
do not satisfy the aggregation rule.
Pitfall. "...Returns are invariants. Therefore we can estimate their distribution from their past realizations and aggregate this distribution to the portfolio
level using the weights...". Only in asset classes such as stocks do the concepts
of invariant and return dangerously overlap. Furthermore, even for stocks, the
projection does not apply directly to the returns, and thus one has to follow all
the steps of the Prayer, see Meucci (2010e).

P6 Attribution

With the Aggregation Step P5, we have arrived at the projected portfolio P&L distribution. In order to assess, manage, and hedge a portfolio with holdings $h \equiv (h_1, \ldots, h_N)'$, it is important to ascertain the sources of risk that affect it. Given the distribution of the projected portfolio P&L, we would like to identify a parsimonious set of relevant factors $Z \equiv (Z_1, \ldots, Z_K)'$ that drive the portfolio P&L and whose joint distribution with the portfolio P&L, $f_{\Pi_h,Z}$, is known.
More specifically, because the identification of the factors should be actionable and easy to interpret, the attribution should be linear. Thus, the attribution is defined by coefficients $b_h \equiv (b_{h,1}, \ldots, b_{h,K})'$, as follows

$$ \Pi_h = \sum_{k=1}^{K} b_{h,k} Z_k. \tag{34} $$

Note that the attribution to arbitrary factors in general gives rise to a portfolio-specific residual. The formulation (34) covers this case, by setting such residual as one of the factors $Z_k$, with attribution coefficient $b_{h,k} \equiv 1$.

Key concept. The Attribution Step decomposes the projected portfolio P&L linearly into a set of K relevant risk factors Z, yielding the K portfolio-specific exposures $b_h$:

$$ f_{\Pi_h,Z} \ \mapsto\ b_h. \tag{35} $$
The relevant question is which attribution factors Z to use. Naturally, different intentions of the trader or portfolio manager call for different choices of attribution factors.

The most trivial attribution assigns the projected portfolio P&L back to the contributions from each security: $Z_k \equiv \Pi_k$ is the projected P&L of the generic k-th security, $b_{h,k} \equiv h_k$ are the holdings of the k-th security in the portfolio, and the number of factors is $K \equiv N$, the number of securities. Then the attribution equation (34) becomes the conservation law of money (26).

If on the other hand the trader wishes to hedge a given risk, say volatility risk, then he will choose as a factor $Z_k$ the projected P&L $\Pi_k$ of a truly actionable instrument, such as a variance swap, which might or might not have been part of the original portfolio.

Alternatively, the portfolio manager might wish to monitor the exposure to a given risk factor, without the need to hedge it. If for instance the manager is interested in the total "vega" of his portfolio, then he will use changes in implied volatility as one of the risk factors.

Furthermore, in case there exist too many possible factors or hedging instruments, the manager will want to express his portfolio as a function of only those few factors that truly affect the P&L.
Notice that (34) is a portfolio-specific, top-down linear factor model. The flexible choice of the optimal attribution factors Z and of the optimal exposures $b_h$ under flexible constraints, which defines this linear factor model, along with its connections with the linear factor models introduced in the Estimation Step P2, is the spirit of the "Factors on Demand" approach in Meucci (2010c).
Illustration. In our stock and option example, we look at a simple attribution (34) to the original sources of risk. Accordingly, we set as attribution factors the stock compounded return $Z_s \equiv \ln(S_{T+\tau}/s_T)$ and the implied volatility log-change $Z_\sigma \equiv \ln(\sigma_{T+\tau}/\sigma_T)$. Thus, we have $K \equiv 2$ factors.

From the expression of the portfolio P&L (29) we immediately obtain

$$ \Pi_h = b_{h,s} Z_s + b_{h,\sigma} Z_\sigma, \tag{36} $$

where the total exposures to $Z_s$ and $Z_\sigma$ read respectively

$$ b_{h,s} \equiv h_s s_T + h_c\,\delta_{BS,T}, \qquad b_{h,\sigma} \equiv h_c\, v_{BS,T}. \tag{37} $$
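When exposures are not available analytically as in (37), they can be recovered from joint scenarios. A hedged sketch follows (an ordinary least-squares regression on scenarios; this is only one simple instance of the flexible choices allowed by Factors on Demand, not the full approach):

```python
import numpy as np

def attribute(pnl_h, Z):
    """Regress the J portfolio-P&L scenarios on the J x K factor scenarios.
    The OLS coefficients are the exposures b_h in (34); the residual is
    carried along as an extra factor with unit exposure."""
    X = np.column_stack([np.ones(len(Z)), Z])    # intercept absorbs the mean
    coeffs, *_ = np.linalg.lstsq(X, pnl_h, rcond=None)
    residual = pnl_h - X @ coeffs
    return coeffs[1:], residual
```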

Pitfall. "...If I use a factor model to estimate the returns distribution of


some stocks and I want my portfolio to be neutral to a given factor, I simply make
sure that the exposure to that factor is zero in my portfolio...". Ensuring a nullexposure coecient for one factor does not guarantee immunization, because
the given factor is in general correlated with other factors. To provide full
immunization we must resort to Factors on Demand.

P7 Evaluation

Up to this step, we have obtained the projected distribution of the P&L $\Pi_h$ of a generic portfolio with holdings h, and attributed it to relevant risk factors Z. In the Evaluation Step, the goal is to compare the P&L distribution of the current portfolio h with the P&L distribution of a different potential portfolio $\tilde{h}$. Evaluation is one of the risk and portfolio manager's primary tasks.

Since each portfolio is represented by the whole projected distribution of its P&L, it is not possible to compare two portfolios in terms of which P&L is higher. To obviate this problem, practitioners typically rely on one or more summary statistics of the projected P&L distribution.
The most standard statistics are the expected value, the standard deviation and the Sharpe ratio, also known respectively as expected outperformance, tracking error and information ratio in the case of benchmarked portfolio management. Other measures include the value at risk (VaR), the expected shortfall (ES or CVaR), skewness, kurtosis, etc. More innovative statistics include coherent measures of risk, see Artzner, Delbaen, Eber, and Heath (1997); spectral measures of risk aversion, see Acerbi (2002); and measures of diversification, such as the "effective number of bets", see Meucci (2009a).

We emphasize that, in this context, all the above are ex-ante measures of risk for the projected portfolio P&L $\Pi_h$, rather than ex-post measures of performance.
Key concept. The Evaluation Step consists of two sub-steps. The first sub-step is the computation of one or more summary statistics S for the projected distribution of the given portfolio P&L $\Pi_h$ with holdings h:

$$ f_{\Pi_h} \ \mapsto\ \mathrm{S}(h). \tag{38} $$

The second, optional, sub-step is the attribution of the summary statistics S(h) to the fully flexible attribution factors Z utilized in the Attribution Step:

$$ f_{\Pi_h,Z},\ b_h \ \mapsto\ \mathrm{S}(h) = \sum_{k=1}^{K} b_{h,k}\,\mathrm{S}_k, \tag{39} $$

where $b_{h,k}$ represents the "amount" of the factor $Z_k$ in the portfolio projected P&L and $\mathrm{S}_k$ represents the "per-unit" contribution to the statistic S(h) from the factor $Z_k$.

Illustration. In our simple normal market of one stock and one option, any portfolio is determined by the holdings $h \equiv (h_s, h_c)'$. Let us focus on the first sub-step (38) and compute the most basic summary statistic of the P&L, namely its expected value. From the distribution of a generic portfolio P&L (30) we obtain

$$ \mathrm{S}(h_s, h_c) \equiv \mathrm{E}\{\Pi_h\} = \tau\mu_h = \tau\,[h_s s_T \mu_s + h_c(\delta_{BS,T}\,\mu_s + v_{BS,T}\,\mu_\sigma)]. \tag{40} $$

Similarly, if the manager cares about a measure of volatility, a suitable measure is the standard deviation

$$ \mathrm{S}(h_s, h_c) \equiv \mathrm{Sd}\{\Pi_h\} = \sqrt{\tau}\,\sigma_h, \tag{41} $$

where $\sigma_h$ is defined in (32).
For the optional summary statistics attribution sub-step (39), a simple linear decomposition that mirrors the attribution equation (34) is not feasible. For instance, for the standard deviation it is well known that $\mathrm{Sd}\{\Pi_h\} \neq \sum_{k=1}^{K} b_{h,k}\,\mathrm{Sd}\{Z_k\}$. However, numerous summary statistics, such as expectation, standard deviation, VaR, ES, and spectral measures, display an interesting feature: they are homogeneous, i.e. by doubling all the holdings in the portfolio, those summary statistics also double. As proved by Euler, for homogeneous statistics the following identity holds true

$$ \mathrm{S}(h) = \sum_{k=1}^{K} b_{h,k}\,\frac{\partial\,\mathrm{S}(h)}{\partial b_{h,k}}. \tag{42} $$

Therefore, if the summary statistic is homogeneous, we can take advantage of Euler's identity (42) to perform the summary statistics attribution sub-step (39), which becomes (42). In particular, for the VaR, the decomposition (42) amounts to the classical definition of marginal contributions to VaR, see e.g. Garman (1997); for the standard deviation, the decomposition (42) amounts to the "hot spots", see Litterman (1996).
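A sketch of the Euler decomposition for the standard deviation under the linear attribution (34) follows (illustrative Python; the gradient formula is the standard one for Sd as a function of the exposures):

```python
import numpy as np

def sd_contributions(b, Z):
    """Euler decomposition (42) for S(h) = Sd{Pi_h} under Pi_h = Z @ b:
    contribution_k = b_k * (Cov(Z) b)_k / Sd, and the contributions
    sum exactly to the total standard deviation."""
    cov = np.cov(Z, rowvar=False)
    sd = np.sqrt(b @ cov @ b)
    marginal = cov @ b / sd            # dSd / db_k
    return b * marginal                # sums to sd
```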
We recall that the simplest case of the flexible, top-down, Factors on Demand attribution of the portfolio P&L (34) is the bottom-up attribution to the individual securities through the conservation law of money (26). Similarly, the simplest case of attribution of the summary statistics (39) is the attribution of the summary statistic S(h) to the individual securities

$$ \mathrm{S}(h) = \sum_{n=1}^{N} h_n\,\frac{\partial\,\mathrm{S}(h)}{\partial h_n}. \tag{43} $$

Illustration. To illustrate the attribution of a summary statistic of the projected portfolio P&L, we rely on our example of a stock and a call option. We focus on the standard deviation (41).

The exposure $b_{h,s}$ of the projected portfolio P&L (34) to the stock factor $Z_s \equiv \ln(S_{T+\tau}/s_T)$ and the exposure $b_{h,\sigma}$ to the implied volatility factor $Z_\sigma \equiv \ln(\sigma_{T+\tau}/\sigma_T)$ were calculated in (37). Then the attribution (42) of the standard deviation of the projected portfolio P&L to each of the two risk factors becomes

$$ \begin{pmatrix} \partial\,\mathrm{Sd}\{\Pi_h\}/\partial b_{h,s} \\ \partial\,\mathrm{Sd}\{\Pi_h\}/\partial b_{h,\sigma} \end{pmatrix} = \frac{\sqrt{\tau}}{\sigma_h} \begin{pmatrix} \sigma_s^2 & \rho\sigma_s\sigma_\sigma \\ \rho\sigma_s\sigma_\sigma & \sigma_\sigma^2 \end{pmatrix} \begin{pmatrix} h_s s_T + h_c\,\delta_{BS,T} \\ h_c\, v_{BS,T} \end{pmatrix}, \tag{44} $$

where $\sigma_h$ is defined in (32); see the proof in the appendix. The total contributions to risk from the factors follow by multiplying the entries on the left-hand side of (44) by the respective exposures (37).

For the attribution to the individual securities, i.e. the stock and the call option, a similar calculation yields

$$ \begin{pmatrix} \partial\,\mathrm{Sd}\{\Pi_h\}/\partial h_s \\ \partial\,\mathrm{Sd}\{\Pi_h\}/\partial h_c \end{pmatrix} = \frac{\sqrt{\tau}}{\sigma_h} \begin{pmatrix} s_T^2\sigma_s^2 & \sigma_{s,c} \\ \sigma_{s,c} & \sigma_c^2 \end{pmatrix} \begin{pmatrix} h_s \\ h_c \end{pmatrix}, \tag{45} $$

where

$$ \sigma_c^2 \equiv \sigma_s^2\,\delta_{BS,T}^2 + \sigma_\sigma^2\, v_{BS,T}^2 + 2\rho\,\sigma_s\sigma_\sigma\,\delta_{BS,T} v_{BS,T} \tag{46} $$

$$ \sigma_{s,c} \equiv \delta_{BS,T}\, s_T\,\sigma_s^2 + \rho\, s_T\, v_{BS,T}\,\sigma_s\sigma_\sigma, \tag{47} $$

see the proof in the appendix. The total contributions to risk from the stock and the call option follow by multiplying the entries on the left-hand side of (45) by the respective holdings $h_s$ and $h_c$.
The computation of the summary statistic S(h) is hard to perform in practice, unless the market is normal as in our example (41), because complex multiple integrals are involved. For instance, using the same notation as in (28), the VaR with confidence c is defined implicitly by

$$ 1 - c \equiv \int_{h'\pi \leq -\mathrm{VaR}} f_\Pi(\pi_1, \ldots, \pi_N)\, d\pi_1 \cdots d\pi_N. \tag{48} $$

To address this problem, one can rely on approximation methods such as the Cornish-Fisher expansion, or on the elliptical assumption, see Meucci (2005a) for a review. The computation of the partial derivatives for the decomposition (42) of the summary statistics is even harder, unless the market is normal as in our example (44)-(45). Fortunately, these computations become simple when the market distribution is represented in terms of scenarios, see Meucci (2010c).
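With scenarios, these statistics reduce to sorting and averaging. A minimal sketch (illustrative; equally probable scenarios are assumed):

```python
import numpy as np

def var_es(pnl_h, c=0.99):
    """VaR and expected shortfall from J equally likely P&L scenarios:
    VaR is minus the (1-c)-quantile of the P&L; ES is minus the average
    of the P&L in the tail beyond that quantile."""
    q = np.quantile(pnl_h, 1.0 - c)
    var = -q
    es = -pnl_h[pnl_h <= q].mean()
    return var, es
```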
Before concluding, we must address two key problems of risk and portfolio management: estimation risk, introduced in the Estimation Step P2, and liquidity risk, introduced in the Pricing Step P4.

As far as estimation risk is concerned, the projected distribution of the P&L $\Pi_h$ that we are evaluating is only an estimate, not the true projected distribution, which is unknown. Therefore, estimation risk affects the Evaluation Step. As a simple, effective way to address this issue, risk managers perform stress-tests, or scenario analyses, which amount to evaluating the P&L under specific, typically extreme or historical, realizations of the risk drivers. A more advanced and general approach to stress testing is "Fully Flexible Probabilities", see Meucci (2010d), which allows the portfolio manager to assign non-equal probabilities to the historical scenarios, according to such criteria as exponential smoothing, rolling windows, kernel conditioning and, more flexibly, the generalized Bayesian approach "entropy pooling".
As far as liquidity risk is concerned, the projected distribution of the P&L $\Pi_h$ that we are evaluating does not account for the effect of our own trading. A theory to correct for this effect in the context of risk management was developed in Cetin, Jarrow, and Protter (2004) and Acerbi and Scandolo (2007). For an easy-to-implement liquidity adjustment to the P&L distribution, refer to Meucci and Pasquali (2010).
Pitfall. "...To compute the volatility of the P&L we can simply run the
sample standard deviation of the past P&L realizations...". The history of the
past P&L can be informative only if the P&L is an invariant. This seldom
happens, consider for instance the P&L generated by a buy-and-hold strategy
in one call option. In general, one has to follow all the steps of the Prayer to
compute risk numbers.
Pitfall. "...To compute the VaR I can multiply the standard deviation by
a threshold number such as 1.96...". This calculation is only correct with very
specific, unrealistic, typically normal, models for the market distribution.

P8 Optimization

In the Evaluation Step P7, the risk manager or portfolio manager obtains a set of computed summary statistics S to assess the goodness of a portfolio with holdings h. These statistics can be combined in a subjective manner to give rise to new statistics. For instance, a portfolio with an expected return of 2% and a standard deviation of 5% could be good for an investor with low risk tolerance, but bad for an aggressive trader. In this case, a trade-off statistic $\mathrm{S}(h) \equiv \mathrm{E}\{\Pi_h\} - \lambda\,\mathrm{Sd}\{\Pi_h\}$ can rank the portfolios according to the preferences of the investor, reflected in the parameter $\lambda$. Alternatively, we can use a subjective utility function u and rank portfolios based on the expected utility $\mathrm{S}(h) \equiv \mathrm{E}\{u(\Pi_h)\}$.

More generally, we call index of satisfaction the function that translates the P&L distribution of the portfolio with holdings $h \equiv (h_1, \ldots, h_N)'$ into a personal preference ranking. We denote the index of satisfaction by the general notation S(h) used in (38) for the evaluation summary statistics, because any index of satisfaction is also a summary statistic.

Given an index of satisfaction S(h), it is now possible to optimize the holdings h accordingly. Portfolio optimization is the primary task of the portfolio manager.

Clearly, the optimal allocation should not violate a set of hard constraints, such as the budget constraint, or soft constraints, such as constraints on leverage, risk, etc. We denote by $\mathcal{C}$ the set of all such constraints and by "$h \in \mathcal{C}$" the condition that the allocation h satisfies the given constraints.
Key concept. The Optimization Step is the process of computing the holdings that maximize satisfaction, while not violating a given set of investment constraints:

$$ h^* \equiv \operatorname{argmax}_{h \in \mathcal{C}}\,\{\mathrm{S}(h)\}. \tag{49} $$

We emphasize that the choice of the most suitable index of satisfaction S, as well as of the specific constraints $\mathcal{C}$, varies widely depending on the profile of the securities' P&L distribution, the investment horizon, and other features of the market and the investor.
Illustration. In our stock and option example we can compute the best hedge for one call option. In this context, the general framework (49) becomes

$$ (h_s^*, h_c^*) \equiv \operatorname{argmax}_{h_c \equiv 1}\,\{-\mathrm{Sd}\{\Pi_h\}\}. \tag{50} $$

Then the first-order condition on the P&L standard deviation, computed in (30)-(32), yields

$$ h_s^* \equiv -\frac{\delta_{BS,T}}{s_T} - \rho\,\frac{v_{BS,T}}{s_T}\,\frac{\sigma_\sigma}{\sigma_s}. \tag{51} $$

If the correlation $\rho$ between implied volatility and underlying were null, the best hedge would consist in shorting a "delta" amount of the underlying. In general $\rho$ is substantially negative: for instance, the sample correlation between the VIX and the S&P 500 is approximately $-0.7$. Therefore, a correction to the simplistic delta hedge must be applied.
In general, the numerical optimization (49) is a challenging task. To address this issue one can resort to the two-step mean-variance heuristic. First, the mean-variance efficient frontier is computed:

$$ h_\lambda \equiv \operatorname{argmax}_{h \in \mathcal{C}}\,\{\mathrm{E}\{\Pi_h\} - \lambda\,\mathrm{V}\{\Pi_h\}\}, \qquad \lambda \in \mathbb{R}. \tag{52} $$

This step reduces the dimension of the problem from N, the dimension of the market, to 1, the value of $\lambda$. The optimization (52) can be solved by variations of quadratic programming. The optimization becomes particularly efficient when a linear factor model makes the covariance of the securities' P&Ls sparse, see Meucci (2010h).

Second, the optimal portfolio is selected by a one-dimensional search

$$ h^* \equiv \operatorname{argmax}_{\lambda \in \mathbb{R}}\,\{\mathrm{S}(h_\lambda)\}. \tag{53} $$

The optimization (53) can be performed by a simple grid search.
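A toy sketch of the two-step heuristic on scenarios follows (illustrative assumptions: no constraints $\mathcal{C}$, so that (52) admits the closed-form first-order condition used below, and a grid search over $\lambda$ for (53)):

```python
import numpy as np

def two_step_mv(pnl_scenarios, satisfaction, lambdas=np.logspace(-2, 2, 50)):
    """Step 1 (52), unconstrained: h_lambda = (2 lambda Cov)^-1 mu for a
    grid of risk aversions. Step 2 (53): pick the frontier portfolio that
    maximizes the index of satisfaction on the portfolio P&L scenarios."""
    mu = pnl_scenarios.mean(axis=0)
    cov = np.cov(pnl_scenarios, rowvar=False)
    best_h, best_s = None, -np.inf
    for lam in lambdas:
        h = np.linalg.solve(2.0 * lam * cov, mu)  # first-order condition of (52)
        s = satisfaction(pnl_scenarios @ h)
        if s > best_s:
            best_h, best_s = h, s
    return best_h

# usage: e.g. mean minus two standard deviations as index of satisfaction
# h_star = two_step_mv(pnl, lambda x: x.mean() - 2.0 * x.std())
```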


As was the case for the Evaluation Step P7, we must address estimation risk, introduced in the Estimation Step P2: the projected distribution of the P&L that we are optimizing is only an estimate, not the true projected distribution, which is unknown. As it turns out, the optimal portfolio is extremely sensitive to the estimated input distribution, which makes estimation risk particularly relevant for the Optimization Step P8.

To address the issue of estimation risk, portfolio managers rely on approaches more advanced than the simple two-step mean-variance heuristic (52)-(53). These advanced approaches include robust optimization, which relies on cone programming, see Ben-Tal and Nemirovski (2001) and Cornuejols and Tutuncu (2007); Bayesian allocation, see Bawa, Brown, and Klein (1979); robust Bayesian allocation, see Meucci (2005b); and resampling, see Michaud (1998). We refer to Meucci (2005a) for an in-depth review.

Since estimation is imperfect, tactical portfolio construction enhances performance by blending market views and predictive signals into the estimated market distribution. Well-known techniques to perform tactical portfolio construction are the approach by Grinold and Kahn (1999), which mixes signals based on linear factor models for returns; the Bayesian-inspired methodology by Black and Litterman (1990); and the generalized Bayesian approach "Entropy Pooling" in Meucci (2008).

Due to the rapid decay of the quality of predictive tactical signals, managers separate tactical portfolio construction from strategic rebalancing, which takes into account shortfall and drawdown control and is optimized based on techniques that range from dynamic programming to heuristics, see e.g. Merton (1992), Grossman and Zhou (1993), Browne and Kosowski (2010), and refer to Meucci (2010g) for a review and code.

Finally, liquidity risk, discussed in the Pricing Step P4, impacts the Optimization Step: transaction costs must be paid to reallocate capital, and the process of executing a transaction impacts the execution price. Therefore, market impact models must be embedded in the portfolio optimization process. The standard approach in this direction is a power-law impact model, see e.g. Keim and Madhavan (1998).

Pitfall. "...Mean-variance assumes normality...". The mean-variance approach does not assume normality: any market distribution can be fed into the two-step process (52)-(53).

P9 Execution

The Optimization Step P8 delivers a desired allocation $h^* \equiv (h_1^*, \ldots, h_N^*)'$. To achieve the desired allocation, it is necessary to rebalance the positions from the current allocation $h_T \equiv (h_{1,T}, \ldots, h_{N,T})'$. This rebalancing is not executed immediately. As time evolves, the external market conditions change. Simultaneously, the internal state of the book, represented by the updated allocation, the updated constraints, etc., changes dynamically. To execute a rebalancing trade, this information must be optimally processed.
Key concept. The Execution Step processes the evolving external market information $i_t^m$ and internal book information $i_t^b$ to attain the target portfolio $h^*$ by a sequence of transactions at given prices $p_t \equiv (p_{t,1}, \ldots, p_{t,N})'$:

$$ h^*,\ \{i_t^m\}_{t \geq T},\ \{i_t^b\}_{t \geq T} \ \mapsto\ \{p_t\}_{t \geq T}. \tag{54} $$
Note that often the execution step is implemented in aggregate across different books. This aggregation is particularly useful, as it allows for netting of conflicting buy-sell orders from different traders or managers. Performing this netting in the Optimization Step P8 would be advisable, see e.g. O'Cinneide, Scherer, and Xu (2006) and Stubbs and Vandenbussche (2007). However, this can be hard in practice.

Execution is closely related to liquidity risk, first introduced in the Pricing Step P4. The literature on liquidity, market impact, algorithmic trading and optimal execution is very broad, see e.g. Almgren and Chriss (2000) and Gatheral (2010).
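As a toy contrast to the "trading at all costs" algorithm mentioned below, a naive equal-slice schedule spreads the rebalancing trade over several periods (illustrative only; realistic schedules trade off market impact against execution risk, as in Almgren and Chriss (2000)):

```python
import numpy as np

def equal_slices(h_current, h_target, n_slices=10):
    """Split the total rebalancing trade h* - h_T into n equal child
    orders, one per period: the simplest impact-mitigating schedule."""
    total_trade = np.asarray(h_target) - np.asarray(h_current)
    return [total_trade / n_slices] * n_slices
```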
Illustration. For illustrative purposes, we mention the simplest execution algorithm, namely "trading at all costs". This approach disregards any information on the market or the book and delivers immediately the desired final allocation by depleting the cash reserve. We emphasize that trading at all costs can be heavily suboptimal.

Pitfall. "...The Execution Step P9 should be embedded into the Optimization Step P8...". In practice it is not possible to process simultaneously the real-time information and all the previous steps of the Prayer. Furthermore, execution works best across all books, whereas optimization is specific to each individual manager.

P10 Ex-Post Analysis

In the Execution Step P9 we implemented the allocation $h \equiv (h_1, \ldots, h_N)'$ for the period between the current date T and the investment horizon $T+\tau$. Upon reaching the horizon, we must evaluate the P&L $\pi_h$ realized over the horizon by the allocation, where the lower-case notation emphasizes that the P&L is no longer a random variable, but rather a number that we observe ex-post.

Key concept. The Ex-Post Analysis Step identifies the contributions to the realized P&L from different decision makers and market factors:

$$ \pi_h \ \mapsto\ (\pi_a, \pi_b, \ldots). \tag{55} $$

Ex-post performance analysis is a broad subject that attracts tremendous attention from practitioners, as their compensation is ultimately tied to the results of this analysis. Ex-post performance can be broken down into two components: the performance of the target portfolio from the Optimization Step P8, and the slippage performance from the Execution Step P9.

To analyze the ex-post performance of the target portfolio, the most basic framework decomposes this performance into an allocation term and a selection term, see e.g. Brinson and Fachler (1985). More recent work attributes performance to different factors, such as foreign exchange swings or yield curve movements, consistently with the Attribution Step P6.

The slippage component can be decomposed into unexecuted trades and the implementation shortfall attributable to market impact, see Perold (1988).

Furthermore, performance must be fairly decomposed across different periods, see e.g. Carino (1999) and Menchero (2000).
Illustration. In our stock and option example, we can decompose the realized P&L into the cost incurred by the "trading at all costs" strategy, a stock component, an implied volatility component, and a residual. In particular, the stock component reads $b_{h,s}\ln(s_{T+\tau}/s_T)$ as in (36), and the implied volatility component reads $b_{h,\sigma}\ln(\sigma_{T+\tau}/\sigma_T)$. The residual is the plug-in term that makes the sum of all components add up to the total realized P&L.
Pitfall. "...I prefer geometric performance attribution, because it can be
aggregated exactly across time and across currencies...". The geometric, or multiplicative approach to ex-post performance is arguably less intuitive, because
it does not accommodate naturally a linear decomposition in terms of dierent
risk or decisions factors.

24

References

Acerbi, C., 2002, Spectral measures of risk: A coherent representation of subjective risk aversion, Journal of Banking and Finance 26, 1505-1518.
, and G. Scandolo, 2007, Liquidity risk theory and coherent measures of risk, Working Paper.
Albanese, C., K. Jackson, and P. Wiberg, 2004, A new Fourier transform algorithm for value at risk, Quantitative Finance 4, 328-338.
Almgren, R., and N. Chriss, 2000, Optimal execution of portfolio transactions, Journal of Risk 3, 5-39.
Artzner, P., F. Delbaen, J. M. Eber, and D. Heath, 1997, Thinking coherently, Risk Magazine 10, 68-71.
Bawa, V. S., S. J. Brown, and R. W. Klein, 1979, Estimation Risk and Optimal Portfolio Choice (North Holland).
Ben-Tal, A., and A. Nemirovski, 2001, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications (Society for Industrial and Applied Mathematics).
Black, F., and R. Litterman, 1990, Asset allocation: combining investor views with market equilibrium, Goldman Sachs Fixed Income Research.
Brinson, G., and N. Fachler, 1985, Measuring non-US equity portfolio performance, Journal of Portfolio Management pp. 73-76.
Browne, S., and R. Kosowski, 2010, Drawdown minimization, Encyclopedia of Quantitative Finance, Wiley.
Carino, D., 1999, Combining attribution effects over time, Journal of Performance Measurement pp. 5-14.
Cetin, U., R. Jarrow, and P. Protter, 2004, Liquidity risk and arbitrage pricing theory, Finance and Stochastics 8, 311-341.
Cherubini, U., E. Luciano, and W. Vecchiato, 2004, Copula Methods in Finance (Wiley).
Cornuejols, G., and R. Tutuncu, 2007, Optimization Methods in Finance (Cambridge University Press).
Garman, M., 1997, Taking VaR to pieces, Risk 10, 70-71.
Gatheral, J., 2010, No-dynamic-arbitrage and market impact, Quantitative Finance 10, 749-759.
Grinold, R. C., and R. Kahn, 1999, Active Portfolio Management. A Quantitative Approach for Producing Superior Returns and Controlling Risk (McGraw-Hill) 2nd edn.
Grossman, S., and Z. Zhou, 1993, Optimal investment strategies for controlling drawdowns, Mathematical Finance 3, 241-276.
Keim, D. B., and A. Madhavan, 1998, The cost of institutional equity trades: An overview, Rodney L. White Center for Financial Research Working Paper Series.
Litterman, R., 1996, Hot spots and hedges, Goldman Sachs and Co., Risk Management Series.
Menchero, J., 2000, An optimized approach to linking attribution effects over time, Journal of Performance Measurement 5.
Merton, R. C., 1992, Continuous-Time Finance (Blackwell).
Meucci, A., 2005a, Risk and Asset Allocation (Springer). Available at http://symmys.com.
, 2005b, Robust Bayesian asset allocation, Working Paper. Article and code available at http://symmys.com/node/102.
, 2008, Fully flexible views: Theory and practice, Risk 21, 97-102. Article and code available at http://symmys.com/node/158.
, 2009a, Managing diversification, Risk 22, 74-79. Article and code available at http://symmys.com/node/199.
, 2009b, Review of discrete and continuous processes in finance: Theory and applications, Working Paper. Article and code available at http://symmys.com/node/131.
, 2009c, Review of statistical arbitrage, cointegration, and multivariate Ornstein-Uhlenbeck, Working Paper. Article and code available at http://symmys.com/node/132.
, 2010a, Annualization and general projection of skewness, kurtosis, and all summary statistics, GARP Risk Professional - "The Quant Classroom by Attilio Meucci" August, 55-56. Article and code available at http://symmys.com/node/136.
, 2010b, Common misconceptions about beta - hedging, estimation and horizon effects, GARP Risk Professional - "The Quant Classroom by Attilio Meucci" June, 42-45. Article and code available at http://symmys.com/node/165.
, 2010c, Factors on Demand - building a platform for portfolio managers, risk managers and traders, Risk 23, 84-89. Article and code available at http://symmys.com/node/164.
, 2010d, Historical scenarios with fully flexible probabilities, GARP Risk Professional - "The Quant Classroom by Attilio Meucci" December, 40-43. Article and code available at http://symmys.com/node/150.
, 2010e, Linear vs. compounded returns - common pitfalls in portfolio management, GARP Risk Professional - "The Quant Classroom by Attilio Meucci" April, 52-54. Article and code available at http://symmys.com/node/141.
, 2010f, Return calculations for leveraged securities and portfolios, GARP Risk Professional - "The Quant Classroom by Attilio Meucci" October, 40-43. Available at http://symmys.com/node/140.
, 2010g, Review of dynamic allocation strategies: Convex versus concave management, Working Paper. Article and code available at http://symmys.com/node/153.
, 2010h, Review of linear factor models: Unexpected common features and the systematic-plus-idiosyncratic myth, Working Paper. Article and code available at http://www.symmys.com/node/336.
, 2010i, Square-root rule, covariances and ellipsoids - how to analyze and visualize the propagation of risk, GARP Risk Professional - "The Quant Classroom by Attilio Meucci" February, 52-53. Article and code available at http://symmys.com/node/137.
, 2011a, New copulas for risk and portfolio management, Risk 24, 83-86. Article and code available at http://symmys.com/node/335.
, 2011b, "P" versus "Q": Differences and commonalities between the two areas of quantitative finance, GARP Risk Professional - "The Quant Classroom by Attilio Meucci" February, 43-44. Article and code available at http://symmys.com/node/62.
, and S. Pasquali, 2010, Liquidity-adjusted portfolio distribution and liquidity score, Working Paper. Article and code available at http://symmys.com/node/350.
Michaud, R. O., 1998, Efficient Asset Management: A Practical Guide to Stock Portfolio Optimization and Asset Allocation (Harvard Business School Press).
O'Cinneide, C., B. Scherer, and X. Xu, 2006, Pooling trades in a quantitative investment process, Journal of Investing 32, 33-43.
Paparoditis, E., and D. N. Politis, 2009, Resampling and subsampling for financial time series, in Handbook of Financial Time Series (Andersen, T., Davis, R., Kreiss, J.-P., and Mikosch, T., Eds.), Springer, Berlin-Heidelberg, pp. 983-999.
Perold, A. F., 1988, The implementation shortfall: Paper vs. reality, Journal of Portfolio Management 14, 4-9.
Stubbs, R. A., and D. Vandenbussche, 2007, Multi-portfolio optimization and fairness in allocation of trades, Axioma Research Paper No. 013.

Appendix

First, consider the following rule, which holds for any symmetric matrix a and conformable vector x:

$$ \frac{\partial \sqrt{x'ax}}{\partial x} = \frac{ax}{\sqrt{x'ax}}. \tag{56} $$

To prove (44), we recall from the attribution to the risk factors (37) that the standard deviation (41) reads

$$ (\mathrm{Sd}\{\Pi_h\})^2 = \tau\sigma_h^2 = \tau\,\begin{pmatrix} b_{h,s} & b_{h,\sigma} \end{pmatrix} \begin{pmatrix} \sigma_s^2 & \rho\sigma_s\sigma_\sigma \\ \rho\sigma_s\sigma_\sigma & \sigma_\sigma^2 \end{pmatrix} \begin{pmatrix} b_{h,s} \\ b_{h,\sigma} \end{pmatrix}. \tag{57} $$

Then (44) follows from (56).

To prove (45), we recall that the stock P&L (19) reads

$$ \Pi_s \approx s_T(\ln S_{T+\tau} - \ln s_T) \tag{58} $$

and the call option P&L (22) reads

$$ \Pi_c \approx (\ln S_{T+\tau} - \ln s_T)\,\delta_{BS,T} + (\ln \sigma_{T+\tau} - \ln \sigma_T)\,v_{BS,T}. \tag{59} $$

We recall from (14) that all the log-changes above are jointly normal. Therefore the entries of the covariance matrix read

$$ \mathrm{V}\{\Pi_s\} = \mathrm{V}\{s_T(\ln S_{T+\tau} - \ln s_T)\} = \tau\, s_T^2 \sigma_s^2, \tag{60} $$

$$ \begin{aligned} \mathrm{V}\{\Pi_c\} &= \mathrm{V}\{(\ln S_{T+\tau} - \ln s_T)\,\delta_{BS,T}\} + \mathrm{V}\{(\ln \sigma_{T+\tau} - \ln \sigma_T)\,v_{BS,T}\} \\ &\quad + 2\,\mathrm{Cv}\{(\ln S_{T+\tau} - \ln s_T)\,\delta_{BS,T},\ (\ln \sigma_{T+\tau} - \ln \sigma_T)\,v_{BS,T}\} \\ &= \tau\,(\sigma_s^2\,\delta_{BS,T}^2 + \sigma_\sigma^2\, v_{BS,T}^2 + 2\rho\,\sigma_s\sigma_\sigma\,\delta_{BS,T} v_{BS,T}) \equiv \tau\sigma_c^2, \end{aligned} \tag{61} $$

$$ \begin{aligned} \mathrm{Cv}\{\Pi_s, \Pi_c\} &= \mathrm{Cv}\{s_T(\ln S_{T+\tau} - \ln s_T),\ (\ln S_{T+\tau} - \ln s_T)\,\delta_{BS,T}\} \\ &\quad + \mathrm{Cv}\{s_T(\ln S_{T+\tau} - \ln s_T),\ (\ln \sigma_{T+\tau} - \ln \sigma_T)\,v_{BS,T}\} \\ &= \tau\,(\delta_{BS,T}\, s_T\,\sigma_s^2 + \rho\, s_T\, v_{BS,T}\,\sigma_s\sigma_\sigma) \equiv \tau\sigma_{s,c}, \end{aligned} \tag{62} $$

where Cv{X, Y} denotes the covariance between X and Y.

As in (29), the portfolio P&L reads

$$ \Pi_h = h_s \Pi_s + h_c \Pi_c. \tag{63} $$

Thus the standard deviation (41) reads

$$ (\mathrm{Sd}\{\Pi_h\})^2 = \tau\,\begin{pmatrix} h_s & h_c \end{pmatrix} \begin{pmatrix} s_T^2\sigma_s^2 & \sigma_{s,c} \\ \sigma_{s,c} & \sigma_c^2 \end{pmatrix} \begin{pmatrix} h_s \\ h_c \end{pmatrix}. \tag{64} $$

Then (45) follows from (56).
