
Quant Factor Investing

Nick Bird CFA


OQ Funds Management
Contents
Introduction .......................................................................................................................... 4
Value Factors......................................................................................................................... 5
Growth Factors ..................................................................................................................... 9
Momentum Factors ............................................................................................................ 11
Sentiment Factors ............................................................................................................... 14
Certainty Factors ................................................................................................................. 18
Debt Safety Factors ............................................................................................................. 19
Backtesting Quant Factors .................................................................................................. 20
Combining Quant Factor Scores ......................................................................................... 26
Portfolio Construction and Rebalancing ............................................................................. 29
Factor Timing ...................................................................................................................... 33
Event Anomalies ................................................................................................................. 37
Quant Factor Investing Pitfalls ............................................................................................ 40
Appendix A: DCF Methodology ........................................................................................... 44
Appendix B: Pro-Rating Data .............................................................................................. 47
Appendix C: Using Data....................................................................................................... 48
General Appendices ................................................................................................................. 53
Smart Beta........................................................................................................................... 53
Day Trading ......................................................................................................................... 55
Short Selling ........................................................................................................................ 57
Listed Closed-End Funds ..................................................................................................... 61
Fund Manager Fees............................................................................................................. 63
Financial Markets and Interest Rates ................................................................................. 65
Market Impact Costs and Fund Capacity ............................................................................ 68
Stop-Loss Orders ................................................................................................................. 71
Efficient Market Hypothesis ............................................................................................... 72
Value vs Growth Investing .................................................................................................. 75
Risk, Beta and Volatility ...................................................................................................... 77
Get Rich Quick! ................................................................................................................... 80
The Great Quant Unwind .................................................................................................... 81
Investment Dictionary.............................................................................................................. 84

Introduction
Quant factor investing is an investment style that involves exploiting share market
mispricing opportunities by using quant factor scores.
A simple example is buying companies that are cheap based on value factors, exploiting
investors’ tendency to overlook companies they consider boring.
You could further enhance this strategy by excluding companies which, based on their factor
scores, you believe are cheap for a valid reason (eg because they have poor earnings
certainty).
Another example is buying companies which have positive analyst earnings revisions to
exploit trending in analyst forecasts.
Although it sounds simple, there are lots of things you need to know if you want to
successfully buy and sell companies using a quant factor investment strategy. What are
the best factor calculation methodologies? How do you measure the predictive power of
quant factors? What is the best way to combine factors? What is the ideal holding period
and when should you exit positions? What are the pitfalls you should avoid?
This book aims to answer these questions by providing an in-depth analysis of quant factor
investing.

What is a Quant Factor?


A quant factor can be anything about a company which you can measure. It could be
based on readily available data such as share price returns, or “alternative” data such as the
number of cars in retail parking lots collected from satellite images.
Quant factors should have intuitive appeal as predictors of stock returns. This is important
because there are lots of data fields from which you can calculate a huge number of quant
factors. And if you test enough quant factors, you will identify factors that work purely by
chance.
Quant factors should also exhibit predictive power, either in isolation or in combination
with other factors. This determination should be made based on robust backtests. Ideally,
quant factors should work based on in-sample and out-of-sample tests and perform
consistently over time.
Even after excluding potential factors that have no intuitive appeal and do not exhibit any
meaningful predictive power, there are numerous quant factors that are worthy candidates
for selecting stocks. In fact, there are so many it is important to categorize them into
different groups based on what they measure.
In this book, I divide quant factors into six groups: Value, Growth, Momentum, Sentiment,
Certainty, and Debt Safety.

Value Factors
Value factors compare a company’s share price with any field or combination of fields on
the company’s financial statements. The objective is to determine whether the company’s
share price is low (ie cheap) or high (ie expensive) relative to its financials and fundamental
value.

A value factor could be a simple ratio such as:

 Dividend Yield (Dividend Per Share / Price) – measuring how much a company pays
its shareholders in dividends each year relative to its share price.
 PE Ratio (Price / Earnings Per Share) – measuring the number of years of earnings it
would take to recoup the initial investment, assuming earnings remain constant;
typically investors focus on operating earnings which exclude one-off gains or (more
commonly) losses, rather than reported earnings.
 Price to Book (Price / Book Value Per Share) – measuring whether a company is
cheap or expensive by comparing its share price to the value of the net assets on its
balance sheet.
Value factors can also be calculated using more sophisticated methodologies. Discounted
cashflow (DCF) methodologies can be used, for example, which predict long-term
company cashflows, discount these cashflows back to today’s dollars using an appropriate
discount rate, and aggregate these cashflows to determine a fair market value. A
potential advantage of this approach is it can be used to value low growth and high growth
companies. Please refer to Appendix A for more details on a robust DCF methodology.
Adjusting the data feeding into value factors provides for an additional layer of
sophistication. Consider, for example, a forecast PE Ratio. Rather than using the consensus
or average analyst EPS forecast, you could use a smart consensus forecast that adjusts
earnings based on the dispersion of individual analyst forecasts. This makes sense given
that when there is a wide dispersion of analyst forecasts, the average forecast tends to be
too high.
Value factors can be calculated using historical or forecast data (or a combination of
both). Historical data is available for a vast array of financial fields for all companies.
Forecast data is only available for a relatively small subset of fields for companies covered
by analysts who submit their forecasts to data vendors.
It is important to adjust the financial data for each company’s fiscal year-end. These not
only vary from country to country but also within countries. To illustrate: if company A has
a March fiscal year-end and company B has a December fiscal year-end, forecasts for the
same year represent time periods which are 9 months apart. A pro-rating methodology
should be used to ensure the same time-period is used when calculating value factors using
forecast data. Details on how to pro-rate financial data to deal with different fiscal year-
ends are provided in Appendix B.

Value Factor Characteristics & Observations
 Value factors are prone to periods of underperformance
The predictive power of value factors is premised on the notion that a company’s
share price will ultimately reflect its fundamental value. While this sounds like a
valid assumption, there can be extended periods when due to irrational exuberance
or other market forces, share prices become disconnected from reality. This
occurred, for example, during the tech boom in the late 1990s. Numerous
technology companies which were expensive based on valuation measures
continued to get more expensive over a prolonged period. During 2019 and 2020,
we witnessed a similar market environment, which severely hurt the
performance of value factors.
 Value factors tend to work better in a rising interest rate
environment
Investors typically ascribe strong future earnings growth potential to stocks that look
expensive based on valuation factors. The investment rationale is that although the
company looks expensive based on its current financial data and its short-term
outlook, it will experience strong growth in the future. These future cashflows are
worth less when interest rates rise as a higher discount rate dilutes their worth.
Conversely when interest rates fall, these future cashflows are worth more in today’s
dollars, thereby making it easier to justify expensive valuations for stocks with strong
growth potential.
 Strong retail investor flows can adversely impact the performance of
valuation factors
Retail investors tend to follow momentum and growth strategies. Put simply, they
chase what’s “hot”. To many investors, a good company which has strong share
price momentum is a good buy regardless of its valuation (eg Tesla). Strong retail
flows, therefore, can result in expensive stocks getting more expensive and cheap
stocks getting cheaper in a relative sense. This adversely impacts the performance
of valuation factors, although it does set the scene for value factors to perform
strongly when retail investor participation wanes and the divergence in valuation
levels for cheap and expensive stocks has become unsustainably large.
 Value factors have relatively poor short-term predictive power but
relatively strong long-term predictive power
Value factors exhibit relatively weak predictive power in the short term, for example
up to 3 months. This is largely due to the tendency of cheap stocks to get cheaper
and expensive stocks to get more expensive in the short term. Over longer
investment time horizons, however, cheap stocks tend to outperform and expensive
stocks tend to underperform.
 Cheap companies are sometimes cheap for a valid reason
Sometimes a company looks cheap because of issues that are not captured in the
data or valuation methodology. For example, a company could look cheap because
of structural challenges (eg a newspaper company), regulatory risks (eg a utility) or
curtailed future earnings (eg a mining company with a limited mine life). These
stocks are referred to as value traps.

[Figure: Cheap stocks can be value traps]

 Sophisticated methodologies and combining value factors with other factors help
address the deficiencies of value factors
The concept of value is simple and intuitive - if two companies have the same share
price but one company generates stronger profits and pays higher dividends, prima
facie this company looks more attractive as an investment opportunity. However,
there are a range of issues that need to be taken into consideration when properly
valuing companies and this requires the use of relatively sophisticated and robust
valuation methodologies. Combining value factors with other quant factors also
helps avoid value traps. For example, value factors combine well with certainty and
growth factors.
 Value ratios should include Price in the denominator
To understand the reason for including price in the denominator of ratios, consider
the most commonly referenced valuation ratio: the PE Ratio. This ratio has intuitive
appeal as investors can easily relate to the number of years of earnings required to
pay back the purchase price (assuming no change in earnings). However, from a
quant perspective, the PE ratio is highly problematic for stocks with negative
earnings as a negative PE Ratio makes no sense. And as EPS approaches zero, the PE
Ratio increases exponentially. The easiest and best way to address this issue is to
calculate the inverse of the PE Ratio, the Earnings Yield. This also applies to another
commonly used valuation ratio – Price to Book Value – as a company’s Book Value
can be very close to zero and in some cases negative. The inverse of this ratio –
Book Yield – is a superior valuation measure for performing quant analysis.
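
A minimal sketch, assuming pandas and illustrative data, of why the inverted ratios are better
behaved for ranking than PE and Price to Book:

```python
import pandas as pd

# Hypothetical universe; the column names and values are illustrative only.
df = pd.DataFrame({
    "price":      [50.0, 20.0, 10.0, 30.0],
    "eps":        [5.0,  0.01, -1.0, 2.0],
    "book_value": [25.0, 0.5,  -2.0, 40.0],
})

# PE explodes as EPS approaches zero and is meaningless when EPS is negative.
df["pe_ratio"] = df["price"] / df["eps"]

# Earnings Yield and Book Yield remain well behaved across zero and negative
# values, so ranking from high (cheap) to low (expensive) works for every stock.
df["earnings_yield"] = df["eps"] / df["price"]
df["book_yield"] = df["book_value"] / df["price"]

print(df.sort_values("earnings_yield", ascending=False))
```

Sorting on earnings yield produces a sensible cheap-to-expensive ordering even for the
loss-making stock, which the PE ratio cannot do.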

 Value factors typically have a size bias
Large capitalization companies typically look more expensive than small
capitalization companies, although this relationship isn’t consistent over time. This
potential bias can be addressed by cross-sectionally regressing value factor scores
against a measure of size (such as the log of market capitalization) and using the
residual as the value factor score.
[Figure: Value factor size bias (value score plotted against size)]
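
A minimal sketch of this cross-sectional size neutralization using numpy; the data and
variable names are illustrative:

```python
import numpy as np

# value_score: value factor scores for the cross-section of stocks
# market_cap: market capitalizations for the same stocks (illustrative data)
value_score = np.array([0.8, 0.2, -0.5, 1.1, -0.9])
market_cap = np.array([200e9, 5e9, 1e9, 50e9, 0.5e9])

# Regress value scores on log market cap (plus an intercept) ...
X = np.column_stack([np.ones_like(value_score), np.log(market_cap)])
beta, *_ = np.linalg.lstsq(X, value_score, rcond=None)

# ... and keep the residual as the size-neutralized value factor score.
value_score_size_neutral = value_score - X @ beta
print(value_score_size_neutral)
```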

 Value factors should also be analyzed on a sector relative basis


Companies in some sectors always tend to look expensive, while companies in other
sectors always tend to look cheap. To address this issue, it is advisable to compare a
company’s value factor scores with its peers in the same sector.
 Buybacks have potentially reduced the usefulness of Dividend Yield
In some countries, companies increasingly prefer to return money to shareholders via
buybacks rather than dividend payments. This is particularly prevalent in the United
States. Consequently, some quants believe that Shareholder Yield (which adds
share repurchases to dividend payments and deducts share issuance) is superior to
Dividend Yield.
 Historical value factors for cyclical companies can be misleading
At the top – or just past the top - of the economic cycle, cyclical companies will
typically be generating relatively strong earnings and cashflows. The share prices of
these companies, however, may have started to decline in anticipation of tougher
market conditions. High earnings and cashflows combined with declining share
prices often result in strong valuation scores. These valuation scores, however, can
be misleading given the diminished future earnings prospects. Focusing on value
factor scores based on analyst forecasts can help address this issue for cyclical
companies.

Growth Factors
Growth factors measure the extent to which a company’s financial data (eg EPS) is
trending up or down and the nature of the trajectory. Growth factors can be calculated
using historical or forecast data (or a combination of both) for numerous data fields shown
on company financial statements.

When calculating growth factors using historical per share data such as EPS, the data must
be adjusted for corporate actions such as stock splits to ensure meaningful comparisons
can be made.
Percentage changes are the simplest and most intuitive way to calculate growth factors.
This approach works well for data fields with positive and relatively large values, such as
revenue.
Percentage changes, however, do not provide a meaningful measure of growth when a
data field moves from positive to negative or negative to positive. They can also result in
misleading growth values when the denominator is small (eg EPS increases from $0.01 to
$0.50). Various methods can be used to moderate extreme growth values – for example
using a natural logarithm function – but none are perfect.
When calculating growth values for data fields such as EPS and DPS where these issues are
likely to be problematic, it is often preferable to divide the change in the data field by
Price. This approach, however, dampens growth values for companies with relatively high
share prices (ie expensive companies) which may not be considered desirable.
When calculating growth in a data field over multiple time periods, it makes sense to
reward companies with consistently strong growth and penalize companies with highly
volatile growth from period to period. Put differently, the growth methodology should
favor structural over cyclical growth. Consider two companies which double their EPS over
5 years. If the first company consistently grows its earnings each year while the second
company dramatically increases its EPS the first year only for it to fall over the next three
years before rising slightly, investors are more likely to prefer the first company’s growth
profile.
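
A minimal sketch of one possible implementation, assuming the change-in-EPS-over-price
approach described above with a simple volatility penalty; the penalty term is an assumption
used for illustration, not the book's prescribed methodology:

```python
import numpy as np

def growth_score(eps_history, price):
    """One possible growth score: change in EPS scaled by price, rewarding
    consistency (illustrative; not the book's exact methodology)."""
    eps = np.asarray(eps_history, dtype=float)
    # Total growth scaled by price avoids the problems of percentage changes
    # when EPS crosses zero or starts from a very small base.
    total_growth = (eps[-1] - eps[0]) / price
    # Penalize volatile year-to-year changes so structural growth scores
    # higher than cyclical growth for the same end-to-end change.
    yearly_changes = np.diff(eps) / price
    consistency_penalty = np.std(yearly_changes)
    return total_growth - consistency_penalty

# Both companies double EPS from 1.0 to 2.0 over five years...
steady = growth_score([1.0, 1.2, 1.4, 1.7, 2.0], price=20.0)
volatile = growth_score([1.0, 3.0, 2.2, 1.5, 2.0], price=20.0)
print(steady > volatile)  # True: the steady grower scores higher
```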

Growth Factor Characteristics & Observations


i. Growth factors calculated using historical data work better than
growth factors calculated using analyst forecasts
Analysts’ long-term growth forecasts tend to be too high, especially for high growth
companies. There is a tendency, therefore, for companies with high forecast growth
scores to disappoint investors and quite often high future earnings growth is a
contrarian indicator. Companies, however, which have delivered strong historic
growth have demonstrated the robustness of their business models and are less
likely to disappoint investors.

ii. Growth factors are typically inversely correlated with value factors
Companies with strong growth profiles usually look expensive based on value factors
using forecast data and nearly always look expensive based on value factors using
historical data. The opposite applies for companies with relatively weak growth
profiles. In theory, a well-constructed DCF model can be used to value high growth
and low growth companies, but this is difficult to achieve in practice as it is hard to
source accurate and reliable data to feed into the model.
iii. By themselves, growth factors exhibit little predictive power
All other things being equal, high growth stocks are obviously preferable to low
growth stocks. The problem is all other things are not equal in that typically the level
of growth is already priced into company valuations. When looking at growth factors
in isolation, this tends to dampen their predictive power.
iv. Growth factors are best used in combination with other factors
Investment strategies which focus on buying companies with strong growth and
value factors are often effective. Short-term trading strategies that combine growth
and momentum factors can also generate excess returns.

Momentum Factors
Momentum factors measure whether a company’s share price is going up or down and
the nature of the trajectory.
Momentum factors need to be based on prices adjusted for corporate actions such as stock
splits and rights issues. Dividend payments must also be included in the calculation if the
momentum period includes one or more dividend ex dates (the date investors are no longer
entitled to a dividend payment).
Momentum factors can be constructed which incorporate share price volatility. Big share
price moves for companies with high return volatility are less significant than for companies
with low share price volatility.
Trading volumes also provide a more nuanced view of share price momentum. If a short-
term share price move is accompanied by high trading volumes, it is more likely to be news
driven (eg an earnings surprise), rather than simply a liquidity driven event.
Momentum factors can also be constructed which reference the current share price
relative to the high and/or low share price over the measurement period. Examples
include:

 A price distress factor which measures the percentage difference between the
current share price and the high price over the measurement period.
 The Williams %R indicator which measures the difference between the high share
price and the current share price relative to the high and low prices over the
measurement period; this factor is often used as a short-term mean reversion
trading signal.
 52-week high relative score measuring the current share price relative to the high
price over the previous 12 months; this is quite a powerful signal as companies which
hit their 52-week high tend to continue outperforming. A code sketch of these three
measures follows below.
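
A minimal sketch of the three measures above, assuming a series of adjusted daily closing
prices and roughly 252 trading days per year; exact definitions vary by practitioner:

```python
import pandas as pd

def momentum_factors(prices: pd.Series, window: int = 252) -> dict:
    """Illustrative calculations of the three price-level factors described
    above, from adjusted daily closes (assumed ~252 trading days per year)."""
    recent = prices.iloc[-window:]
    high, low, current = recent.max(), recent.min(), recent.iloc[-1]

    # Percentage gap between the current price and the period high.
    price_distress = (current - high) / high

    # Williams %R: where the current price sits between the period high and low.
    williams_r = (high - current) / (high - low) * -100

    # Current price relative to its 52-week high.
    high_52w_relative = current / high
    return {"price_distress": price_distress,
            "williams_r": williams_r,
            "high_52w_relative": high_52w_relative}

prices = pd.Series([10 + 0.01 * i for i in range(300)])  # toy price path
print(momentum_factors(prices))
```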
Adding an additional layer of sophistication, it is also possible to construct momentum
factors that exploit the tendency of stocks to outperform or underperform during certain
months of the year. Interestingly, this phenomenon is not just associated with the months
companies report their results. It is a robust anomaly in that it is statistically significant and
it applies across different equity markets. It is possible to take advantage of this
observation by breaking up historical stock performance into its monthly constituents and
weighting each one differently based on past performance.

Momentum Factor Characteristics & Observations


i. Short-term momentum factors tend to be mean reversion trading
signals
This applies to momentum factors calculated over periods ranging from one week to
one month. Companies which outperform over this time-period often underperform
in the short term (and vice versa).

ii. Momentum factors calculated over long time horizons, from 3 to 6
years, also tend to be mean reversion signals
Compared to short-term mean reversion factors, the predictive power of these
factors tends to be relatively weak, but the period of mean reversion tends to be
longer (it is measured in months rather than days and weeks).
iii. The most powerful momentum factors measure returns over the
medium term, typically 9 to 12 months
Over this time-period, past winners tend to continue outperforming and past losers
tend to continue underperforming. This is referred to as return autocorrelation. It
occurs due to investors’ tendency to sell shares that have gone up in value to realize
a gain and hold on to shares that have gone down in the hope of recouping the loss.
This is a well-known behavioural bias anomaly which Peter Lynch, in his classic book
One Up on Wall Street, likens to “pulling out the flowers and watering the weeds”.
Quants have a more boring label for this phenomenon: the disposition effect.
iv. A simple way to combine short-term mean reversion and medium-
term return autocorrelation is to calculate a 12-month momentum
factor that excludes the last month’s return
This factor measures the total return over the period from 12 months ago to 1 month ago. It is a simple
quant factor but one of the most powerful. In addition to the strong backtest results
across numerous markets, it has strong intuitive rationale underpinned by a well-
known behavioural bias (the disposition effect).
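
A minimal sketch, assuming a series of month-end adjusted prices where the last element is
the latest month end:

```python
import pandas as pd

def momentum_12_1(prices: pd.Series) -> float:
    """Total return from 12 months ago to 1 month ago, using month-end
    adjusted prices (illustrative sketch; needs at least 13 observations)."""
    return prices.iloc[-2] / prices.iloc[-13] - 1.0

month_end_prices = pd.Series([100, 102, 101, 105, 110, 108, 112,
                              115, 113, 118, 120, 124, 122, 125])
print(momentum_12_1(month_end_prices))
```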
v. Return autocorrelation momentum factors are prone to short and
sharp periods of underperformance
Although these momentum factors (eg total return from 12 months ago to 1 month ago)
perform strongly on average, there are periods when they dramatically
underperform. This typically occurs around major market turning points (eg when
investors suddenly embrace risk after a prolonged period of risk aversion, such as in
March 2009).
vi. Return autocorrelation momentum factors tend to work better for
small to medium capitalization stocks
Smaller companies have more scope to continue their momentum run than large
capitalization companies. It is much easier for a small capitalization stock, for
example, to increase 10-fold than a large capitalization stock.
vii. Caution should be exercised when trading based on short term
momentum factors given the impact transaction costs can have on
performance
This is one of the key reasons why day traders nearly always lose money. For short
term momentum strategies, transaction costs associated with crossing the bid-ask
spread are particularly problematic.

viii. Long term price distress factors can be combined with short term
momentum and analyst sentiment factors to identify potential
turnaround plays
This is one of the most predictive multi-factor quant screens.
ix. Momentum factors also exhibit predictive power for sectors
On average, outperforming sectors tend to continue outperforming and vice versa.
Unlike stocks, this also applies to short-term outperformance and
underperformance.

Sentiment Factors
Sentiment factors are factors calculated based on analyst data. Analysts provide
recommendations, target prices and financial data estimates (eg EPS and DPS estimates) for
companies they cover. All these data can be used to construct quant factors.
The best-known sentiment factor is Earnings Revisions. Changes in consensus EPS
estimates tend to trend over time and this often leads to trending in share prices. The
intuition and theory underpinning the predictive power of Earnings Revisions factors is
sound and historically quant factors measuring changes in broker earnings forecasts have
performed strongly. This has led to the widespread use of Earnings Revisions factors by
fund managers, either as a key input into their stock selection process (eg for quant
managers) or as an additional piece of information that informs their decision making
process (eg for fundamental managers).
Why do consensus EPS forecasts trend? Analysts are aware of where their forecasts are
positioned relative to their peers and they’re reluctant to stray too far away from their peer
group. Let’s assume the consensus EPS forecast is $1.00 and the highest analyst forecast is
$1.10. If, based on new information, an analyst thinks the company’s EPS may be $1.20,
rather than revising her forecast to this level, she is more likely to move to just above the
highest forecast, say $1.12. If she is wrong, the reputation risks associated with doing this
are less severe than they would be for an outlier forecast. And if she is right, she still has
the highest forecast. Other analysts covering the same company tend to follow the same
herding instincts and this often leads to trending in EPS forecast changes. And given the
market follows EPS forecasts closely, this can lead to trending in stock returns.

[Figure: Analyst forecasts exhibit herding behaviour]

In the Growth Factors section, I highlighted the problems associated with using a percentage
change methodology for measuring EPS growth. The same problems apply to measuring
revisions in consensus EPS forecasts. In particular, a small revision in a consensus EPS
forecast which is close to zero can result in an outsized percentage change that does not
reflect the scale of the revision.
A superior approach is to divide the change in consensus EPS by the share price ie (New
Consensus EPS Forecast – Old Consensus EPS Forecast) / Share Price. However, this
approach dampens revisions scores for companies which have a high PE Ratio (and vice
versa).
The best approach is to divide the change in consensus EPS by the standard deviation of
each analyst forecast feeding into the consensus forecast ie (New Consensus EPS Forecast
– Old Consensus EPS Forecast) / Standard Deviation of Analyst Forecasts. This approach is
intuitive as it gives more weight to changes in consensus EPS forecasts when analyst
forecasts are very tightly grouped together (and vice versa). It also results in Earnings
Revisions factors with superior predictive power. The only major problem is at least two
analyst forecasts (and preferably three) are required to calculate the standard deviation.
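
A minimal sketch of this dispersion-scaled revisions calculation; the forecast numbers are
illustrative:

```python
import numpy as np

def revision_score(analyst_eps_now, analyst_eps_prior):
    """Change in consensus EPS scaled by the dispersion of individual analyst
    forecasts, as described above (illustrative; needs at least 2 forecasts)."""
    now = np.asarray(analyst_eps_now, dtype=float)
    prior = np.asarray(analyst_eps_prior, dtype=float)
    consensus_change = now.mean() - prior.mean()
    dispersion = now.std(ddof=1)  # sample standard deviation of forecasts
    return consensus_change / dispersion

# Tightly grouped forecasts: the same upward revision scores more highly ...
print(revision_score([1.02, 1.03, 1.04], [1.00, 1.01, 1.02]))
# ... than when the forecasts are widely dispersed.
print(revision_score([0.82, 1.03, 1.24], [0.80, 1.01, 1.22]))
```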
Rather than solely analyzing change in consensus data, more sophisticated revisions
factors analyze changes in individual analyst forecasts. There is more predictive power in
analyst revisions that move away from the consensus forecast, eg if the consensus forecast is
$1.00 and the analyst revises her forecast from $1.10 to $1.20. Analysts are reluctant to
make such bold revisions due to the reputation risks associated with being wrong, so they
typically only do so when they are extremely confident in the rationale underpinning the
revision. We can take advantage of this behavioral bias by giving more weight to these bold
revisions and less weight, or perhaps no weight, to revisions which move back towards the
consensus forecast.

[Figure: Example of an analyst forecast breaking away from consensus]

Another way to look at earnings revisions is to focus solely on the number of analysts
which have revised their forecasts up or down. This is typically measured using a Net Up
revisions factor methodology representing the number of upward revisions minus the
number of downward revisions over the relevant time frame divided by the number of
analysts covering the company. For example, if there are 5 analysts covering the company
and two analysts revise their forecasts up over the relevant time frame, say 1 month, and
one analyst revises down, the 1 Month Net Up factor score would be +0.2.
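
The Net Up calculation is simple arithmetic; a short sketch using the example above:

```python
def net_up(num_up: int, num_down: int, num_analysts: int) -> float:
    """Net Up revisions score: (upgrades - downgrades) / analysts covering."""
    return (num_up - num_down) / num_analysts

# The example above: 5 analysts, 2 upward and 1 downward revision this month.
print(net_up(num_up=2, num_down=1, num_analysts=5))  # 0.2
```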
Earnings Revisions factors are typically calculated for the next forecast year (FY1) and the
following forecast year (FY2). In many countries and for many companies, analyst revisions
beyond FY2 are either not available or are sketchy. Hence composite Earnings Revisions
factors usually only combine FY1 and FY2 revisions scores. The predictive power of revisions
in FY1 and FY2 forecasts is comparable, so equally weighting revisions for both fiscal years is
appropriate.
In addition to calculating Earnings Revisions based on changes in EPS forecasts, it is possible
to calculate revisions factors for any financial data item where forecast data is available (eg
DPS and Revenue).
Quants also look at changes in analysts’ target prices. Target Price revisions are usually
calculated using a percentage change methodology over different time frames.
The consensus target price can also be compared to the current share price. When doing
this, it is advisable to adjust stale target price forecasts for any move in the overall market
since the target price forecast was made.
Rounding out our suite of sentiment factors is Consensus Recommendation and
Recommendation Revisions. Data vendors collect analyst recommendations and then map
them to a numeric scale. The Consensus Recommendation is the average of the individual
broker recommendations. To illustrate: if two analysts cover a company and one analyst has a Buy
recommendation and the other analyst has a Hold recommendation, and the vendor’s
recommendation scale is Strong Buy (5), Buy (4), Hold (3), Sell (2) and Strong Sell (1), then
the consensus recommendation for the company would be 3.5. Recommendation Revisions
can be calculated based solely on the change in the consensus recommendation over the
relevant time-period.
Revisions factors can be calculated over different time periods. When calculating revisions
factors over relatively long time periods, it makes sense to assign more weight to more
recent revisions. This is intuitive and improves the predictive power of the factors.

Sentiment Factor Characteristics & Observations


i. Analyst recommendations exhibit little predictive power
Broker recommendations by themselves are not useful predictors of share price
returns. In particular, buy recommendations are not good indicators of future
outperformance. Sell recommendations are much rarer than buy recommendations
and tend to have more information content. At the very least, a company with a
consensus sell recommendation should be subjected to more detailed analysis to
understand why it is being shunned by analysts.

[Figure: Buy recommendations aren’t good indicators of outperformance]

ii. Analyst revisions factors are relatively strongly correlated with price
momentum factors
Indeed, many quants categorize analyst revisions factors as momentum factors. The
short-term predictive power of analyst revisions factors, however, tends to be far
more potent than price momentum factors.
iii. The predictive power of analyst revisions factors decays relatively
quickly
This particularly applies to analyst revisions factors measured over short time
periods. The rate of decay has also tended to increase in recent years. This may
reflect the fact that more investors are monitoring the change in analyst forecasts.
iv. The amount, and arguably the quality, of analyst research is
declining
Various industry forces have conspired to make the production of broker research
less profitable than in the past. This is particularly problematic in the European
Union, which has enacted laws (MiFID II) banning banks and stockbrokers from
bundling charges for research and trade execution. As a result, stockbroking firms are
allocating fewer resources to the production of company research and forecasts.
This has potentially affected the usefulness of the data feeding into sentiment
factors, thereby reducing their efficacy.

Certainty Factors
Certainty factors measure the historical stability and the forecast variability of financial
data. Companies with low certainty scores tend to be highly cyclical in that their fortunes
are dependent on the macroeconomic environment, while stocks with high certainty factor
scores tend to be relatively defensive.

For historical data, regression analysis can be used to determine the “goodness of fit”
against a trend line for the relevant time series data (eg historical EPS). Different
statistical techniques can be used to perform this calculation. Adjustments must also be
made for whether the data is trending up or down as consistently rising earnings are
obviously preferable to consistently declining earnings.
For forecast data, the variation of analyst forecasts, as measured by the standard
deviation, can be used to measure certainty. The standard deviation of forecasts can be
divided by the mean consensus forecast or the latest share price. When dealing with
financial fields that can be negative or close to zero (eg EPS), it is preferable to divide by
price.

Certainty Factor Characteristics & Observations


i. Certainty factors combine well with value factors
Companies with highly uncertain earnings prospects can be potential value traps.
Excluding or reducing the weight assigned to these companies can improve the
effectiveness of value biased investment strategies.
ii. Dividend Yield and Dividend Certainty can be used to identify “bond
proxy” companies
Bond proxies are companies which potentially provide a stable and secure income
stream (like bonds) but with a higher yield. In a low interest rate environment where
many sovereign bonds offer negative or extremely low yields, these companies are
potentially attractive to relatively risk-averse investors.
iii. When there is a wide dispersion of analyst forecasts, the mean
consensus forecast tends to be too high
Analysts like to make bullish forecasts as this ingratiates them with the company’s
management. When there is a wide dispersion of forecasts, it is easier for them to
act on this inclination as the reputation risks associated with an incorrect forecast
are less severe than when analyst forecasts are tightly grouped together. Hence
when there is a wide dispersion of forecasts, the mean consensus forecast tends to
be too high.

Debt Safety Factors
Debt safety factors measure the amount of debt a company has on its balance sheet and
the company’s ability to service its debt.
A company’s long-term debt obligation is usually measured as interest bearing debt and
excludes operational debt such as accounts payable. Net cash holdings should also be
deducted from the company’s debt. Net interest-bearing debt can be compared to Total
Assets or Net Assets (after deducting liabilities).
Changes in debt levels indicate whether a company is financing its operations and growth
with external debt or other sources such as retained earnings. Again, this data is typically
divided by Total Assets or Net Assets.
Other ratios that measure a company’s assets relative to its liabilities can also be calculated.
These can also focus on short term debt obligations (eg Current Ratio = Current Assets /
Current Liabilities).
Various ratios can be calculated that measure a company’s ability to service its debts, the
most common being Interest Coverage (ie EBIT / Net Interest Expense). A more robust
approach to calculating income relative to interest payments uses forecast and/or historical
earnings over several years to dilute the impact of a temporary decline in earnings.

Debt Safety Factor Characteristics & Observations


i. Different debt safety factors are appropriate for banks
For banks, it makes sense to focus on bad debts and the size and nature of the bank’s
capital supporting its loans. Capital is “tiered” based on the nature of the funding
source, which is then used to determine capital adequacy.
ii. Debt safety factors aren’t useful by themselves as predictors of
share price returns
Debt safety factors are primarily used to assess companies’ risk profiles. They are
particularly relevant when analyzing price distressed companies and deep value
stocks.
iii. In some countries, a relatively large proportion of companies have
substantial net cash holdings relative to their market capitalizations
This tends to be more common in Asia, particularly Japan. During severe market
dislocations when all stocks tend to fall at the same time, investors can screen for
companies that have large net cash holdings as the downside protection is
temporarily ignored by the market.

Backtesting Quant Factors
Once you come up with a new quant factor you need to decide whether it’s any good.
Making this determination isn’t straightforward. There are lots of questions you need to
answer:

 Is the factor better at predicting short term or long-term returns?


 How consistently does the factor perform?
 Does the factor generate high turnover costs which could erode after-cost
performance?
 Is the factor better at identifying long or short opportunities?
 Does the factor work because it favors certain sectors which have outperformed
over the backtest period?
There is no single backtesting strategy which answers all these questions. Similarly, there
isn’t a single performance metric which can be used to decide whether one factor is better
than another.
The best approach is to use different backtesting strategies and calculate a range of
performance metrics. So long as you know the pros and cons of each strategy and what the
different performance metrics represent, you can form a complete picture of how each
quant factor performs. You can then use this information to build a robust stock selection
methodology.
In broad terms, the different backtesting strategies involve either correlation analysis or
portfolio simulations. I discuss both, starting with simple approaches and then adding
additional complexity.

Correlation Analysis
Univariate Analysis
Univariate correlation analysis can be used to evaluate the strength of the relationship
between an independent variable and dependent variable. For the purposes of
backtesting quant factors, the independent variable is the factor score and the dependent
variable is stock performance. We want factors which have consistently strong return
correlations over time.
The correlation coefficient is a measure of the strength of the relationship between the
relative movement of the two variables (in our case, the factor scores and stock returns).
This measure varies from +1 (perfect positive correlation) to -1 (perfect negative
correlation).
Due to the way in which the calculation is performed – the deviation from the “line of best
fit” is squared – outliers can have a disproportionately large impact on the analysis. This
applies to both the factor score and stock return data.

A simple, intuitive, and effective way of dealing with outliers is to rank all the companies
from best to worst based on both the factor score and stock return. We can then calculate
the correlation coefficient for the two sets of rankings.
Assuming we have a monthly history of factor scores and stock returns, the process
involves:

 Ranking the companies from best to worst based on the factor score and the stock
return for each month over the backtest period (eg 10 years)
 Calculating the correlation coefficient for both sets of rankings for each month (Rank
IC)
 Calculating various statistics based on the monthly history of Rank ICs.
The key performance statistic is the average Rank IC. This is a commonly used measure of
the overall predictive power of quant factors.
It is also important to consider how consistently the factor predicts future stock returns.
A simple way of doing this is to measure the success rate, ie percentage of months where
the Rank IC is greater than 0. A more robust measure is the t-stat of the monthly Rank ICs.
This is calculated by taking the average Rank IC, dividing it by the standard deviation of the
monthly Rank ICs, and multiplying this number by the square root of the number of months
over the backtest period. This statistic enables us to calculate how statistically significant
the correlation is between the factor scores and stock returns. As a rule-of-thumb, a t-stat
greater than 2 indicates we can be very confident the relationship is statistically significant.
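
A minimal sketch of the Rank IC statistics described above, using pandas and scipy; the
month x stock table layout and the random demonstration data are assumptions:

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def rank_ic_stats(factor_scores: pd.DataFrame, fwd_returns: pd.DataFrame):
    """Monthly Rank ICs plus the summary statistics described above.
    Both inputs are assumed to be month x stock tables (illustrative sketch)."""
    monthly_ics = []
    for month in factor_scores.index:
        # Spearman correlation is the rank-based correlation coefficient.
        ic, _ = spearmanr(factor_scores.loc[month], fwd_returns.loc[month])
        monthly_ics.append(ic)
    ics = np.array(monthly_ics)
    avg_ic = ics.mean()
    success_rate = (ics > 0).mean()                      # share of months with IC > 0
    t_stat = avg_ic / ics.std(ddof=1) * np.sqrt(len(ics))
    return avg_ic, success_rate, t_stat

# Demonstration with random data: 10 years of months x 200 stocks.
rng = np.random.default_rng(0)
scores = pd.DataFrame(rng.normal(size=(120, 200)))
returns = pd.DataFrame(rng.normal(size=(120, 200)))
print(rank_ic_stats(scores, returns))
```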
IC Decay Profiles
It is important to consider the strength of the factor’s relationship with stock returns over
different investment horizons. Some factors exhibit strong short-term predictive power
that quickly decays. Other factors perform relatively poorly over the short-term but their
predictive power only dissipates slowly over time.

[Figure: Factor decay examples]


This is an important consideration as it impacts turnover and transaction costs. Factors
with rapid decay profiles require more turnover to keep the portfolio fresh and hence incur
higher transaction costs.

The process of calculating monthly IC decay profiles is simple:

 Calculate the series of monthly Rank ICs, as detailed above


 Perform the same process multiple times, lagging the stock return by one month
each time; typically, this process is done for stock returns up to 12 months from the
run date
 Average each Rank IC series.
A well behaved quant factor will have a relatively strong correlation between the factor
score and the next month’s return which then gradually declines over each month (eg the
correlation between the factor score and two month stock returns is slightly lower than the
correlation with one month stock returns).
Earnings revisions factors and short-term momentum factors tend to have rapid decay
profiles. At the other end of the spectrum, value factors tend to be “slow burn” in that their
predictive power only gradually dissipates over time.
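
A minimal sketch of the decay-profile calculation, under the same assumed month x stock
table layout as the Rank IC sketch above:

```python
import pandas as pd
from scipy.stats import spearmanr

def ic_decay_profile(factor_scores: pd.DataFrame,
                     monthly_returns: pd.DataFrame,
                     max_lag: int = 12) -> pd.Series:
    """Average Rank IC against returns lagged 1..max_lag months (illustrative
    sketch; assumes both tables cover the same months and stocks)."""
    avg_ics = {}
    for lag in range(1, max_lag + 1):
        ics = []
        for i in range(len(factor_scores.index)):
            if i + lag >= len(monthly_returns.index):
                break
            # Correlate this month's scores with the return `lag` months later.
            ic, _ = spearmanr(factor_scores.iloc[i], monthly_returns.iloc[i + lag])
            ics.append(ic)
        avg_ics[lag] = sum(ics) / len(ics)
    return pd.Series(avg_ics, name="avg_rank_ic")
```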
Multivariate Correlation Analysis
One potential problem with univariate correlation analysis is that factor returns can result
from risk tilts, rather than a pure exposure to the factor. This concept sounds complex but
is actually very simple.
Let’s assume that a factor performs strongly but it is heavily biased toward technology
companies and technology companies have materially outperformed other sectors over the
backtest period. The factor may have performed strongly because it tends to pick
technology companies and if technology stocks do not continue to outperform, the factor
may not work as strongly as the backtest results suggest.
Within countries, the key risk factors are sector and size. There can be prolonged periods
when some sectors outperform or underperform and when small caps stocks outperform
large cap stocks (and vice versa).
Multivariate correlation analysis provides an elegant way to neutralize risk tilts. Rather
than simply regressing factor scores against stock returns, we add additional independent
variables for each risk factor. Given sector and size are the key risk factors, we can add
independent variables for each GICS1 Sector (GICS is a commonly used sector and industry
classification taxonomy) and market capitalization. Each sector will be either 1 (if the stock
is in the sector) or 0 (if it isn’t in the sector). The other variables should be normalized with
a mean of 0 and a standard deviation of 1.
The regression coefficient for the factor being tested is known as the pure factor return.
Using technical jargon, this represents the return from a one standard deviation exposure to
the factor after controlling for or neutralizing the risk factor exposures.
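
A minimal sketch of one month's cross-sectional regression, using numpy and pandas; the
variable names and the use of ordinary least squares are assumptions:

```python
import numpy as np
import pandas as pd

def pure_factor_return(factor: pd.Series, size: pd.Series,
                       sector: pd.Series, fwd_return: pd.Series) -> float:
    """One month's cross-sectional regression of returns on the factor plus
    sector dummies and size, returning the factor's regression coefficient
    (the 'pure factor return'). Illustrative sketch only."""
    def zscore(x):
        return (x - x.mean()) / x.std(ddof=0)  # mean 0, standard deviation 1

    sector_dummies = pd.get_dummies(sector).astype(float)  # 1/0 per sector
    X = pd.concat([zscore(factor).rename("factor"),
                   zscore(size).rename("size"),
                   sector_dummies], axis=1).to_numpy()
    coefs, *_ = np.linalg.lstsq(X, fwd_return.to_numpy(), rcond=None)
    return coefs[0]  # coefficient on the normalized factor score
```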
It is also possible to use the same approach to measure whether a new factor exhibits
predictive power after allowing for risk factors and other return factors. If, for example,
you develop a new value factor, you could include all other value factors as independent
variables in the regression analysis to see if the new factor exhibits explanatory power
above and beyond the existing factors.

Portfolio Simulations
Conceptually it is difficult to relate to the various factor performance statistics generated
from correlation analysis. How does a Rank IC of 5% correspond to real-life performance?
To answer this question, we need to adopt a different approach to measuring factor
performance. This involves constructing simulated portfolios based on the relevant quant
factor and measuring the portfolios’ performance.
As always when it comes to measuring factor performance, there are lots of ways of doing
this. The simplest approach (sketched in code after the list) is to:

 Rank the stocks each month over the backtest period by the relevant factor
 Construct a notional long portfolio that comprises the highest ranked companies;
this could be, for example, the top quartile of companies
 Construct a notional short portfolio that comprises an equal number of lowest
ranked stocks
 Assuming an equal weighting methodology and monthly rebalancing, measure the
monthly performance of the notional long and short portfolios.
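
A minimal sketch of this quartile long/short simulation, assuming pandas and month x stock
tables of factor scores and next-month returns (the table layout is an assumption, not
specified in the book):

```python
import pandas as pd

def quartile_long_short(factor_scores: pd.DataFrame,
                        fwd_returns: pd.DataFrame) -> pd.DataFrame:
    """Equal-weighted top-quartile (long) and bottom-quartile (short) portfolio
    returns for each month, rebalanced monthly (illustrative sketch)."""
    results = []
    for month in factor_scores.index:
        scores = factor_scores.loc[month].dropna()
        rets = fwd_returns.loc[month]
        n = len(scores) // 4
        ranked = scores.sort_values(ascending=False)
        long_ret = rets[ranked.index[:n]].mean()    # highest-ranked quartile
        short_ret = rets[ranked.index[-n:]].mean()  # lowest-ranked quartile
        results.append({"month": month, "long": long_ret, "short": short_ret,
                        "long_minus_short": long_ret - short_ret})
    return pd.DataFrame(results).set_index("month")
```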
We can then analyze the performance of the notional long portfolio versus the notional
short portfolio. The performance consistency can be measured using success rates and t-
stats, as per the discussion on Rank IC performance.
We can also separately measure the performance of the notional long and short portfolios
versus an equal weighted market benchmark. Herein lies one of the key advantages of this
type of portfolio simulation: it is possible to measure how well the factor works at both ends
of the return spectrum. Some factors work better for identifying long (buy) opportunities
while other factors work better for identifying short (sell) opportunities. For example,
consensus broker recommendations exhibit more predictive power when identifying short
opportunities than long opportunities. This is likely due to analysts’ reluctance to issue sell
recommendations which are far rarer than buy recommendations.
In real life, equal weighting portfolios and rebalancing portfolios every month based solely
on the new set of factor scores isn’t realistic. We can incorporate more realistic portfolio
construction and rebalancing constraints into the notional portfolios to potentially generate
more indicative performance statistics. For example, we can:

 Weight position sizes based on the factor score and stock liquidity (eg give more
weight to companies with high factor scores and high liquidity)
 Adjust position sizes more regularly and in a more subtle way that minimizes
turnover and transaction costs
 Impose sector and other risk constraints so that portfolio performance is not driven
by outsized risk bets.

The disadvantage of doing this is the overall exposure to the quant factor in the notional
portfolios will vary significantly based on the various assumptions that are made and
constraints that are imposed. For example, if tight rebalancing constraints are incorporated
into the backtest simulations, the exposure to the relevant quant factor will decline during
periods when the quant factor scores change significantly. The choice of portfolio
construction and rebalancing constraints will also have a large bearing on backtest
performance which can make interpreting the performance data difficult.

In-Sample & Out-Of-Sample Testing


When you review backtest results, you will invariably identify some deficiencies. Maybe the
results aren’t as consistent as you would like, or the overall predictive power is less than
desired. This is to be expected given it is not easy to identify factors that perform strongly.
The natural reaction is to tweak the factor methodology to get better results. And if these
changes do not yield the results you want, you can keep tweaking!
The problem is if you test enough variants of a factor, you are likely to find something that
works by chance. This applies not only to backtesting factors but also quant strategies.
One way to address this issue is to develop and tweak your factors using one data set and
then test the factors using a totally separate data set. This is referred to as out-of-sample
testing.
Out-of-sample testing can be performed within the same market or in a different market
(or markets). Using the first approach, it is preferable to use multiple out-of-sample periods
over the backtest window rather than select one continuous time-period. This is because
market dynamics change over time and markets tend to become more efficient. Hence, a
factor which worked strongly over the first few years of the data set may not exhibit as
much predictive power over the later part of the backtest period.
Out-of-sample backtesting in the same market requires a lengthy data set. Preferably, both
in-sample and out-of-sample testing should be performed over different market cycles.
Out-of-sample testing in a different market works well if both the in-sample and out-of-
sample markets are comparable in terms of their size, composition and maturity. It would
not be valid, for example, to test a factor in-sample in a large developed market and then
perform the out-of-sample testing in a much smaller emerging market. In Europe, this tends
to be less of an issue given there are a relatively large number of developed markets that
are similar and even share the same currency. If you test a factor in-sample in Germany and
it works well out-of-sample in France, Denmark, Netherlands and Belgium, you can be
confident that the backtest results are substantive rather than the product of a data mining
exercise.

Lookahead Bias
A lookahead bias occurs when you use data in your backtest that was not available at the
run date. This issue is particularly pertinent when using factors that have been calculated
using data from a company’s financial statements. There is a considerable lag between a
company’s period end and when it reports its results. This lag varies by market and over
time (companies now tend to report their results closer to their period-end than in the past).
If your backtests assume a company’s financial data is available immediately after its
period-end, you will have a considerable lookahead bias that will corrupt your results. If it’s
not possible to source the date the data was made available, a conservative time lag should
be incorporated into your backtests.
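
A minimal sketch of applying such a conservative reporting lag with pandas; the 3-month
lag, column names, and dates are illustrative assumptions:

```python
import pandas as pd

# Hypothetical annual EPS history with fiscal period-end dates.
fundamentals = pd.DataFrame({
    "period_end": pd.to_datetime(["2019-12-31", "2020-12-31"]),
    "eps": [1.10, 1.25],
})

# Assume (conservatively) the data only became usable 3 months after period end.
REPORTING_LAG = pd.DateOffset(months=3)
fundamentals["available_from"] = fundamentals["period_end"] + REPORTING_LAG

# At each backtest run date, use only data whose availability date has passed.
run_dates = pd.DataFrame({"run_date": pd.to_datetime(
    ["2020-02-28", "2020-06-30", "2021-02-26", "2021-06-30"])})
merged = pd.merge_asof(run_dates, fundamentals.sort_values("available_from"),
                       left_on="run_date", right_on="available_from")
print(merged[["run_date", "period_end", "eps"]])
```

In this toy example the February run dates still see the previous year's EPS (or no data at
all), which is exactly the conservatism the text recommends.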
A lookahead bias can also corrupt backtest results when financial data is revised. This
occurs when a company restates its financial results.
The best way to avoid a lookahead bias is to use point-in-time data which includes the date
the data was released. Some data vendors provide this data in their historical data sets.

Short Sale Availability & Cost of Borrow


Any quant backtest that includes shorting stocks should incorporate an historical inventory
of stocks which can be shorted and the cost of shorting those stocks.
This data can be difficult to source. Often assumptions need to be made and it is important
these assumptions are conservative to ensure that the backtests do not overstate the
performance of the short portfolio.

Survivorship Bias
A survivorship bias arises when investable companies that should be included in the
backtest data are excluded because they have been de-listed or don’t currently satisfy the
selection criteria. This could occur, for example, due to corporate actions (eg takeover,
merger) or bankruptcy.
This issue is particularly relevant when testing momentum factors and strategies. Let’s
assume, for example, that you’re testing a price distress factor and the backtest data
excludes bankrupted companies. In this instance, the backtest results will be heavily biased
as a result of excluding price distressed stocks that not only don’t rebound but result in a
100% loss.

Combining Quant Factor Scores
Normalizing Factor Scores
Before combining quant factor scores, they need to be normalized using a common scale.
This is important because raw factor scores vary widely depending on the:

 Factor calculation methodology (eg Earnings Revisions normalized by Price versus


Earnings Revisions normalized by Forecast Standard Deviation),
 Data feeding into the calculation (eg Dividend Yield vs Book Yield), and
 Time-period over which the factor is calculated (eg 1 Week Total Return vs 5 Year
Total Return).
Factor scores can be normalized using either a uniform distribution or normal distribution.

[Figure: Normal distribution versus uniform distribution]


Ranking factor scores from best to worst results in a uniform distribution. The advantage
of this approach is its simplicity. It is very easy to rank raw scores and no adjustments are
required for outliers. The main disadvantage is that small changes in raw factor scores close
to the mean produce disproportionately large changes in the normalized (ranked) scores.
To generate a normal distribution, it is necessary to calculate z scores, which represent the
number of standard deviations the raw score is from the mean. With this approach, it is
important to deal with outliers as they can distort the distribution of normalized scores.
This can be done by capping extreme scores, a process known as winsorizing; the capping
and re-standardizing can be repeated several times to generate an approximately normal
distribution of scores.
My preference is to winsorize the raw factor scores to generate standardized scores. This
approach mitigates the impact of outlier scores and appropriately adjusts factor scores for
changes centered around the mean.
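
A minimal sketch of repeated winsorized z-scoring; the cap of three standard deviations and
the number of passes are assumptions:

```python
import numpy as np

def winsorized_zscore(raw_scores, cap: float = 3.0, passes: int = 3):
    """Standardize raw factor scores and cap outliers at +/- `cap` standard
    deviations, repeating the process a few times (illustrative sketch)."""
    scores = np.asarray(raw_scores, dtype=float)
    for _ in range(passes):
        scores = (scores - np.nanmean(scores)) / np.nanstd(scores)
        scores = np.clip(scores, -cap, cap)
    return scores

print(winsorized_zscore([0.1, 0.2, 0.15, 0.18, 25.0]))  # the outlier is capped
```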

Weighting the Normalized Scores
Factor scores should be weighted based on their predictive power and the correlation
between the factor returns.
Consider a simple example with 3 factors: Factor A, Factor B and Factor C. Factor A has the
strongest predictive power, followed by Factor B and then Factor C. Intuitively, it makes
sense to weight the factor scores accordingly, giving the most weight to Factor A and the
least weight to Factor C. But let’s assume that Factor B’s returns are highly correlated with
Factor A’s, while Factor C’s returns are uncorrelated with either factor. In this case, it makes
sense to reduce Factor B’s weight and increase Factor C’s weight to avoid double counting.
In an extreme case where Factor B’s returns are almost 100% correlated with Factor A’s and
Factor C’s returns are almost completely uncorrelated with the other factors, we would
reduce the weight of Factor B to zero.
To adjust for factor return correlations, each factor pair must be analyzed. The historical factor return series (derived by regressing factor scores against forward stock returns, as described in the Factor Backtesting section) can be correlated for each factor pair. An adjusted measure of factor performance can then be calculated and used to determine the appropriate weights to apply to the normalized factor scores.
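One possible sketch of such an adjustment (assuming monthly factor return series are held in a pandas DataFrame; the inverse-correlation scaling and the simple return-to-volatility ratio used as a proxy for predictive power are illustrative assumptions, not the only way to do this):

import pandas as pd

def correlation_adjusted_weights(factor_returns: pd.DataFrame) -> pd.Series:
    """Weight factors by predictive power, scaled down when a factor's returns
    overlap heavily with the other factors (to avoid double counting)."""
    raw_power = factor_returns.mean() / factor_returns.std()   # simple predictive power proxy
    corr = factor_returns.corr()
    n = len(corr)
    avg_corr = (corr.sum() - 1.0) / (n - 1)                     # average pairwise correlation per factor
    adjusted = raw_power * (1.0 - avg_corr)                     # penalize highly correlated factors
    adjusted = adjusted.clip(lower=0.0)                         # avoid counter-intuitive negative weights
    return adjusted / adjusted.sum()                            # normalize weights to sum to 1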

Exploiting Factor Combinations


Some factors work together in ways that cannot be fully captured with a simple linear
weighting scheme.
Consider value and certainty factors. A very cheap company that has extremely poor
earnings certainty may not be as attractive as a simple weighted average composite score
would suggest. Earnings certainty factors typically have little predictive power when
analyzed in isolation and hence would have a small weight in the composite score.
However, their predictive power is magnified when combined with extreme value scores. How
do we take advantage of this observation?
One approach is to run a stock screen that includes a hurdle for earnings certainty when
identifying potential value plays. A stock could be extremely cheap but if it doesn’t pass the
earnings certainty hurdle(s), it won’t be selected. This approach is binary: either the
company passes the screen or it doesn’t.
This approach may be suitable for fundamental investors who want to whittle down a large
universe of companies to a smaller list which are worthy of further examination.
Quants prefer to encapsulate all available information in a single score which represents the strength of the investment opportunity. This can be achieved by adjusting the factor weights for specific factor combinations (a simple sketch follows the list below). In addition to value and certainty factors, these combinations include:

•	Value and Growth
•	Price Distress and Short-Term Momentum
•	Momentum and Growth.
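As a simple sketch of such an adjustment (assuming value and certainty scores have already been normalized, with higher value scores meaning cheaper stocks and lower certainty scores meaning less certain earnings; the thresholds and the 20% penalty are purely illustrative assumptions):

def combination_adjusted_score(value_z: float, certainty_z: float,
                               base_score: float, penalty: float = 0.2) -> float:
    """Penalize the composite score when a very cheap stock (high value score)
    also has extremely poor earnings certainty (low certainty score)."""
    if value_z > 1.5 and certainty_z < -1.5:
        return base_score - penalty
    return base_score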

Putting It All Together


The process of combining quant factors involves:

•	Normalizing the raw factor scores
•	Calculating the predictive power of each factor
•	Calculating the factor return correlations for each factor pair
•	Adjusting for specific factor combinations.
This can be done formulaically. While this is a theoretically pure approach, it is not without
its shortcomings:

•	What factor return methodology do you use? As discussed, there are several ways to measure factor performance.
•	What return measurement time horizon do you use? Some factors exhibit strong short-term predictive power, but it decays rapidly. Other factors generate relatively poor short-term returns but their predictive power persists for several months.
•	Statistical methods for adjusting for factor return correlations can result in counter-intuitive weights. For example, factors that work well in isolation can be assigned negative weights. At the other end of the spectrum, excessively strong weights can be assigned to factors that have relatively poor predictive power but are statistically uncorrelated sources of excess returns.
Alternatively, you can have an experienced practitioner determine the factor weights
after reviewing the relevant empirical data and imposing a common-sense overlay. This is
my preferred approach.

Portfolio Construction and Rebalancing
Portfolio construction and portfolio rebalancing are very important components of the
overall investment process.
Many quants don’t spend enough time working out how to properly construct and rebalance portfolios. Often a lot of time and effort goes into working out what quant factors to use and how to combine the factors to build a robust stock selection process – only for the good work to be undone by a poorly thought-out and overly simplistic portfolio construction and rebalancing process.
For example, taking an equal weight in the top 10 companies based on your stock selection
model and then rebalancing the positions every month isn’t going to maximize risk-adjusted
performance. Unfortunately, simple approaches like this are often adopted by so-called
smart beta funds – and it is one reason why many are poor investments (refer to the
Appendix on Smart Beta for more information).

Portfolio Construction
In the world of quant factor investing, portfolio construction is all about maximizing
exposure to return factors while minimizing exposure to risk factors and transaction costs.
The return factors are the quant factors, such as value factors and earnings revisions factors,
which I discussed previously.
Risk factors, like return factors, explain stock returns but unlike return factors they are not
positively correlated with stock returns over time. If you invest across countries, the main
risk factor is country exposure. There are periods of time when some countries consistently
and materially outperform others.
Within countries, the biggest risk factor is sector. Relative sector outperformance and
underperformance can be significant and persist for prolonged periods. However, over the
long haul, sectors tend not to be consistently positively correlated with stock returns as
periods of outperformance are often followed by periods of underperformance (and vice
versa).
After sector, size is the most important risk factor. The relative performance of small cap
versus large cap stocks can be significant.
One technique for constructing quant portfolios is to use an optimizer. These are
programs which use different statistical techniques to find a “solution” which maximizes the
exposure to a composite return factor for a given level of risk. The different techniques
used by optimizers (eg quadratic programming, principal component analysis) are beyond
the scope of this book.
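To make the idea concrete, here is a heavily simplified sketch of a constrained optimization (assuming a vector of composite return scores and a covariance matrix have already been estimated; the risk-aversion parameter, the long-only constraint and the 5% position cap are illustrative assumptions, and commercial optimizers are far more sophisticated):

import numpy as np
from scipy.optimize import minimize

def optimize_weights(alpha: np.ndarray, cov: np.ndarray,
                     risk_aversion: float = 10.0, max_weight: float = 0.05) -> np.ndarray:
    """Maximize expected return minus a risk penalty, subject to simple constraints."""
    n = len(alpha)

    def objective(w):
        return -(w @ alpha - risk_aversion * w @ cov @ w)           # minimize negative utility

    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]   # fully invested
    bounds = [(0.0, max_weight)] * n                                 # long-only with a position cap
    result = minimize(objective, x0=np.full(n, 1.0 / n),
                      method="SLSQP", bounds=bounds, constraints=constraints)
    return result.x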
Using an optimizer sounds great in theory. Unfortunately, there is a wide gulf between
what works in theory and what works in practice. The inner workings of optimizers are
predicated on numerous simplifying assumptions which aren’t valid in real life. This is one
of the reasons why it is important to impose constraints on the optimization process to get

realistic solutions. And herein lies another problem with optimizers: changing these
constraints can result in vastly different portfolios and it’s often not possible to determine
with any precision why this is the case. Similarly, slight changes in the return factors and
risk model can result in portfolio changes which aren’t always intuitive. In many ways, an
optimizer is a black box.

Optimizers can be a black box


Another issue with optimizers is that risk factors can change, and temporary risk factors, which
could not have been foreseen when the risk model was calibrated, can have a significant
impact on stock returns. A good example is Covid-19. The virus was the key share price
driver in 2020 but the impact it had on markets could not be predicted based on historically
calibrated risk models.
An alternative approach is to construct portfolios manually. Positions can be sized based
on the strength of the return factors, while risk factor exposures are monitored and
constrained manually. For example, if a company rates strongly based on its return factors
scores but a large position would exacerbate an existing risk bias, the position size can be
manually dialed down. The largest positions will be taken in stocks which have strong quant
factor scores and risk characteristics that will improve the portfolio’s risk profile.
When constructing portfolios manually, a large number of positions is often required to
balance the portfolio across the sector and size risk dimensions. Breadth is also a very
important concept in quant as return factors have limited predictive power. This issue is
discussed in more detail in the Quant Pitfalls section of this book.

Rebalancing
Rebalancing is all about maintaining the appropriate return and risk factor exposures
while minimizing transaction costs and maximizing after-cost and after-tax returns.
Minimizing transaction costs doesn’t mean reducing trading frequency. Small frequent
trades are preferable to relatively large and infrequent trades. Small trades help mitigate
market impact costs (refer below). Frequent trades also help you trade based on the most
up-to-date data - so long as you have safeguards in place to protect against trading “noise”.
Trading based on noise rather than information content can occur when changes in quant
factors are driven by stale data. This can be particularly problematic for value factors.
Consider a company which has just reported a poor result. The company’s share price will
decline immediately but there will be a lag before the company’s historical financial data is
updated in your dataset. In the interim, the company will look more attractive based on
value factors using historic data. This lag is also an issue for value factors based on forecast
data as analysts will not update their forecasts straight away and often there is a lag
between when analysts update their forecasts and when the forecasts are submitted to
data vendors. Given this, it is advisable to review changes in quant factor scores after a
major event such as an earnings surprise. It is also advisable to use a data vendor that
provides the most up-to-date data.
The most important consideration when rebalancing a quant portfolio is transaction costs.
There are several components to transaction costs:

•	Brokerage – Brokers typically charge a fee for executing trades for investors. The good news is brokerage charges have declined significantly over the years, and some brokers in the United States no longer charge a brokerage fee.
•	Bid-Ask Spread – If you want to sell immediately, you need to sell at the highest bid. Conversely, if you want to buy immediately, you need to buy at the lowest offer price. The difference between the highest bid and the lowest offer is called the bid-ask spread. The spread tends to be inversely correlated with stock prices and stock liquidity. It also varies by country based on the relevant stock exchange rules governing minimum share price increments. In Hong Kong, for example, minimum share price increments are relatively large.
•	Market Impact – Whenever you buy and sell shares, you impact the share price. If the trade is small relative to the trading volumes for the relevant stock, the market impact (ie the impact on the share price) will be immaterial. Conversely, if the trade is large relative to trading volumes, the market impact will be significant. Consider the case where you want to buy 10,000 shares in a company but there are only 1,000 shares at the lowest offer price. It is likely you will have to push the share price beyond this level to execute the trade. You could be patient and not trade above the lowest offer, but you then risk having to buy later at a higher price. This is referred to as implementation shortfall. For many institutional investors, market impact and implementation shortfall costs are by far the biggest component of transaction costs.
•	Stamp Duty – Stamp duty charges vary by market. In some countries, there are no stamp duty charges (eg United States). In other countries, stamp duty charges are significant (eg United Kingdom) and materially impact the cost of executing share trades.
Rebalancing involves a trade-off between the opportunity cost of not trading and the cost
of trading. The opportunity cost of not trading depends on the magnitude of the change in
quant factor scores. The larger the change, the greater the potential opportunity cost
associated with not doing anything.
The cost of trading represents the transaction costs that will be incurred as well as any
impact a trade may have on after-tax returns. In some markets, capital gains tax discounts
are available for investments which are held for a pre-determined length of time (eg United
States and Australia). As a result, trading a stock may impinge on performance after all tax
obligations are accounted for.
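At its simplest, the trade-off can be framed as follows (a sketch only; estimating the expected alpha gain and the cost inputs is the hard part and is left as an assumption here):

def should_trade(expected_alpha_gain: float, round_trip_cost: float,
                 expected_tax_impact: float = 0.0) -> bool:
    """Trade only if the expected benefit of moving to the better-ranked stock
    exceeds the total cost of trading (transaction costs plus any tax impact)."""
    return expected_alpha_gain > round_trip_cost + expected_tax_impact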
Rebalancing may also be required to address portfolio risk imbalances. Exposure to risk
factors obviously change based on the trades which are executed. If you have a net long
exposure to technology stocks and you increase your position sizes in these stocks, your net
long bias will increase. What is less well understood is that exposures to risk factors can
change solely due to relative share price moves. For example, if you are net long technology
stocks and the technology holdings in your portfolio significantly outperform your other
holdings, the size of the net long position will increase. A risk bias may morph into a risk bet
which requires remedial action solely due to share price movements.

Factor Timing
Not all quant factor families work all the time. One defense against factor
underperformance is factor diversity. For example, if value factors don’t work, hopefully
other factors will more than compensate for the performance detraction.
Given the divergence in factor performance can be significant, it is tempting to adjust factor
weights to capture the potential upside from factor outperformance.

Value vs Momentum
The key factor performance drivers are typically value and momentum factors. For the
purposes of this analysis, momentum factors include what many quants refer to as earnings
momentum and which I refer to as earnings revisions in this book. These factors tend to
have a relatively high weight in a multi-factor quant stock selection process and their
performance is often inversely correlated. In other words, when momentum factors
perform strongly, value factors often detract from performance (and vice versa). Hence any
debate on factor timing tends to focus on these factors.
Adjusting the relative weights assigned to value and momentum factors incurs additional
transaction costs. Consequently, if any factor outperformance is fleeting, it is unlikely that a
factor timing model will improve after-cost returns.
The good news – from a factor timing perspective – is there are often prolonged periods
when the performance difference between momentum and value factors is significant.
For example, in the late 1990s momentum factors massively outperformed. This was
followed by a period spanning almost 7 years when value factors gained the upper hand.
More recently, since 2010 momentum factors have exhibited the strongest predictive
power.

Potential Problems with Factor Timing Models


One significant potential problem with factor timing models is over-fitting (or data
mining) due to the lack of “breadth”. Even with a long data history, there are a limited
number of periods when there has been a regime change that significantly favors value over
momentum and vice versa. Any model tweaking therefore that successfully predicts one of
these periods will significantly alter the backtest results.
Structural changes – such as the emergence of the Internet, as well as the decline in
interest rates and bond yields – which potentially drive the relative performance of value
and momentum factors also complicate matters. The Internet has structurally benefited
some companies which enjoy significant barriers to entry and economies-of-scale. Low
interest rates also make these companies – which can potentially grow their earnings at a
supernormal rate – more attractive as a lower discount rate is applied to future profits.
Given this, a factor timing model calibrated during a period when interest rates were more
than 10% and the internet was only known to a small number of boffins is unlikely to exhibit
any predictive power in today’s market environment.

Factor Return Persistence
I mentioned that there have been prolonged periods when value has outperformed
momentum and vice versa. This observation is supported by backtest results with value and
momentum factor return autocorrelation being statistically significant across global equity
markets.
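A quick way to check this persistence (a sketch assuming a monthly factor return series is held in a pandas Series; lag-1 autocorrelation is used purely for illustration):

import pandas as pd

def factor_return_autocorrelation(monthly_factor_returns: pd.Series, lag: int = 1) -> float:
    """Lag-N autocorrelation of a factor return series; persistently positive values
    suggest recent factor performance tends to continue."""
    return monthly_factor_returns.autocorr(lag=lag)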
Given this, it makes sense to favor factors that have recently been working in the
expectation this outperformance will continue.
It is always advisable to question the logic or intuition underpinning any empirical finding.
This can help determine the likelihood of “history repeating” and hence the usefulness of
the backtest results. Potentially any regime change that supports the relative
outperformance of factors is likely to continue for an extended period as policy makers
don’t like the idea of flip-flopping. Further, investors like chasing themes that work and are
unlikely to give up on a winning strategy. For example, investors who are profiting from
momentum trades are likely to continue with this strategy, thereby fueling further demand
for momentum stocks.

Factor Performance Reversals


Factor performance backtest results also show that reversals in factor performance can be
sudden and severe. This occurred, for example, in the year 2000 when investors’
infatuation with technology stocks suddenly ended. This resulted in a severe reversal in the
performance of momentum and value factors. Similarly, following a period of extreme risk
aversion that continued through 2008 and up until mid-March 2009, investors suddenly
embraced risk and the relative performance of momentum and value factors suddenly
flipped.
Predicting when the relative performance of factors is likely to change is probably more
art than science. Quants don’t like this concept, preferring to let the numbers drive the
decision making process.
One thing that quant investors can look at is the characteristics of the best and worst
stocks ranked by a particular factor. In early 2009, for example, stocks with strong
momentum were predominantly from defensive sectors such as Utilities and Consumer
Staples, while stocks with poor momentum were predominantly from highly cyclical sectors
such as Financials. This type of bias is potentially risky as any reversal in market sentiment
can lead to a sudden reversal in factor performance.
Quants typically focus on the valuation attributes of stocks when analyzing the risks posed
by factor exposures. Cliff Asness, co-founder of AQR, often references factor valuation
spreads.
When the valuation spread between the best and worst stocks ranked by a particular
factor is extremely wide relative to historical values, this can be a warning sign. If, for
example, stocks with strong momentum look incredibly expensive compared to stocks with
poor momentum, investors should question how much more expensive the high

momentum stocks can get. Similarly, if stocks which are cheap look incredibly cheap
relative to stocks which are expensive, this may be a sign that the underperformance of
value factors has run too far. Think of it as an elastic band which is stretching wider and
wider. Eventually it will reach a breaking point. The challenge is trying to work out where
that point is.
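A minimal sketch of measuring such a spread (assuming a pandas DataFrame with illustrative column names for the momentum score and earnings yield; quintiles and medians are just one reasonable choice):

import pandas as pd

def valuation_spread(df: pd.DataFrame, factor_col: str = "momentum_score",
                     value_col: str = "earnings_yield") -> float:
    """Median valuation of the top factor quintile minus the bottom quintile.
    Compare today's spread with its own history to judge how stretched it is."""
    quintile = pd.qcut(df[factor_col], 5, labels=False)
    top = df.loc[quintile == 4, value_col].median()
    bottom = df.loc[quintile == 0, value_col].median()
    return top - bottom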

Factor Timing using Calendar Anomalies


A calendar anomaly is a mispricing anomaly that is associated with a day of the week,
time of the month or time of the year.
The best-known calendar anomalies are tax loss selling, window dressing and the January
effect. All three have implications for the performance of value and momentum factors.
Tax loss selling refers to the tendency of investors to hold on to winners to avoid crystalizing a
capital gain and to sell losers to realize a capital loss towards the end of the tax year. This
boosts the performance of momentum factors.
Window dressing occurs at the same time of year. It refers to fund managers’ tendency to
improve the appearance of their portfolio holdings before they are published by selling
underperforming companies and buying outperforming companies. This also benefits
momentum factors.
The January effect refers to investors’ tendency to embrace risk at the start of a new year.
This can result from investors reversing trades based on prior tax loss selling and window
dressing. At the start of a new year, investors can also buy underperforming value stocks
and still have time for any potential turnaround to materialize before the end of the year.
The end-result is that at the start of a new year value factors tend to perform strongly
while momentum factors underperform.
The last point to make about the January effect – and all calendar anomalies – is they’re
easy for investors to monitor and exploit. Hence the likelihood of any excess factor returns
being arbitraged away is greater than usual. There is already evidence that the weight of
money seeking to exploit the outperformance of momentum factors in December and value
factors in January has dampened the performance impact. Indeed, based on recent
empirical data, it appears that investors need to be positioned for the January effect in late
December and that the outperformance of value factors only occurs over a few days at the
start of the year.

Summary
Timing the performance of value and momentum factors based on calendar anomalies has
strong intuitive appeal and is a valid strategy.
Conversely, tactical factor timing models should be viewed with extreme caution. The
challenges posed by over-fitting and market structural changes are difficult to overcome.

A potential strategy is to review recent factor family performance and then analyze the
characteristics of the best and worst stocks ranked by each factor. If a factor family has
been outperforming and the valuation spread for that factor still looks favorable relative to
history, it may be a candidate for taking an overweight position.
Alternatively, investors can maintain strong factor diversity with static factor weights,
preferably using a robust stock selection process that has been properly calibrated based
on in-sample and out-of-sample backtests. These weights can then be adjusted at the
margin to exploit calendar anomalies. This is a simpler approach that may suffer from
periods of underperformance, but should deliver strong risk-adjusted performance over
time.

Event Anomalies
So far, I have focused on quant factors that apply to all companies at all times and are
(approximately) normally distributed. There are other quant factors that also exhibit
predictive power but can only be calculated when specific events occur. These events are
earnings surprises, stock buybacks, dividend ex-dates, and insider trades.

Earnings Surprises
Following positive earnings surprises, companies tend to continue outperforming, and
following negative earnings surprises, companies tend to continue underperforming. This is
known as post-earnings-announcement drift.
The most common way of measuring earnings surprises is to divide the difference
between reported EPS and the consensus forecast EPS by the standard deviation of
analyst estimates. This is known as Standardized Unexpected Earnings. This approach is
intuitive in that it gives more weight to earnings surprises when analysts’ forecasts are
tightly grouped together (and vice versa). It also generates more robust signal scores than
percentage changes which can be very noisy, particularly for companies where either the
reported EPS or the consensus forecast EPS is close to zero.
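As a minimal sketch of the calculation (illustrative only; in practice a floor on the estimate dispersion is usually applied to avoid dividing by values close to zero):

def standardized_unexpected_earnings(reported_eps: float, consensus_eps: float,
                                     estimate_std_dev: float) -> float:
    """SUE: the earnings surprise scaled by the dispersion of analyst estimates."""
    return (reported_eps - consensus_eps) / estimate_std_dev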
One problem with comparing reported EPS with the consensus forecast EPS is it ignores
earnings quality. To illustrate: the reported EPS may be higher than analysts’ estimates but
if it is driven by a decline in the company tax rate, rather than higher sales, it may be
deemed to be a poor quality beat and not a true earnings surprise. Alternatively, a company
could report a strong result, but the outlook commentary may be negative which tempers
the positive surprise.
An alternative approach is to measure the market’s reaction to an earnings
announcement (ie whether the share price outperforms or underperforms). This can be
done over a three-day window before and after the announcement date. It is important to
note that in many cases companies report their results after the close of trade and that the
three-day measurement window needs to be adjusted accordingly.

Stock Buybacks
A stock buyback occurs when a company purchases its own shares. Buybacks can be
conducted on market and off market. Off market buybacks are relatively rare and are
usually conducted for company specific reasons that are difficult to model. Hence, quants
typically focus on buybacks which are conducted on the open market.
The potential advantages of stock buybacks are numerous:

•	Signaling – A buyback indicates that the company’s management believes its share price is undervalued.
•	Increased demand for shares – Buybacks increase demand for the company’s shares, which exerts upward pressure on the share price.
•	Tax advantages – Buybacks offer tax advantages in countries where capital gains are taxed at a lower rate than dividends (eg United States).
•	Increase in EPS – Buybacks reduce the number of shares on issue and, so long as the company’s earnings yield exceeds its after-tax cost of funding, will increase earnings per share. In a low interest rate world, this is almost always the case.
The potential advantages of buybacks are clear. However, buybacks are only useful
predictors of future share price outperformance if the market underreacts to the buyback
announcement. Empirical evidence indicates this is the case. Like all market anomalies,
however, the more investors become aware of this mispricing opportunity, the more likely it
will be arbitraged away.
It should also be noted that the rules governing stock buybacks vary by country. In some
countries, companies can announce a buyback but not actually purchase any shares. In
other countries, share purchases are only announced after they have been executed.

Dividend Ex-Dates
A dividend ex-date is the date that investors are no longer entitled to a dividend payment.
The share price fall on the dividend ex-date relative to the size of the dividend is known as
the dividend drop-off. This ratio should be adjusted for the overall market move to
determine if there are any mispricing opportunities that can be exploited immediately
before and after the dividend ex-date.
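A sketch of the market-adjusted drop-off calculation (the inputs are assumed to be the cum-dividend close, the ex-date close, the dividend per share and the market return on the ex-date):

def market_adjusted_drop_off(cum_price: float, ex_price: float,
                             dividend: float, market_return: float) -> float:
    """Dividend drop-off ratio: the share price fall on the ex-date as a fraction of
    the dividend, after stripping out the overall market move."""
    expected_ex_price = cum_price * (1.0 + market_return)   # price had there been no dividend
    return (expected_ex_price - ex_price) / dividend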

If the dividend drop-off is below 100% (after adjusting for market moves), investors can
potentially buy just before the dividend ex-date and sell on the dividend ex-date. As with
any short-term trading strategy, transaction costs are potentially problematic.
Another issue is tax. In many countries, dividends are taxed at a higher rate than capital
gains. Overseas shareholders typically also pay withholding taxes on dividend payments.
Complicating matters further, in some countries, dividend payments can include tax credits
(eg Australia) but there are rules in place to prevent investors from being able to use these
credits for short-term trades.
Fortunately, there is another dividend ex-date mispricing opportunity which is easier to
exploit: shares tend to outperform leading into dividend ex-dates over a period of
approximately 30 calendar days. This is referred to as the dividend run-up anomaly. This
market anomaly is robust in that it tends to work across different markets and different
sectors within markets. The 30-day window is also sufficiently long to generate decent
after-cost returns. As expected, the outperformance leading into dividend ex-dates tends to
be greatest for companies with high dividend yields.

Insider Trades
Insider Trading has two definitions: illegally trading based on material non-public
information, and trading by company “insiders” such as directors and substantial
shareholders. Not surprisingly, I only focus on the second definition.

Insider trades don’t include illegal trades using non-public information
Insider trades have strong intuitive appeal. If someone with intimate knowledge of a
company’s business is buying shares, one would think it is a good buy signal (and vice versa).
It is important to understand the motivation for the insider’s trade, especially for sell
trades which may be driven by portfolio diversification or tax considerations which are
unrelated to the company’s outlook. Insider buy trades are more likely to be driven by a
belief the company’s share price is undervalued. Some company directors, however, may
be driven by a desire to send a positive signal to the market, especially if the company’s
share price has been underperforming and it has been attracting negative publicity.
It is also important to distinguish between insider buy trades which are transacted on the
share market and trades which don’t involve any payment. Often company directors are
awarded shares as part of their compensation package if they meet pre-defined
performance hurdles. There is little or no information content in these “trades”.
Insider trades can be measured based on the absolute size of the trades or relative to the
insider’s existing holding. Large insider trades which are also large relative to the directors’
existing holding tend to exhibit the most predictive power.
Despite their intuitive appeal, the backtest results for insider trading signals are relatively
weak. This probably reflects the difficulty in systematically determining the motivation for
insider trades. Given this, it makes sense to be aware of insider trades and then manually
review their significance.

Quant Factor Investing Pitfalls
Building and implementing a robust quant factor investment process is fraught with
challenges. There are lots of investors trying to identify and exploit mispricing opportunities
and if you want to outperform the competition, it is important to avoid some common
pitfalls.

Data Mining
Data mining – also referred to as overfitting and data snooping – is the biggest potential
pitfall. Everyone wants strong backtest results and it’s not difficult to achieve this goal if
you test a multitude of factors, factor combinations and investment strategies.

Data mining is a potential backtesting pitfall


In the Factor Backtesting section of this book, I mentioned that factor methodologies are
typically modified to generate robust backtest results, and that this tweaking can take place
over several iterations. This also applies to all aspects of a quant investment process,
including the weights assigned to quant factors and portfolio construction constraints.
This is not necessarily a bad thing – so long as precautions are taken against overfitting.
The first safeguard is to apply a common-sense overlay. Does the factor or strategy you’re
testing have intuitive appeal as a predictor of stock returns? Ideally, you should be able to
identify the market anomaly or mispricing opportunity you’re seeking to exploit. A factor
such as the change in the number of employees divided by working capital, or a strategy that involves buying all stocks on Tuesdays which have strong value scores, so long as the company name doesn’t start with ‘F’, would not satisfy this criterion. Conversely, a
turnaround strategy focused on buying price distressed stocks with strong value and short-
term sentiment scores has an identifiable mispricing opportunity: investors are reluctant to
embrace credible turnaround stocks because of the reputational risks associated with
buying price distressed stocks which are out-of-favor.

The next safeguard is to perform both in-sample and out-of-sample testing. If the out-of-
sample results confirm the in-sample findings, your backtest results are less likely to be the
product of a data mining exercise.
Finally, you should avoid having excessive layers of complexity. As a general rule, you
should keep it simple. Avoid having too many factors in the stock selection process and
having rules which are tailored to specific stock groups. Unless you can identify why one
variant of the process works for some stocks and not others, keep the investment process
as generic as possible.

Insufficient Breadth
To understand why this is important, we need to consider the Fundamental Law of Active
Management:
IR = IC * sqrt(Breadth)
IR = Information Ratio and IC = Information Coefficient.
This law was developed by Richard Grinold and Ronald Kahn in their seminal book Active
Portfolio Management. It states that, based on several assumptions, risk-adjusted
performance as measured by the Information Ratio (IR) is a function of the predictive power
of your investment process as measured by the Information Coefficient (IC) and the number
of independent “bets” (Breadth).
The goal is to maximize IR. We can do this by increasing the IC and/or Breadth.
The problem with increasing IC is it’s hard. The harsh reality is most quant factors only
exhibit slight predictive power. In the Backtesting section of this book, I identified the
Average Rank IC as a key performance measure. Quant factors – even complex factors
based on sophisticated methodologies - rarely have a one month forward Average Rank IC
above 7%.
Broadly speaking, markets are efficient at exploiting readily identifiable mispricing
opportunities based on quant factor bets. We can increase the predictive power of our
investment process by combining and weighting factors in a smart way and taking
advantage of specific factor combinations. But there is a limit to what can realistically be
achieved.
An easier way to maximize IR is to increase Breadth. Consider, for example, a biased coin
with a 53% chance of tossing Heads. If you toss the coin a few times, the final result won’t
necessarily reflect the bias. However, if you toss the coin hundreds of times, the bias will
likely work as expected.

[Chart: Power of Breadth (53% Success Rate) – probability of success versus number of coin tosses (0 to 500)]

The probability of success increases with breadth
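The intuition behind this chart can be reproduced with a binomial distribution (a sketch only; the 53% edge mirrors the biased coin above and odd toss counts avoid ties):

from scipy.stats import binom

def probability_of_majority_heads(n_tosses: int, edge: float = 0.53) -> float:
    """Probability that a biased coin produces strictly more Heads than Tails."""
    # P(Heads >= (n+1)/2) via the binomial survival function
    return binom.sf((n_tosses - 1) // 2, n_tosses, edge)

# The edge is barely visible over a handful of tosses but dominates over hundreds
print(probability_of_majority_heads(11))    # a little under 0.6
print(probability_of_majority_heads(501))   # over 0.9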


Given quant factors and models only slightly skew the odds in your favor – much like the
biased coin – it is clear we want to take as many independent “bets” as possible. This is best
done at a stock level. There are lots of stock investment opportunities, either within one
market or, even better, across multiple markets.
A systematic quant factor investment process that has concentrated stock positions is
unlikely to succeed. High conviction investing is better suited to other investment styles,
such as fundamental investing.

Taking Risk Bets


Not only is it important to have numerous stock positions due to the importance of
Breadth, stock bets should be spread as evenly as possible across key risk dimensions.
To illustrate: let’s assume you have a large number of positions in the portfolio but they’re
all technology companies (and your aim is to beat the overall market). It doesn’t matter
how strong your stock selection is as your performance will be driven by your long bet on
technology. If technology companies underperform the broader market, you will almost
certainly underperform (and vice versa).
You could counter that this isn’t an unwanted risk bet as you have a quant model that
predicts technology companies will outperform. Unfortunately, this argument isn’t valid –
again because of Breadth and the Fundamental Law of Active Management. Given there are
only a limited number of sectors to choose from, you would need a sector selection model
with an extremely high IC to generate strong risk adjusted performance. These models do
not exist.
The goal of any systematic quant factor portfolio construction process should be to
generate returns via a diversified stock portfolio within a tight risk-constrained
framework. If you take risk bets such as a large long bet on technology companies, they will
drive performance and override a superior source of excess returns: stock selection.

Potential Overcrowding
A crowded stock position is one in which lots of investors who share a common
investment thesis hold the same position. Although different quant factor funds claim to
have their own unique (and superior) investment process, there is significant commonality
in their stock selection models. Consequently, they tend to favor the same stocks.

Overcrowding can be dangerous!


Overcrowding is problematic when investors decide to exit or reduce a position at the
same time. This magnifies price swings and can result in significant losses. This can be
particularly dangerous for short positions as price increases result in increased position sizes
which can then result in even more aggressive short covering.
In the world of quant factor investing, there is no better example of the perils of
overcrowding than what occurred in August 2007. It was such a momentous event, it is
commonly referred to as the Great Quant Unwind. Significant overcrowding and then
forced liquidity unwinds resulted in unprecedented short-term losses for quant factor funds.
(For more information, please refer to the Great Quant Unwind Appendix).
A more recent example occurred in November 2020 when a sudden spike in Treasury yields
resulted in outperforming tech stocks – which were widely owned by institutional and retail
investors – suddenly underperforming.
Unfortunately, it is difficult to obtain reliable and detailed data on crowded stock
positions. Often anecdotal evidence provided by stockbrokers is the best source of
information. This lack of visibility potentially magnifies the risk.
The risk of overcrowding is greatest when a particular investment style is popular.
Currently, quant factor investing is out-of-favor and hence there is less risk of overcrowded
quant positions. Nevertheless, it is something that quants should monitor, particularly
when the market environment changes and we witness a resurgence in the popularity of
quant factor investing.

Appendix A: DCF Methodology
A discounted cashflow (DCF) valuation methodology compares a company’s share price with
future cashflows which are discounted back to today’s dollars using an appropriate discount
rate. The sum of these discounted cashflows is the Net Present Value (NPV). The discount
rate such that the NPV equals the current share price is the Internal Rate of Return (IRR).

Advantage of DCF Methodology


The biggest advantage of a DCF methodology is that, by considering future cashflows, it
can be used to properly value companies with strong future growth potential. To
illustrate, consider two companies: Company A, an online retailer, and Company B, a
television broadcaster. Company A has very small earnings relative to its share price but is
likely to generate strong earnings growth in the future. Company B has very strong earnings
relative to its share price but its future earnings outlook is structurally challenged. Based on
traditional valuation ratios, Company A will look much more expensive than Company B, but
it isn’t necessarily a worse investment.
By considering future earnings potential, a properly constructed DCF model can allow for
the growth bias and effectively value both low growth and high growth companies.
While this sounds good in theory, it is difficult to achieve in practice. It is important to get
the right data to feed into the DCF model and to configure the model properly.

DCF Methodology Issues


What “Cashflows” to Use?
The theoretically pure approach to valuing stocks is to discount future dividends. This
values stocks in the same way as bonds, with dividends representing the bond coupon
payments. As stocks don’t have a maturity, a terminal value is typically incorporated into
the DCF methodology.
In practice, discounting dividends doesn’t work for companies which pay out a low
proportion of their earnings as dividends (ie companies with a low payout ratio).
Discounting future dividends tends to undervalue companies with a low payout ratio and
overvalue companies with a high payout ratio. This issue has become more problematic
with many companies, particularly in the United States, preferring stock buybacks over
dividend payments.
Terminal Value
A terminal value is the net present value of all future cashflows assuming a constant
growth rate. It assumes cashflows continue indefinitely and the growth rate is constant –
which is potentially problematic. It means that the terminal value is very sensitive to the
growth rate and the discount rate. A slight change in either variable can result in huge
changes in the terminal value. And if the growth rate exceeds the discount rate, the
terminal value is infinite.

Estimating Growth
While a DCF methodology can be used to value low growth and high growth companies,
estimating future growth is very difficult. As with any model, garbage in, garbage out
applies. The fact DCF models are highly sensitive to growth assumptions exacerbates this
problem.
Calculating the Discount Rate
The discount rate should be the risk-free rate plus some additional return required to
compensate for the risk of investing in the company. Based on finance theory, or more
specifically, the Capital Asset Pricing Model (CAPM), there is only one measure of risk that
matters: the stock’s market beta. This is calculated by regressing historical stock returns
against market returns (and is the slope of the line of best fit based on this regression
analysis). This can be done over different time periods, using different data frequencies and
using different market proxies. Herein lies one of the problems with beta: there is no
universally accepted calculation methodology. Further, CAPM is based on a host of
simplifying assumptions and the overall conclusion from this model – expected stock returns
should solely be a function of beta – isn’t supported by empirical evidence.
Further complicating matters, the NPV is highly sensitive to the discount rate. Given this,
some practitioners prefer to either use a fixed discount rate for all companies when
calculating the NPV, or calculate the IRR and use this as the valuation measure.

Sample DCF Model


The following model deals with the various DCF methodology issues in a practical way
such that it generates a meaningful valuation score.
The model utilizes annual cashflows over 3 phases.
Phase 1
Phase 1 comprises the number of years for which consensus DPS and EPS forecasts are
available. This ranges from 2 to 4 years. The model cashflows comprise the consensus DPS
forecasts.
Phase 2
Phase 2 continues out to year 7. The model cashflows comprise notional DPS payments.
These are derived by calculating EPS forecasts and applying a notional dividend payout ratio.
The EPS forecasts are determined by taking the last EPS forecast from Phase 1 and applying
a company specific EPS growth rate. This growth rate is based on the last 5 years of
historical EPS data and all available EPS forecasts. An algorithm is then used to calculate the
underlying growth rate over this time frame. The algorithm penalizes companies which
have highly volatile earnings and rewards companies which have consistently growing
earnings. A capped natural log growth methodology is used to do this.
A notional payout ratio is then applied to the EPS forecasts. The payout ratio is calculated by
taking the average payout ratio from Phase 1 and increasing it linearly each year such that it

is 100% by year 25. The model cashflows comprise the notional DPS payments after
applying the payout ratio to the EPS forecasts.
Phase 3
Phase 3 comprises years 8 to 25. The model does not have a terminal value.
In Phase 3, a 5% EPS growth rate is applied to all companies. This growth rate is applied to
the last EPS forecast in Phase 2. The linearly increasing payout ratio is then applied to these
forecasts to calculate the notional DPS payments. The model cashflows comprise these
payments.
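A heavily simplified sketch of this three-phase structure (assuming the Phase 1 DPS forecasts, the last Phase 1 EPS forecast, the Phase 2 growth rate and the starting payout ratio have already been estimated; the 8% discount rate and the linear payout ramp are illustrative assumptions rather than the model’s actual calibration):

def three_phase_npv(phase1_dps: list[float], last_eps: float, phase2_growth: float,
                    base_payout: float, phase3_growth: float = 0.05,
                    discount_rate: float = 0.08, final_year: int = 25) -> float:
    """Sum of discounted notional dividends over 3 phases, with no terminal value.
    The payout ratio rises linearly towards 100% by the final year."""
    npv, eps = 0.0, last_eps
    for year in range(1, final_year + 1):
        if year <= len(phase1_dps):                          # Phase 1: consensus DPS forecasts
            dps = phase1_dps[year - 1]
        else:                                                # Phases 2 and 3: notional DPS
            growth = phase2_growth if year <= 7 else phase3_growth
            eps *= 1.0 + growth
            payout = min(1.0, base_payout + (1.0 - base_payout) * year / final_year)
            dps = eps * payout
        npv += dps / (1.0 + discount_rate) ** year
    return npv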

Why Isn’t there a Terminal Value?


This is a good question as shares, unlike bonds, don’t have a finite life. The main problem
with using a terminal value is it greatly increases the model’s duration and makes the model
output extremely sensitive to the model parameters, particularly the growth rate and the
discount rate. If the model parameters could be calculated with precision, this wouldn’t be
a problem. Unfortunately, this isn’t the case. There are a lot of unknowns and numerous
simplifying assumptions that need to be made.
It is also assumed that by Year 25 (the last year in the model), the company is not retaining
any earnings to fund future growth as it is paying out all its earnings as dividends.

Appendix B: Pro-Rating Data
It is important to pro-rate financial data to deal with different company fiscal year-ends.
Company fiscal year-ends are often the same as the tax year-end in the country in which the
company is domiciled. However, this is not always the case and company fiscal year-ends
vary within countries and, even more so, across countries.

To facilitate a proper comparison of company financial data – and any quant factors based
on that data – it is important to ensure the same time-period is used. When analyzing
forecast data, quants like to look at data on a one-year forward basis. This involves
combining forecasts from different financial years.
Assuming only annual forecasts are available and:

•	FY1 Date = Next Forecast Year-End (eg 31 December 2021)
•	FY2 Date = Forecast Year-End after FY1 Date (eg 31 December 2022)
•	FY3 Date = Forecast Year-End after FY2 Date (eg 31 December 2023)
•	FY1% = Percentage of FY1 Forecast
•	FY2% = Percentage of FY2 Forecast
•	FY3% = Percentage of FY3 Forecast
•	1YrF = One-Year Forward Forecast
If FY1 Date >= Run Date:
FY1% = (FY1 Date - Run Date) / 365
FY2% = 1 – FY1%
1YrF = FY1% * FY1 Forecast + FY2% * FY2 Forecast
Given the lag between a company’s fiscal year-end and when it reports its results, often the
Run Date is greater than the FY1 Date. When this is the case:
FY1% = 0
FY2% = (FY2 Date – Run Date) / 365
FY3% = 1 – FY2%
1YrF = FY2% * FY2 Forecast + FY3% * FY3 Forecast
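A minimal sketch implementing the pro-rating logic above (dates are assumed to be Python date objects; leap years and non-annual forecast periods are ignored for simplicity):

from datetime import date

def one_year_forward(run_date: date, fy1_date: date, fy2_date: date,
                     fy1: float, fy2: float, fy3: float) -> float:
    """Pro-rated one-year forward forecast combining two adjacent fiscal years."""
    if fy1_date >= run_date:
        fy1_pct = (fy1_date - run_date).days / 365.0
        return fy1_pct * fy1 + (1.0 - fy1_pct) * fy2
    # FY1 has already ended but the dataset has not yet rolled forward to the next year
    fy2_pct = (fy2_date - run_date).days / 365.0
    return fy2_pct * fy2 + (1.0 - fy2_pct) * fy3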
For some companies, only FY1 and FY2 forecasts are available. This tends to vary by country
and also based on each company’s market capitalization (analysts tend to generate longer
term forecasts for large capitalization companies).
If FY3 data is not available, an appropriate growth rate can be applied to the FY2 forecasts
to generate notional FY3 forecasts.
Generating two year forward pro-rated forecasts always requires FY3 forecasts and, if the
FY1 Date is not greater than the Run Date, FY4 forecasts are required. The availability of FY4
forecasts is limited and hence it is often necessary to use notional forecast data.
It should also be noted that if quarterly or half-yearly forecasts are available then the pro-
rating formulas should be adjusted accordingly.
Appendix C: Using Data
At a minimum, quants require robust:

•	Market Data: eg price and volume data
•	Historical Financials Data: P&L, balance sheet and cashflow data
•	Forecast Data: financial data forecasts, analyst recommendations and target prices.
There are several vendors that provide this data, the largest being Refinitiv, Bloomberg,
FactSet and S&P. The pros and cons of each vendor’s data offering are beyond the scope of
this book. I focus on some of the key criteria for evaluating data vendors and the challenges
associated with using this data for calculating quant factors.
I conclude this Appendix with a brief discussion of alternative data and its usefulness for
developing predictive quant factors.

Evaluation Criteria
Data History
To facilitate robust in-sample and out-of-sample testing, a lengthy data history is
required.
This issue is particularly pertinent for forecast data. This data is required to calculate
sentiment factors and numerous value factors. The first data vendor that collected analyst
forecast data was the Institutional Brokers' Estimate System (I/B/E/S) which is now owned
by Thomson Reuters. I/B/E/S started collecting this data in the United States in 1976 and
in major international markets in 1987.
Originally the number of data fields was limited to annual earnings estimates and long-term
earnings growth forecasts. During the first few years, the data was sketchy, both in terms of
coverage and data consistency. Over time, the data became more reliable and additional
data fields were added.
Other data vendors started to collect analyst estimates after I/B/E/S and have a shorter data
history.
Data Frequency
When discussing data frequency, it makes sense to distinguish between market data,
financials data and forecast data.
Share price and volume data is available for individual trades. This is referred to as tick
data (or tick-by-tick data). It is now also possible to source a history of best bid and best
offer data.
As is usually the case, the longest data history is for US equities. Specialist data vendors
such as tick.com also provide this data for major international equity markets from 2008.

Tick data is used to develop high frequency trading algorithms that can potentially exploit
very short-term pricing anomalies based on liquidity flows. This type of trading is outside

the scope of this book and quant factor investors typically only use tick data to build market
impact costs models.
For backtesting most quant factors, daily Close-of-Trade data or Volume Weighted Average
Price (VWAP) data is sufficient. A lengthy history of this data is readily available for all major
equity markets.
The frequency of company financial data reporting varies by country. The United States
has quarterly reporting while most other countries have semi-annual reporting. There was a
trend towards more frequent reporting of company financial data but, more recently, this
trend has reversed. For example, in 2007 UK public companies were required to issue
quarterly, rather than semiannual, financial reports, but this requirement was rescinded in
2014. Now, European companies are only required to report financial data every six
months, although many larger companies also provide quarterly updates in line with US
reporting standards.

Most data vendors provide historical company financial data based on reporting
frequency. Hence, it is not a point of differentiation. The timeliness of this data is a much
bigger issue. More on this later.
Granularity
Granularity is primarily an issue with forecast data. In addition to consensus data, some
vendors offer – for a large additional fee – a history of each analyst’s forecasts. This facilitates the development of more sophisticated forecast revisions factors. To illustrate:
there is more information content in analyst revisions that break away from consensus (as
discussed in the Analyst Sentiment chapter).
Data vendors also differ in terms of the number of financial data items in their historical
data sets. Typically, they have a very large number of data fields, but they’re not always
populated. It is important, therefore, to look beyond the number of advertised fields and
analyze the amount of data that is included in each vendor’s offering.
Survivorship Bias
Survivorship bias can corrupt backtest results – particularly for momentum strategies – and
hence it is extremely important that datasets include de-listed companies. This can be
problematic when data vendors backfill historical data based on a current universe of
companies. Vendors who have been collecting data for a long period of time are unlikely to
have a survivorship bias in their data sets.
Point-in-Time Data
Point-in-time data includes the date the data was available. This is particularly relevant for
financial data given the considerable lag between a company’s period-end and when it
reports its results. Occasionally, companies also restate their financial results which makes
point-in-time data essential to avoid any lookahead bias. Only some vendors offer this data
and hence it is an important criterion for evaluating historical financial data sets. It is also
relatively expensive.

Timeliness
This issue relates to calculating the latest factor scores, as opposed to calculating historical
factor scores for backtesting.
Data lags can be problematic for financial data and forecast data. Live market data,
including bid and offer data, is readily available, so long as you’re willing to pay the relevant
exchange fees.
With regards to reported financial data, data vendors have large teams who are responsible
for entering data when it is released by companies. They start with the most frequently
referenced data fields (eg revenue, Net Profit, EPS) and the largest companies. More
obscure data fields and data fields for smaller companies tend to be less timely. Vendors
vary in terms of the speed with which they update their data sets and hence it is a point of
differentiation.
Analyst forecast data is submitted by brokers to the relevant data vendors. The brokers
send this information to all data vendors at the same time. The timeliness of this data,
therefore, depends on how quickly the data vendors perform their data checks, incorporate
this data into their data sets, and make it available to their users. The time lag associated
with doing this is usually very short.
Data Accuracy
As with data timeliness, this issue relates to calculating the latest factor scores. When
errors occur, vendors will typically be alerted and change their historical data to address any
inaccuracies. The large data vendors tend to have robust data checks and this issue is less
problematic than in the past.
Data Coverage
This issue relates to company financial data. Some data vendors cover all listed equities in
all markets, while others exclude small capitalization companies and some small emerging
and frontier markets. The large data vendors tend to cover all investable stocks across all
equity markets.

Data Challenges
Matching Companies
Matching companies across data sets sourced from different vendors can be surprisingly
complicated. Each vendor uses its own stock identification code. This typically comprises
the local exchange code with a country suffix, but there are exceptions to this rule. In
Singapore, where the local exchange code tends to be extremely cryptic – for example, the
local exchange code for Singapore Telecommunications is Z74 – Bloomberg uses its own,
more intuitive, code.
Complicating matters further, some vendors (eg I/B/E/S) use a proprietary stock identifier
which is totally unrelated to the local exchange code and bears no resemblance to the
company name.

Data vendors also use different country suffixes. Again using Singapore as an example,
Thomson Reuters uses “SI” as the country suffix while Bloomberg uses “SP”.
Another problem is local exchange codes – and hence the stock identification codes used by
data vendors – can change when a company changes its name. This doesn’t occur when the
local exchange identifier is numeric but it does happen when it is an alphabetic code that
represents the company name. As a result, it is important to have a mapping system
between the new code and the old code.
Fortunately, there are generic codes that are consistent and unique. The most common is
the International Securities Identifying Number (ISIN). Most data vendors provide these codes along with their own stock identifiers. Stock Exchange Daily Official List codes (SEDOL) are also widely used. However, these codes can be re-used and occasionally change through time. Another issue with SEDOLs is some data vendors use a 6-digit code while others use a 7-digit code where the last digit is a check digit.

Consistent Share Dilutions


Following corporate actions such as stock splits and rights issues, all per share data (eg EPS)
must be diluted to facilitate meaningful time series comparisons. Unfortunately, there is
often a lag between when the dilution factor becomes effective (eg after a rights issue ex-
date) and when the per share data is updated – and this lag varies across data vendors. This
can be problematic when per share data sourced from different vendors is included in a
factor calculation. For example, using the diluted share price from one vendor and the
undiluted EPS from another vendor will result in a meaningless factor score.

Alternative Data
A lot has been written about alternative data and its usefulness for predicting stock returns.
Alternative data encompasses any non-standard data sets. The line between standard and
non-standard data is becoming blurred as more vendors offer alternative data and more
fund managers incorporate this data into their investment process. A few decades ago,
using analyst forecast data and factors would have been considered non-standard but is
now commonplace. For now, I will classify standard data as market data, company
financials data, analyst forecast data and event data such as insider trades and buybacks;
any other data which is difficult to source is classified as alternative data.
Most alternative data are sourced from the Internet (primarily via web scraping), but this
class of data also includes satellite imagery, geolocation data, shipping data and other
industry specific data.
There are numerous problems associated with using this data:

 Backtesting - Usually there isn’t a rich source of historical data which can be used to
perform robust in-sample and out-of-sample backtesting.
 Commoditization - There has been a proliferation of vendors offering this data and
much of it is becoming commoditized. This can lead to any excess returns being
arbitraged away as investment strategies using this data become more crowded.

This problem afflicts all quant factors and strategies but can be even more
problematic for alternative data strategies which have a very narrow focus (eg a
specific industry).
 Interpreting the Data – Alternative data can be misleading if it isn’t subjected to
some manual oversight. Consider for example satellite images of shopping center
parking lots at the start of a lockdown when only essential shops can open.
 Data Integration – A lot of alternative data is unstructured and hence difficult to
integrate into existing databases. The flip side of this argument is that it introduces a
"barrier to entry" which potentially makes the data more useful for well-resourced
teams that can transform the data into a usable format.
 Data Quality – Large and well-established data vendors have robust checks to ensure
data quality. Often these data quality checks have evolved over time in response to
problems that have arisen. Potentially, new vendors which offer alternative
data have not yet developed the requisite systems to ensure optimal data integrity.
 Limited Scope – Some alternative data sources are only useful for a small subset of
stocks. Given the importance of breadth in any quant investment process, this is a
major limitation.
 Web Scraping Rules – Frequently, websites prohibit web scraping in their Terms of
Service. Scraping data can have a detrimental impact on server speed and there are
also potential copyright issues.
There is a school of thought that the usefulness of alternative data has been over-hyped.
It’s not hard to reach this conclusion after reading vendors’ marketing material. Whether or
not the same conclusion applies to large quant fund managers who use factors based on this
data is an open question. It is quite possible that these funds have overstated their
importance to improve their marketing pitch. A marketing presentation with satellite
images and geolocation data is far more interesting than a presentation with valuation
models based on analyst data.

General Appendices
Smart Beta
Smart beta is an investment style that selects and weights stocks using a set of rules that
are different to the market capitalization approach used by most index providers. For
example, a smart beta fund might weight stocks based on their dividend yield or select
stocks based on a dividend yield hurdle and then apply an equal weighting methodology.
It sounds good in theory. Rather than weighting stocks based on their size, why not focus
on factors that have more intuitive appeal and have historically been positively correlated
with stock returns?
The problem is in the implementation. Often the stock selection, portfolio construction
and rebalancing methodologies used by smart beta funds are overly simplistic.

Stock Selection
Broadly speaking, it’s hard for investors to beat the market. There are lots of investors
competing against each other trying to generate excess returns. On an after-cost and after-
fee basis, most investors are unsuccessful in this endeavor. Given this, a basic stock
selection strategy based around a simple factor or ratio, such as those used by many smart
beta funds, is unlikely to work.
A more robust stock selection methodology that uses multiple factors which have been
combined in a sophisticated way, as outlined in this book, is more likely to outperform.
Nevertheless, some investors – for portfolio diversification or other reasons - might
prioritize getting exposure to a particular factor or investment style over generating returns
via a robust multi-factor stock selection process. Even if a simple stock selection
methodology will suffice, other parts of the investment process are still important.

Portfolio Construction & Rebalancing


A simple approach to portfolio construction and rebalancing can easily undermine the
investment strategy and produce a suboptimal outcome.
To see why, let’s continue with the dividend yield example. A simple approach might weight
each stock based on its dividend yield. But what if the highest yielding stocks tend to be
small and illiquid? Buying these stocks could incur market impact costs and make it difficult
to adjust position sizes. And what if the highest yielding stocks are banks? The portfolio will
essentially represent a big bet on banks outperforming.
It is important to have appropriate risk constraints in the portfolio construction process to
ensure that the factor bias, rather than unwanted risk bets, drives performance.
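As a rough sketch of the idea (hypothetical numbers and thresholds, not a recommended methodology), weights can be set in proportion to dividend yield and then capped so that no single position dominates:

def capped_yield_weights(yields, max_weight=0.25):
    # Yield-proportional weights with a simple per-stock cap; a real process would
    # also apply sector, liquidity and other risk constraints.
    # Assumes the cap is feasible (max_weight * number of stocks >= 1).
    total = sum(yields.values())
    weights = {s: y / total for s, y in yields.items()}
    for _ in range(len(weights)):
        excess = sum(w - max_weight for w in weights.values() if w > max_weight)
        if excess < 1e-9:
            break
        uncapped_total = sum(w for w in weights.values() if w < max_weight)
        weights = {
            s: max_weight if w >= max_weight else w + excess * w / uncapped_total
            for s, w in weights.items()
        }
    return weights

print(capped_yield_weights({"BankA": 8.0, "BankB": 7.5, "Utility": 5.0, "Telco": 4.0, "Retailer": 2.0}))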

Factor scores also change over time and position sizes need to be adjusted to maintain the
appropriate factor exposures. A high yielding stock can quickly become a low yielding stock
if management decides to reduce the dividend payment.
This brings us to the importance of rebalancing. A simple approach to rebalancing such as
adjusting all position sizes every three months is suboptimal.
A robust rebalancing methodology will try to maximize the desired factor exposures while
minimizing transaction costs. This requires frequent small trades rather than a large set of
rebalancing trades every few months.

Equal Weighting Smart Beta Funds


Some smart beta funds equal-weight a relatively large number of positions after applying
pre-defined selection criteria. Superficially, this sounds like a good solution. It should
address any unwanted risk bets, mitigate stock specific risk and ensure a highly diversified
portfolio. Nevertheless, it is a deeply flawed strategy. It requires selling your winners and
buying your losers. Let’s say a stock falls 90%. You would have to increase your holding in
that stock severalfold to maintain its equal weight. And if it keeps falling, you have to keep
buying – hardly a sound investment strategy.
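A small worked example with hypothetical numbers makes the point:

# 50 equally weighted positions of $100 each; one falls 90% to $10
n = 50
values = [100.0] * (n - 1) + [10.0]

portfolio_value = sum(values)   # 4,910
target = portfolio_value / n    # 98.2 per position to stay equally weighted

buy_amount = target - 10.0      # 88.2 of additional exposure, funded by trimming the 49 winners
print(f"Buy ${buy_amount:.1f} on top of a $10 holding "
      f"({buy_amount / 10.0:.1f}x the remaining position) to restore equal weight")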
As always when it comes to investing, simple strategies like equally weighting positions
sound good until you scratch below the surface and understand all the implications.

What About Passive Index Funds?


You might now be thinking that passive index funds don’t have sophisticated stock
selection, portfolio construction and rebalancing methodologies – and as a result, they must
be bad investments. This isn’t correct for one very simple reason: market cap weighting
stock positions is essentially a “buy and hold” strategy that doesn’t require a sophisticated
investment process.
Position sizes automatically adjust based on market moves. Index constituents do change
but this happens infrequently. And the biggest positions are in the largest and most liquid
names which are typically cheap to trade with no material market impact costs.
Passive funds based on a market cap weighted index are a sound investment choice.
You can generate superior returns with a factor-based investment strategy, but it needs to
be robust – in terms of stock selection, portfolio construction, and portfolio rebalancing.
Simplistic smart beta funds which are deficient in all three areas are poor investments.

Day Trading
Day trading is a type of trading involving intra-day strategies, typically based on share
price and volume data.
It is well known that the vast majority of day traders lose money.

Most day traders lose money


Numerous reasons are put forward for this, including insufficient knowledge, lack of
discipline, poor risk controls, not being able to cope with pressure, selling winners too
quickly and having a gambler's mentality.
While these may be contributing factors, the biggest problems are competition, transaction
costs and data mining.

Competition
Short term trading is highly competitive. There are lots of players in this space and, after
taking transaction costs into account, it is worse than a zero-sum game. Put simply, there
are more losers than winners. There also tend to be a few big winners – meaning there are
a disproportionately large number of losers.
The winners are likely to be traders who use sophisticated strategies, such as high
frequency traders. They have computers and systems that can sift through large volumes of
data in a fraction of a second and automatically execute trades.
Day traders using charting software on personal computers are at the other end of the
sophistication spectrum. Charting or technical analysis doesn’t have a solid theoretical
foundation. The weak form of the Efficient Market Hypothesis states that past price and
volume data can't be used to predict future price moves. It is also very difficult to come up
with an intuitive and sensible reason why patterns on share price charts should exhibit any
predictive power.
Investing based on patterns on share price charts doesn’t work
Even if you think you've identified a pattern which makes sense as a predictor of stock
returns, it is likely that other investors have made the same discovery. Price data is readily
available and hence any profits based on a simple strategy using this data are likely to be
arbitraged away very quickly.

Transaction Costs
To state the obvious, the more frequently you trade the more likely transaction costs will
impinge on performance. This depends somewhat on the instruments you trade – eg shares
versus currencies – but the general principle still holds: the more you trade, the more costs
you incur.
For day traders, the cost of having to cross the bid and offer spread to immediately execute
trades can significantly erode after-cost returns. It is very difficult to model these costs and,
as a result, a strategy that generates profits in backtests may not achieve the same
outcome in real life.

Data Mining
One of the safeguards against data mining is to identify a sound reason why the pricing
anomaly you’re seeking to exploit exists and is likely to persist.
Data mining can be particularly problematic for day trading strategies based on short term
return and volume data. It is easier to identify behavioural biases using a richer data set. If
you identify a day trading strategy that works in backtests, even after including conservative
cost assumptions, the excess returns may only exist in the data sample and may prove to
be illusory in real life.

Be Careful!
The lure of day trading is obvious. Lots of people want to be their own boss and make lots
of money while working from home. The probability of succeeding, however, is very small.

Short Selling
Short selling involves selling shares in a company you don’t own.
How do you do this? First, you need to borrow the shares you want to sell from an existing
shareholder. For this privilege, you need to pay the lender a per annum fee. This varies
depending on how much short selling “inventory” is available. For large and liquid
companies, there is usually a large short sell inventory and the stock borrow fees are low.
Once you have borrowed shares, you can short sell them on the market. To complete the
process, you need to buy the shares back and return the borrowed shares to the lender.
Short selling used to only be available to sophisticated investors. This has changed with the
advent of internet trading platforms and now short selling is more widely available.

Short Selling is Risky


Short selling is dangerous for two related reasons: your losses are potentially unlimited
and the size of your position increases as you accrue losses.
To illustrate: let’s assume you short sell 100 shares in Company A at $10. Immediately after
you execute this trade, your position size is $1,000. If the share price rises to $20, your
position size increases to $2,000 and you have accrued a loss of $1,000. Your paper loss is
now equivalent to your initial outlay and your position size has doubled. And as there is no
upper bound to share prices, your potential losses are unlimited.
Brokers require investors to have sufficient funds in their account to absorb potential losses.
Once excess funds fall below a pre-defined threshold, the broker will close out positions to
maintain the minimum level of collateral. In the example above, if you fund your account
with $1,000 and this is your only trade, you will lose all your money as the broker closes out
the position.
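A minimal sketch of the arithmetic (the collateral level and price path are hypothetical):

# 100 shares sold short at $10 with $1,000 of collateral
shares = 100
entry_price = 10.0
collateral = 1_000.0

for price in [10.0, 12.0, 15.0, 20.0]:
    position_size = shares * price          # grows as the price rises
    loss = shares * (price - entry_price)   # unlimited in theory
    remaining = collateral - loss
    print(f"price ${price:>5.2f}  position ${position_size:>6.0f}  "
          f"loss ${loss:>6.0f}  remaining collateral ${remaining:>6.0f}")
# At $20 the loss equals the initial collateral; in practice the broker
# would have closed the position before this point.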
As timing trades correctly is very difficult, you need to be particularly careful when you
execute short sell trades that you have sufficient collateral to absorb potential losses. It is
obviously frustrating if you execute a short sell trade and the share price falls – but only
after it has risen sufficiently for your position to have been closed out. Unless you're careful,
then – to quote John Maynard Keynes – "markets can stay irrational longer than you can stay
solvent”. GameStop investors found this out the hard way in early 2021.
Even if you have sufficient collateral, it may still be prudent to reduce your position size as
you start incurring losses. From a portfolio management perspective, position sizes should
be kept in check and should not be allowed to increase beyond your risk tolerance level.
Another disadvantage of short selling is that gains are limited to the size of the initial
investment. This best-case scenario occurs if the company’s share price drops to $0 (ie the
company goes bankrupt).

Beware of Short Squeezes
A short squeeze represents forced covering (buying) of a short position due to:

 A stock having a large number of shares shorted relative to the total number of
shares available to trade,
 An event which causes the share price to rise.
The catalyst for a short squeeze is typically a favorable event such as an earnings surprise
which causes the share price to rise. This then forces short sellers to buy the stock to
mitigate their risk exposures. When there are a large number of short sellers (ie the level of
short interest is high), this can result in further share price appreciation and further buying
by short sellers, resulting in a short squeeze.
In rare circumstances, activist investors target a stock where there is a high level of short
interest and push up the share price with the express intention of causing a short squeeze.
The best example of an activist driven short squeeze occurred in January 2021 when a large
number of retail investors targeted GameStop, a stock which at the time had an extremely
high level of short interest. This caused the share price to spike and forced numerous short
sellers to cover their short positions.

GameStop is the best-known short squeeze


Like all liquidity driven events, short squeezes are typically short-lived. The rapid share
price appreciation is often followed by a sharp share price fall, particularly in the case of an
activist driven short squeeze.

Short Selling Can be Appropriate in Some Circumstances


None of this makes short selling sound very attractive. Nevertheless, subject to having the
appropriate risk constraints in place, short selling is appropriate when you:

 Have a high conviction level – this particularly applies to specialist short selling
funds which conduct in-depth research to identify companies which are

misrepresenting their financial data and/or are being incorrectly valued by the
market.
 Need to hedge long stock positions – this applies to market neutral funds that aim
to generate returns that are independent of market moves.

Short Sellers are Frequently (and Unfairly) Maligned


Many investors believe that short sellers engage in share price manipulation by targeting
vulnerable companies and pushing their share prices down, thereby enabling them to profit
from other people’s misery.
Short selling is so unpopular in Korea that in 2021 a group of activist retail investors paid for
a bus to be painted with cartoons of people holding up signs stating “I hate short-selling”
and “short-selling should be abolished”.

Short selling slogans painted on a bus in Korea


Contrary to popular belief, share price manipulation and other nefarious trading activities
are far more prevalent amongst long-only investors. It is much easier and less risky to
artificially inflate rather than deflate share prices. This is why Pump and Dump schemes are
far more common than any nefarious share trading by short sellers.
It is also commonly believed that banning short selling would boost share prices. This
misconception ignores the fact that short sellers hedge their portfolios. Put simply, this
means that for any short position they typically have a long position. They aim to short
overvalued stocks and go long undervalued stocks. In doing this, short sellers play an
important price discovery role.

Short Sellers Have Uncovered Numerous Frauds

Short sellers also have a strong track record of uncovering accounting fraud. Prominent
examples include Enron, Wirecard and Luckin Coffee.

Short sellers have helped uncover numerous frauds, including Enron
Further, short sellers provide a useful source of income for passive long-only funds, which
can lend their stock to short sellers for a fee. This exerts downward pressure on the fees
passive fund managers charge.
Investors who want to ban short selling should be careful what they wish for. An
investment world without short selling would include more volatile share prices, more
accounting fraud and less income for passive share investors.

Put Options may be a Better Choice


For retail investors who want to bet on declining share prices, put options may be a better
choice. Put options provide investors with the right, but not the obligation, to sell a security
for a predefined price over a predefined time-period. One of the key advantages of buying
put options is the maximum loss is the size of the initial outlay. This is far more comforting
than the prospect of unlimited losses.

Listed Closed-End Funds
Listed closed-end funds are managed funds which are listed on a stock exchange and can
be bought and sold like any other security. The number of shares in the fund is fixed and
the share price varies depending on the normal market forces of supply and demand.
Most managed funds have an open-end structure. This means the number of shares (or
units) can vary and they are priced based on their net asset value.
The number of listed closed-end funds varies by country. There are numerous closed-end
funds in the United States. Australia is another country where closed-end funds – referred
to as listed investment companies – are prolific.
The key difference between closed-end and open-end funds is that the share price of the
former can deviate significantly from net asset value. Often shares in listed closed-end
funds trade at a discount to their net asset value, meaning that investors can buy a stake in
a portfolio at less than its true worth. Although this sounds attractive, some closed-end
funds persistently trade at a substantial discount to net asset value. Like any other listed
security, sellers can only sell at a price that buyers are willing to pay.

Evaluating Closed-End Funds


Like open-end funds, closed-end funds should be evaluated based on:

 Fees – High fees can significantly impact fund performance. Investors should be
particularly wary of performance fees which are charged relative to an inappropriate
benchmark (for example, a fund which has a long equity market exposure but does
not measure performance fees relative to an equity market benchmark).
 Fund Manager’s Skill – This is typically measured based on past performance. This
analysis is easier for funds which have a long performance track record that covers
different market cycles. For funds with short-term track records it can be difficult to
distinguish between luck and skill when reviewing performance data. Other issues
like portfolio concentration can impact fund performance. Given this, it is also
advisable to assess the experience, integrity, and capabilities of the portfolio
management team.
 Fund’s Investment Style and Objectives – Funds vary widely in terms of the
securities they invest in, portfolio concentration, and their investment methodology.
Investors should obviously choose a fund that is compatible with their investment
objectives and risk tolerance.
 Size – This is an issue for very large funds and funds which invest in relatively illiquid
securities. Funds that need to transact in large volumes relative to overall market
volumes will incur adverse market impact costs which will hamper their ability to
generate strong returns.

In addition to these criteria, closed-end funds should be evaluated based on:

 Discount/Premium to Net Asset Value – This should be evaluated in absolute terms


and relative to the historical discount/premium. For a host of reasons, some closed-end
funds persistently trade at a large discount – or more rarely a premium – to net
asset value and hence it is important to review the current discount or premium
relative to historical levels.
 Closed-End Fund Liquidity – The importance of liquidity depends on the number of
shares that you are likely to buy and the potential need to exit the position quickly.
It is also important to note that stock liquidity impacts a fund's discount or
premium to net asset value. Small and illiquid funds are more likely to trade at a
relatively large discount to net asset value.
 Dividend Policy – Closed-end funds can retain earnings which can then be used to
pay dividends to shareholders. Some funds place great emphasis on paying a steady
stream of dividends to shareholders while others focus more on capital growth.
 Buyback Policy – If a fund is trading at a discount to its net asset value, buybacks are
potentially an effective mechanism to reduce the discount. These can be on-market
or (more rarely) off-market. Some fund managers are more willing to buy back
shares than others. This can be gauged based on the fund’s buyback history and the
fund manager’s policy on buybacks.

Advantages of Closed-End Funds


Listed closed-end funds are a niche part of the funds management industry and in some
countries they don’t even exist. They do, however, have advantages:

 Stable Assets – a closed-end fund manager does not need to adjust the size of the
portfolio to deal with inflows and outflows. This makes portfolio management
easier. It also doesn’t expose investors to the potential risk posed by substantial
outflows from actively managed funds. When this occurs, the manager may be
forced to sell the most liquid securities, leaving remaining investors with a less liquid
and potentially less desirable portfolio.
 Dividends – closed-end funds have the ability to retain earnings, like a company, and
pay regular and consistent dividends to investors.
 Buying a Portfolio at a Discount – to understand why this is advantageous, consider
the following example. Let’s assume you have $10,000 and you buy a portfolio of
stocks that has a dividend yield of 4%. If you can buy that portfolio for $8,000, the
yield jumps to 5%. This is effectively what happens when you buy a closed-end fund
that trades at a 20% discount to its net asset value. Even if the discount stays at
20%, you still benefit from a higher yield.
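The effect is easy to verify with hypothetical numbers:

nav = 10_000.0          # net asset value of the underlying portfolio
portfolio_yield = 0.04  # 4% dividend yield on NAV
discount = 0.20         # fund trades at a 20% discount to NAV

price_paid = nav * (1 - discount)      # 8,000
income = nav * portfolio_yield         # 400 per year
effective_yield = income / price_paid  # 0.05
print(f"Effective yield: {effective_yield:.1%}")  # 5.0%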
Listed closed-end funds are often attractive investments during severe market downturns.
The discount to net asset value tends to increase when markets sell-off as investors look to
reduce their market exposure. The twin advantages of buying the market when it is already
cheap and getting a further markdown as a result of buying a portfolio of assets at a large
discount often result in a great investment opportunity.

Fund Manager Fees
Fund managers deserve to be compensated for their efforts and their skill level.
Management fees represent compensation for the former and performance fees represent
compensation for the latter.
Of the two types of fees, performance fees are the most controversial and potentially the
most lucrative for fund managers. They have turned many fund managers, particularly
hedge fund managers, into billionaires.

Luck versus Skill


It is reasonable to pay performance fees for skill. There are a large number of investors
trying to outperform their peers and their respective benchmarks. It is not easy to do this,
especially over the medium to long term.
In the short term, it is a totally different proposition. There is a lot of luck in investing. Any
investment process that has reasonable capacity and is not targeting pure arbitrage
opportunities can underperform over the short term. And the corollary is true: a poor
investment process can outperform over the short term.
This raises the question: how long is the short term? It is difficult to provide a precise answer
to this question. Suffice to say it is well over a year. Market dislocations and unusual
liquidity flows that drive share prices away from fundamental values can persist for at least
that long.
The time required to distinguish between luck and skill also depends on how concentrated
the manager’s portfolio is. If a manager takes concentrated bets then a small number of
bets, or perhaps one large bet, can drive significant outperformance. Witness the massive
gains made by several hedge funds which bet against subprime mortgages in 2007 and
2008. It takes a while – potentially a few years - to gather enough performance data to
determine whether these gains can be attributed to luck or skill. It takes less time to make
this determination for a portfolio manager who has a diversified portfolio and is effectively
making a large number of investment calls all the time.
Regardless of investment style, it is not acceptable for fund managers to charge
performance fees for periods as short as 6 months. The longer the performance fee
period, the better, particularly for portfolio managers with concentrated portfolios.
The best arrangement is to have a mechanism to refund performance fees to investors
following a period of poor performance. This addresses the asymmetric pay-off structure
inherent in performance fees which can encourage undue risk taking. Unfortunately,
performance fee givebacks are difficult to structure and this type of arrangement is rare in
the funds management industry.

Performance Fee Hurdle
The first – and most obvious - point to make about performance fee structures is they
should include a High-Water Mark (HWM). This means the fund only earns performance
fees on excess returns since the last performance fee was paid. For a fund to charge
performance fees without a HWM would be unconscionable. Some fund managers like to
highlight the fact they have a HWM. It’s not something to boast about – it should be taken
as a given.
The next point to consider is the performance fee hurdle. Performance fees should only be
paid on returns above a hurdle which is based on an appropriate benchmark.
If the investment manager’s stock universe comprises large cap stocks listed in the United
States, an appropriate benchmark may be the S&P 500 index. If the manager’s stock
universe comprises small cap stocks, an appropriate benchmark may be the Russell 2000
Total Return index. Any market benchmark should include the return generated from re-
investing dividends.
Market neutral funds that aim to generate returns independently of market moves should
have a benchmark which reflects the risk-free cash return. In many countries this is very
close to 0% and it is acceptable to use 0% as the appropriate benchmark. In other words, it
is acceptable to charge a performance fee on all profits generated by market neutral funds,
subject to a HWM.
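A simplified sketch of a performance fee calculation that respects both a HWM and a benchmark hurdle (the fee rate and return series are hypothetical, and real fee terms vary):

def performance_fees(fund_nav_per_share, benchmark_index, fee_rate=0.15):
    # Track the fund's NAV relative to the benchmark; fees are only paid when
    # this ratio exceeds its previous high (a relative high-water mark).
    hwm_ratio = fund_nav_per_share[0] / benchmark_index[0]
    fees = []
    for nav, bench in zip(fund_nav_per_share[1:], benchmark_index[1:]):
        ratio = nav / bench
        excess = max(0.0, ratio - hwm_ratio)
        fee = fee_rate * excess * bench   # NAV per share above the HWM-adjusted benchmark path
        if excess > 0:
            hwm_ratio = ratio             # reset the HWM once a fee is crystallized
        fees.append(fee)
    return fees

# Outperformance in year 1, underperformance in year 2, recovery in year 3:
# no fee is payable again until the relative HWM is exceeded.
print(performance_fees([100, 112, 105, 118], [100, 105, 103, 110]))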
A fund should not charge a performance fee with no performance hurdle when the fund
typically has significant market exposure. Getting market exposure is cheap. Investors can
buy index ETFs which charge very low fees, as low as 0.03%.
Too often fund managers charge performance fees on all gains when a significant
proportion of those gains come from market exposure. In many cases, these managers can
perform poorly but, so long as the market rises, they’re able to generate significant
performance fees.
This is known as paying performance fees for beta rather than alpha. Beta refers to
market exposure and alpha refers to excess returns that reflect the manager’s skill.
Sophisticated investors will only pay performance fees for alpha. You should too.

Financial Markets and Interest Rates
Financial markets have changed a lot over the last 30 years. With the notable exception of
Japan, major equity markets have generally performed strongly. Most asset prices have
risen, including real estate. House prices have surged in most countries, most notably
Australia, Canada, Sweden, United Kingdom and China.

Declining Interest Rates


The biggest change over the last 30 years has been the decline in interest rates and bond
yields. Inflation has also declined over this time-period, but not to the same extent as
interest rates, resulting in a marked decline in real interest rates and yields.
In 1990, US 10-year bond yields were around 8% having already fallen from a peak of almost
16% in the early 1980s. In 2020, the US 10-year bond yield fell well below 1%. The decline in
bond yields in Germany has been even more stark. In 1990, German 10-year bond yields
were above 9%. 30 years later, they were firmly entrenched in negative territory.
Until recently, negative bond yields, particularly for long duration bonds, would have been
considered impossible. Japan was the first country to pioneer ultra-loose monetary policies
that resulted in negative bond yields and other countries have subsequently followed its
lead.

Low Inflation
Low inflation has been cited as the main reason for the decline in interest rates and bond
yields. Central banks have become increasingly obsessed with inflation targeting. It started
with the Reserve Bank of New Zealand in 1990 due to concerns about high inflation. Other
central banks followed New Zealand’s lead and they generally target an inflation level of
around 2% pa.
It is interesting to note that inflation is largely a 20th century phenomenon. There have been
periods of inflation throughout history, typically following wars, but they have been
followed by periods of deflation. The global economy survived for centuries without
inflation but central bankers now believe that a steady rise in prices is a necessary
prerequisite for a healthy economy.
Modern economic theory maintains that changes in consumer prices are inversely
correlated with interest rates. Raising interest rates will suppress economic activity, cause
people to save more, increase the exchange rate and, as a result, dampen price rises. It is
presumed that lowering interest rates will have the opposite effect – and this is true, up to a
point.

Reversal Rate
When interest rates fall below a certain level – known as the reversal rate – lowering
interest rates further can be counterproductive. There are several reasons for this.
1. Pensioners, retirees and people saving for their retirement are required to set
aside more money to fund their retirement. With an aging population in many
developed countries, this is a growing demographic group.
2. Ultra-low rates have a negative impact on the entire financial system. As most
customers, particularly retail customers, will not accept rates below 0%, they
squeeze banks' interest rate margins (the difference between the rate banks charge
borrowers and the rate they pay depositors). Pension funds and insurance companies
are also adversely affected as they find it more difficult to generate the returns
required to fund future liabilities.
3. Ultra-low rates perpetuate the existence of so called “zombie companies”. These
are companies that would not survive without extremely cheap bank debt.
Potentially they crowd out more efficient companies and dampen economic growth.

Ultra-low interest rates perpetuate zombie companies

4. Cutting interest rates to all-time lows has an adverse signaling effect. It indicates
that central banks are worried about the economic outlook and this can dampen
consumer and business sentiment.
The reversal rate level and even the concept of the reversal rate is somewhat controversial.
Central bankers and many commentators continue to maintain that, in the long-term,
interest rates are the primary driver of inflation, no matter how low interest rates go. The
Economist has been particularly vocal in its support for aggressive monetary policy
initiatives such as negative rates.

Asset Price Bubbles
What is certain is the effect interest rates have had on asset prices. In the search for higher
yields, investors have bid up asset prices to extreme levels. This has increased the gap
between the rich and the poor to the highest level in decades.
This is problematic because when asset price bubbles burst, the economic consequences are
severe. Several major recessions have directly resulted from the bursting of asset price
bubbles.
When the history books are written, ultra-low rates and negative rates will probably be
viewed as a failed experiment. There will be people who maintain they were necessary
and, despite all the negative side effects, they had a net positive impact – but they will
probably be in the minority.

Market Impact Costs and Fund Capacity
Market Impact Costs
Potentially the biggest driver of transaction costs is the least understood: market impact
costs.
The act of buying or selling a security moves the price. If the trade is small relative to the
number of shares that are traded on the market, the price impact will be immaterial.
However, as the number of shares transacted increases relative to the number of shares
traded on the market, the price impact becomes more significant.
Let’s consider an example. You want to buy 1,000 shares in a particular company today. If
the company’s shares are very liquid and typically trade more than 1,000,000 shares a day,
you can buy the shares straightaway without having any material impact on the share price.
If the company typically trades around 100,000 shares a day, you can also buy without
having any material impact on the share price, but you may have to spread the trade over a
few hours, rather than executing in one lot. If the company typically trades around 10,000
shares a day, it starts becoming more problematic. You should divide the trade into smaller
lots but even if you trade gradually over the day, you will exert upward pressure on the
share price.
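A back-of-the-envelope sketch of the same logic (the thresholds are hypothetical rules of thumb, not a market impact model):

def impact_flag(order_shares: int, avg_daily_volume: int) -> str:
    rate = order_shares / avg_daily_volume
    if rate >= 0.05:
        return f"{rate:.1%} of daily volume - material impact likely, consider trading over several days"
    if rate >= 0.005:
        return f"{rate:.1%} of daily volume - spread the trade over the day"
    return f"{rate:.1%} of daily volume - trade immediately, impact immaterial"

for adv in [1_000_000, 100_000, 10_000]:
    print(f"ADV {adv:>9,}: {impact_flag(1_000, adv)}")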

Implementation Shortfall
Whenever there is a lag between when a decision is made to buy or sell and when the trade
is executed, there is a risk of implementation shortfall. Implementation Shortfall is the
difference between the price when the trading decision is made and the execution price.
If your trade is large relative to daily trading volumes, you face a dilemma: do you trade
immediately and incur market impact costs or do you trade less aggressively and risk
incurring costs as a result of implementation shortfall?
The answer to this question depends on how much and how quickly you think the share
price will move. If you’re worried that the share price will get away from you, you err on the
side of incurring market impact costs. If you’re not worried about short term adverse price
movements, you should trade less aggressively and minimize market impact costs.
Potentially you can trade over several days or weeks to minimize your "market footprint" and
execute the trade in an optimal way.

Modelling Market Impact Costs


Modelling market impact costs is not straightforward. It is notoriously difficult to model
these costs, even with very detailed backtest data, as it is hard to predict how other
investors will react to trading flows. Potentially this has become even more problematic in
recent years with high frequency traders using sophisticated algorithms to effectively front-
run trades.
Market impact cost models are complex. Model inputs include volume curves and bid-ask
spread data. Various approaches can be used to perform the modelling, all of which are
beyond the scope of this book.

Fund Capacity
Given the potential costs and the difficulties in measuring the costs, one thing should be
abundantly clear: market impact and implementation shortfall costs are best avoided. This
is where fund capacity comes into play.
Fund capacity is the maximum amount of money that can be invested while still satisfying
the fund’s return objectives. Fund managers tend to downplay the importance of fund
capacity, primarily because they are motivated to manage as much money as possible to
maximize fees. Many managers also do not fully understand the significance of market
impact costs and hence may legitimately believe they can manage more money than their
capacity limit.
So, what determines fund capacity? There are several factors:

 Investable universe – The more liquid the investment universe, the higher the
capacity. For example: a fund which invests in and is benchmarked against the S&P
500 Index will have higher capacity than a fund which invests in and is benchmarked
against the Russell 2000 Index. The number of securities in the investable universe
also impacts capacity. A fund with a global mandate will typically have higher
capacity than a fund which only invests in one country, and a fund which invests
across the entire market will have higher capacity than a fund which specializes in
one sector.
 Fund Turnover – All other things being equal, investment strategies that require high
turnover will incur higher transaction costs, including market impact costs,
compared to low turnover strategies – and hence their capacity will be lower. High
frequency trading funds, for example, will have much lower capacity than funds
which adopt a “buy and hold” investment strategy.
 Market Turnover – Annual market turnover has increased significantly since the mid-
1970s. However, the increase in turnover hasn’t been consistent from year to year.
Most notably, market liquidity declined precipitously in late 2008. At a time when
many investors wanted to sell shares, there was a notable absence of buyers.
Capacity calculations, therefore, should not be based solely on the current level of
market liquidity but what could occur during periods of market stress.
 Tracking Error – Some funds generate returns which closely resemble their
benchmark, while others generate returns which vary considerably from their
benchmark. The difference between fund returns and benchmark returns is referred
to as tracking error. Funds with a high tracking error, such as high conviction funds
which take concentrated bets in a relatively small number of positions, will typically
have lower capacity than low tracking error funds. Funds with zero tracking error –
ie passive index funds – typically have extremely high capacity (and if the index
comprises extremely large and liquid securities, such as the S&P 500 Index, capacity
is almost unlimited).

 Redemption Frequency – Funds which require advance notice of redemption
requests can sell down positions in an orderly way and, as a result, maintain larger
position sizes. Conversely, funds which accept daily redemptions must be able to
reduce position sizes without any notice and hence should be more circumspect with
regard to position sizing and capacity.

Capacity is Not Precise or Binary


It is important to note that fund capacity is not precise. It is based on imprecise modelling
and various turnover assumptions, both for the fund and the overall market.
Nor is fund capacity a binary concept. This would imply that fund managers can invest up to
a pre-defined threshold without any problems and then beyond this threshold the ability to
outperform is severely constrained. This is not the case. Adverse market impact costs tend
to impinge on fund performance well before the stated capacity limit is reached.

Stop-Loss Orders
A stop-loss is an order to unwind a position when the price reaches a specified level. For
example, if you buy a security at $100, you may concurrently place a stop-loss sell order at
$95.

Crude Form of Risk Control


Unwinding a position simply because it has moved against you isn’t very sophisticated or
nuanced. Ideally, investors should seek to understand why a trade is losing money.
If the original investment rationale is still valid, unwinding the trade may be
counterproductive. For example, if you buy shares in a company because you think it is
cheap and the share price declines without any change in the company’s fundamentals, it
may actually be a good opportunity to buy more shares. Cheap companies often get
cheaper before the market recognizes the company’s inherent value and the share price
starts to outperform.
Stop-loss orders can be a great way to lock in losses at an inappropriate time, ensuring
you capture the downside without benefiting from any subsequent gains.

Gap Risk
Unless you have a specific arrangement with your broker, stop-loss trades don’t
necessarily limit the size of your loss. Using our earlier example, if the stop-loss trigger is
set at $95 but the share price gaps down from $97 to $80 overnight or intra-day following a
trading announcement, your loss will be 20% rather than 5%.
Alternatively, you could buy a put option with a strike price of $95, guaranteeing you can
sell at this price. Put options, however, are only available for some companies, are typically
expensive to trade, and only offer protection for a pre-defined time-period.

Can be Appropriate in Some Circumstances


Despite their limitations, stop-loss orders can be appropriate for some trades, namely:

 Large trades relative to the size of your investment capital given these trades can
potentially wipe you out if left unchecked.
 Short sell trades given potential losses are unlimited and the size of the position
increases as you start accruing losses.
 Momentum trades that are executed purely because the price is moving in a certain
direction.
Stop-loss orders can also be beneficial for investors who have a limited risk budget, don’t
trust themselves to close out a trade at a loss, and are not able to closely monitor trades.
However, if any of these criteria apply to you then you probably shouldn’t be trading in the
first place. For most trades and most investors, stop-loss orders are best avoided. Investors
should seek to understand why trades aren’t working and act in a more appropriate way as
the circumstances dictate.

Efficient Market Hypothesis
The Efficient Market Hypothesis (EMH) states that share prices are efficiently priced in
that they reflect all available information. As a result, it is not possible for investors to
consistently generate returns in excess of the overall market on a risk-adjusted basis.
EMH comes in three forms:
1. Weak – Share prices reflect all historical price data.
2. Semi-Strong – Share prices reflect all historical price data and all publicly available
information.
3. Strong – Share prices reflect all historical price data, as well as public and non-public
information.
We can ignore the strong form of the EMH. If you’re privy to information such as an
upcoming earnings surprise or corporate action that isn’t available to the general public, it’s
safe to assume you can outperform the market. The strong form of the EMH has never
been raised as a defense in an insider trading case.
The concept of share prices reflecting all publicly available information sounds more
plausible. If there are a large number of investors sifting through all of a company’s data,
the company’s share price should encapsulate all of that data. Similarly, any change in the
data should be immediately reflected in the company’s share price. There are numerous
examples, however, where this is not the case.

Dual-Listed Company Pricing Anomalies


There are persistent and material pricing anomalies among companies listed on more than
one stock exchange (dual-listed companies). For example, among the numerous dual listed
companies in China and Hong Kong, there are many cases where the Chinese listed security
is worth several times the Hong Kong security. The price ratio for the Chinese and Hong
Kong listed securities also tends to vary considerably over time. In both markets, the
securities are highly liquid and, in most cases, covered by research analysts.
Capital controls are put forward as an explanation for this anomaly. In China, it is difficult for
investors to move money out of the country. However, the introduction of a stock
exchange link between China and Hong Kong in 2014 (known as Stock Connect) has
facilitated trading in securities across both markets. Yet the pricing anomaly has not only
persisted but become even more pronounced. Regardless, why should some dual listed
securities in China and Hong Kong trade close to parity while others trade at a substantial
discount/premium?
Consider also two companies with dual listings in Australia and London: BHP Billiton and Rio
Tinto. Both companies’ securities are highly liquid and followed by a large number of
investors and research analysts. Yet there have been prolonged periods when the London
listed security has traded at more than a 20% discount to the Australian listed security.
Based on EMH, the price of both securities should encapsulate all available data and trade

at their true market value, precluding the large discount/premium that has persisted over
decades.

What About Other Pricing Anomalies?


There are numerous quant factors that are positively correlated with stock returns. To beat
the market, however, these factors need to exhibit predictive power after taking costs into
account. Portfolio simulations that incorporate conservative cost assumptions indicate this
is the case.
The next challenge is determining whether the excess returns still exist on a risk-adjusted
basis. This is where a lot of intellectual bandwidth has been wasted. Academics have spent
too much time arguing whether factors work because they favor more risky stocks.

Why is so Much Emphasis Placed on Risk?


Academics place a lot of emphasis on risk because, according to finance theory, risk drives
stock returns.
Academia divides risk into market risk (systematic risk) and stock-specific risk (non-
systematic risk). Only systematic risk is considered important as non-systematic risk can be
effectively eliminated by holding a diversified portfolio of stocks. Beta is used to measure
systematic risk and it is calculated by regressing stock returns against market returns. The
share price of a stock with a beta of 1 tends to move in line with the overall market, while a
beta greater than 1 indicates the stock tends to move by more than the overall market (and
vice versa). Based on one of the original bedrocks of finance theory - the Capital Asset
Pricing Model (CAPM) - the expected return for a stock is dependent on its stock beta.
If a factor works because it favors high beta companies, academics can argue that the
EMH is valid. However, numerous mispricing anomalies based on quant factors do not
have a beta bias, thereby casting doubt on this assertion.
Further, it can be argued that CAPM is a theoretical construct that is based on a host of
unrealistic assumptions and that, as a result, it has little relevance outside of academia. If,
like many practitioners, you agree with this point of view, the whole debate on efficient
markets and risk seems rather pointless.

Micro versus Macro Efficiency


Warren Buffett is the most famous investor. He is known not only for his stellar performance
track record but also his down-to-earth, some call it folksy, approach to communicating. He
is able to express his views on investing in layman's terms and one of his legacies will
undoubtedly be his quotes on financial markets.
Perhaps Buffett's best-known quote is: "Be fearful when others are greedy, and greedy when
others are fearful.” This implies that the share market tends to overshoot on both the
upside and the downside.

The concept of the market being micro-efficient and macro-inefficient, first proposed by
the famous economist Paul Samuelson in 1998, has started to gain more traction in the
academic community. Even academics admit that the overall share market isn’t always
efficiently priced and that market bubbles have occurred through history. It’s hard not to
concede, for example, that the technology boom in the late 1990s did not involve some
irrational exuberance.
This is a nice way to conclude this Appendix. Everyone – or at least nearly everyone –
can agree that the overall market is not always efficiently priced.

Value vs Growth Investing
Value factors have a lot of intuitive appeal. If two companies have the same share price but
one company generates stronger profits and pays higher dividends, prima facie this
company looks more attractive as an investment opportunity.

Value Performance
Long-term empirical analysis – going back well over 100 years - supports the efficacy of
value investing. Although this analysis indicates value factors are prone to periods of poor
performance, the long-term performance track record of value factors is strong.
For many years, the outperformance of value factors was put forward as evidence of
inefficient markets, although academics argued over the extent to which the
outperformance was driven by a risk bias.
Putting the risk issue aside, there is potentially a good reason why value companies
outperform growth companies. It is hypothesized that investors ascribe too much growth
potential to expensive companies and, when these companies fail to live up to
expectations, they underperform. Conversely, there is less scope for cheap companies –
which have lower growth forecasts – to disappoint investors.
The technology boom in the late 1990s challenged this hypothesis. It seemed like the new
economy had given rise to companies with structural advantages that were unlikely to
disappoint investors. The bursting of the technology bubble in 2000 proved this was not the
case with many of these companies delivering deeply disappointing outcomes.
We then witnessed several years of strong value outperformance from 2001 to 2007.
More recently, however, growth has dominated value and value factors have experienced
their biggest ever drawdown. This has empowered critics of value investing. The same
arguments made during the late 1990s are resurfacing, but with even more gusto.

Good Company, Bad Stock?


Value investors don’t necessarily disagree that many growth companies enjoy significant
structural advantages, although they may not embrace with the same fervor the growth
potential of companies that leverage off new technologies. However, value investors
believe in the concept of a “good company, bad stock”. In other words, they believe that
even a good company can reach a valuation level where it is a bad investment.

What Valuation Level?


The issue then becomes what valuation level is appropriate. The most robust valuation
methodologies forecast cashflows into the future and then discount those cashflows back to
today’s dollars using an appropriate discount rate.
Forecasting cashflows into the future involves a lot of conjecture. Even deep value
investors, however, will concede that many high growth companies, particularly technology

companies, can benefit from significant barriers to entry and economies-of-scale – and
generate supernormal profits.

Low Interest Rates = Infinite Valuations?


This is where the analysis is distorted by low interest rates. If you assume a company can
grow its earnings and cashflows indefinitely at a rate higher than the discount rate, you
end up with an infinite valuation. The discount rate is largely driven by interest rates and
bond yields and in a world where even 30-year German bonds have negative interest rates, it is
not hard to envisage this scenario. This plays into the hands of growth investors who argue
that for good companies, it doesn’t matter what valuation measure you use because the
true valuation – based on a discounted cashflow methodology – is infinite.
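The effect can be seen in a simple perpetuity-growth (Gordon growth) calculation, sketched here with hypothetical numbers:

def perpetuity_value(cash_flow, discount_rate, growth_rate):
    if growth_rate >= discount_rate:
        return float("inf")   # the formula breaks down: an "infinite" valuation
    return cash_flow * (1 + growth_rate) / (discount_rate - growth_rate)

for r in [0.08, 0.05, 0.035, 0.03]:
    print(f"discount rate {r:.1%}: value of $1 growing at 3% = {perpetuity_value(1.0, r, 0.03):,.1f}")
# As falling bond yields drag the discount rate down towards the assumed growth rate,
# the valuation explodes; at or below the growth rate it is unbounded.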

What Does the Future Hold?


In financial markets, history has a habit of repeating. It is always dangerous to say that this
time it’s different.
The rationale underpinning the outperformance of value companies is still sound. Indeed,
at the moment, given the incredibly strong earnings growth ascribed to many companies,
the risk of growth companies disappointing the market is probably higher than usual.
No matter how you slice and dice the data, the valuation spread between cheap and
expensive stocks is also at an all-time high. In a relative sense, cheap stocks are very cheap
and expensive stocks are very expensive. An elevated valuation spread is warranted given
the low interest rate environment but the current valuation difference between cheap and
expensive stocks is higher than the fundamentals justify.
Value companies will outperform growth companies again. It’s a question of when. And if
the interest rate environment normalizes, the outperformance could be extremely strong,
much like we witnessed during the early years of this century.

Risk, Beta and Volatility
If you ask an academic and a fund manager what constitutes risk, you will likely get two
different answers. Academics focus on what they refer to as systematic risk. This is just a
fancy label for overall market risk that cannot be diversified away by holding a large number
of stocks. This is calculated by regressing historical stock returns against market returns and
is referred to as the company’s market beta (or just beta). A stock with a beta greater than
one tends to move by more than the overall market and vice versa.
Investors focus more on return volatility. The more volatile the share price, the riskier it is.
Both beta and volatility are correlated. However, they are different measures of risk that
have their own pros and cons.
Neither is perfect, which is why it makes sense to look at both – as well as other factors
which can provide additional insights into a company’s risk profile.

Beta
Calculating a company’s beta involves the following steps:
1. Source market total return data
2. Source company total return data
3. Determine the slope of the line of best fit using a least squares regression.
The final step is straightforward.
The first two steps are not complex but can be performed in different ways.
First, what market proxy should you use for the market return data? Ideally it should be
an index which comprises a large number of constituent companies and is calculated on a
total return basis (ie it includes dividends). In the United States, for example, a good market
proxy is the S&P 500 Index. A concentrated price weighted index such as the Dow Jones
Industrial Average is not a good market proxy.
Second, how long should the lookback period be for calculating returns? A long data
history provides more data points to perform the least squares regression and will mitigate
the impact of extreme recent return data which may not be representative of the
company’s actual risk profile. However, if the company’s risk profile has changed for
structural reasons, a long data history may be problematic.
Third, what data frequency should be used to calculate the return data? Daily return data
is more granular than weekly return data and provides more data points, but it can result in
outliers which distort the least squares regression. Consider, for example, a company which
has a positive earnings announcement on a day when the market declines significantly. As
any residual from the line of best fit is squared, outlier data points have an outsized impact
on the slope of the line of best fit. In this case, the slope will be much lower than it
should be (ie the company’s beta will be artificially low). With weekly return data, these

outlier data points will be less extreme and hence will not have the same distortionary
impact.
My preference is to use market indices which I believe are suitable market proxies and
calculate betas over two different time periods: 6 months and 2 years. For the 6-month
beta, I prefer using daily returns so that there are sufficient data points to perform the
regression analysis. For the 2-year beta, I prefer using weekly returns to mitigate the
impact of outliers.
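A minimal sketch of both calculations, assuming pandas and daily total return Series for the stock and a suitable market proxy (both indexed by date):

import pandas as pd

def beta(stock_returns: pd.Series, market_returns: pd.Series) -> float:
    aligned = pd.concat([stock_returns, market_returns], axis=1, join="inner").dropna()
    s, m = aligned.iloc[:, 0], aligned.iloc[:, 1]
    return s.cov(m) / m.var()   # slope of the least squares regression line

def six_month_beta(stock: pd.Series, market: pd.Series) -> float:
    return beta(stock.tail(126), market.tail(126))   # roughly 126 trading days

def two_year_beta(stock: pd.Series, market: pd.Series) -> float:
    weekly = lambda r: (1 + r).resample("W").prod() - 1   # compound daily returns into weekly returns
    return beta(weekly(stock.tail(504)), weekly(market.tail(504)))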
If there is a big difference between the two betas, a decision needs to be made as to which
beta provides the best risk measure. The 2-year beta is usually less noisy but the 6-month
beta might be superior if structural forces are driving the change. This determination is
better suited to a discretionary investment process than a systematic quant process.
No matter what calculation methodology is used, beta can be a misleading company risk
indicator for companies with volatile returns which are largely uncorrelated with the
market. Consider, for example, a gold mining company. The company’s returns will be
highly correlated with the gold price and changes in the gold price are typically uncorrelated
with changes in the market. Gold mining companies, therefore, typically have low – and
sometimes negative – market betas, including small capitalization gold companies which
most investors would consider to be extremely risky.

Stock Return Volatility


Stock return volatility is typically calculated based on the standard deviation of daily total
returns. As per the beta calculation, the data frequency and lookback period must be
selected. All other things being equal, more data points are preferable to less and hence the
choice of data frequency has a large bearing on the lookback period.

Granular stock return data, such as daily returns, provide the best measure of volatility.
Weekly returns can mask severe daily return swings. The biggest price moves tend to occur
from day-to-day, rather than intraday, and hence it is not necessary to use a return
frequency of less than one day.
Although the standard deviation methodology squares deviations from the mean, the stock
returns are not directly compared to market returns (unlike beta) – and hence outliers are
less problematic. This also favors the use of daily total returns over weekly total returns.
Assuming the volatility calculation is based on daily total returns, volatility can be measured
over periods as short as one month. Longer return periods, however, are required to
mitigate or dilute the impact of stock specific events. Very long time periods such as 5
years, however, are also problematic as company risk profiles do change over time and
we’re more interested in the current risk profile.
I prefer to use daily returns over the last 12 months when calculating stock return
volatility. It is standard practice to generate an annualized return volatility which is
adjusted based on the data frequency. This is done by multiplying the return volatility by
the square root of the number of observations over a 12-month period. In the case of daily
return volatility, this necessitates multiplying the standard deviation by the square root of
250 (as there are approximately 250 trading days in one year).
Some investors prefer to focus on downside volatility. This is also measured based on the
standard deviation of returns but only data points which are below a pre-defined threshold
are included in the calculation. The rationale is investors aren’t concerned about large
positive returns – obviously they welcome them – but are very concerned about large
negative returns. When using daily data, however, there typically isn't a big
difference between "normal" volatility and downside volatility.
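The sketch below shows a minimal Python implementation of both measures, assuming you have a series of daily total returns indexed by date (the function name and the use of a return threshold as the downside cut-off are illustrative assumptions):

import numpy as np
import pandas as pd
from typing import Optional

def annualized_volatility(daily_returns: pd.Series,
                          downside_threshold: Optional[float] = None) -> float:
    returns = daily_returns.dropna()
    # For downside volatility, keep only the returns below the chosen threshold (eg 0.0)
    if downside_threshold is not None:
        returns = returns[returns < downside_threshold]
    # Annualize the daily standard deviation using ~250 trading days per year
    return float(returns.std(ddof=1) * np.sqrt(250))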

Other Risk Factors


Another commonly used measure of risk is the high to low drawdown. As the name
implies, this represents the difference between the high and low price over the
measurement period. This is a simple and intuitive measure of risk that is easy to relate to.
The main disadvantage of high to low drawdown is it is significantly correlated with absolute
return.
Additional quant factors can also be used to assess downside protection. Examples include:

 Certainty Factors – As previously discussed, these factors measure the historical


stability and the forecast variability of financial data. Companies with historically
stable earnings and revenues which have a tight dispersion of analyst forecasts are
likely to be relatively low risk (and vice versa).
 Cash to Market Capitalization –A large net cash holding relative to market
capitalization can provide significant downside protection. In some countries, most
notably Japan, it is not uncommon for companies to have net cash holdings which
represent more than 50% of the company’s market capitalization.
 Other Debt Safety Factors – Other debt safety factors which measure the amount of
debt a company has on its balance sheet and the company’s ability to service its debt
can also be used to assess risk.
 Industry – Ideally a thorough analysis of a company’s business model would be
undertaken. Quants aren’t able to do this but they can review industry exposure
down to a granular level using the GICS hierarchy (GICS is a well-known sector and
industry classification taxonomy). A company that is in a structurally challenged
industry such as Broadcasting may be more risky than the historical data suggests.
Further, companies in industry groups such as Casinos & Gaming often face
regulatory risks that may not be captured by the data.

Get Rich Quick!
If you were immediately drawn to this Appendix then be warned: there is no guaranteed
way to get rich quickly with any investment strategy. This should be obvious to everyone
but it is surprising how many people are enticed by investment schemes that promise great
riches. If they did work, the promoters would not be publicizing them. They would be doing
the exact opposite – closely guarding their investment secrets so they alone can benefit.
This brings us to one quant fund which does seem to have found the secret to investment
success: Renaissance Technologies’ Medallion Fund. The story behind this fund and the
various masterminds who devised the investment process is documented in Gregory
Zuckerman’s book, The Man Who Solved the Market: How Jim Simons Launched the Quant
Revolution.
The book is an excellent read but it does not provide any specific insights into the inner
workings of the Medallion Fund. The fund’s investment methodology is a closely guarded
secret. What is known, however, is the fund’s capacity is limited. This is the reason why the
fund only manages money on behalf of employees and associates of Renaissance
Technologies.
Presumably, the fund utilizes high frequency trading algorithms that analyze fund flows and
can generate a very large number of trades which, on average, generate excess returns.
It does appear that the architects of the Medallion Fund have found the secret to getting
rich quickly, but we can regard it as the exception that proves the rule. The fund
demonstrates that it's not possible to get rich quickly with an investment strategy that is
in the public domain. Get-rich-quick schemes are always fraudulent.
It should also be noted that Renaissance Technologies manages quant factor funds which
have much higher capacity and are open to external investors. These funds have not
performed nearly as strongly as the Medallion fund.
Although there is no secret to investing and no individual investment technique or style that
you can adopt that will always work, there are things you can do to maximize your
probability of success. That is the best you can hope for.
It is also important to remember that in the short term, investment success is more about
luck than anything else. Often the worst outcomes result when investors get lucky,
think they’re more knowledgeable and skillful than everyone else (ie hubris sets in), and
then keep doubling down on losing investments. Without doubt, hubris is one of the
reasons why many investors fail. As an investor, it is important to stay humble and admit
that the market is a formidable opponent that can’t always be beaten. And if you do enjoy
short-term success, remember that it is not due solely to your incredible investing acumen.
There are lots of books and internet sites that describe the habits and characteristics of
successful investors. There are also lots of things that are important – for example,
discipline and persistence - but there is one thing that is probably more important than
anything else: knowledge.

The Great Quant Unwind
Quant factor investing has had its ups and downs. Overall, it has been a robust investment
style, but it has had periods of severe underperformance. Most recently, since the start of
2018 many quant factor funds, particularly those with a value bias, have performed poorly.
What occurred in August of 2007, however, has no parallel. The damage was done over 3
days between August 7 and August 9 – and it was severe. Quant factor funds suffered
extreme losses over a few days that would normally only occur during a worst-case scenario
over a few years. Quant factor performance then rebounded on August 10 and drifted
upwards over the remainder of the month.
David Viniar, Chief Financial Officer of Goldman Sachs, stated that: “we were seeing things
that were 25-standard deviation moves, several days in a row”. While this statement was
an exaggeration – 25 standard deviation moves are essentially impossible – there is no
doubt that it was unprecedented and no-one could have reasonably predicted such an
event.
Usually the catalyst for extreme events is a sudden and severe change in market sentiment
that drives an extreme reversal in equity prices. This did not occur during the days in
question. Indeed, equity markets rose slightly during the days when most of the carnage
occurred.

Why Did it Happen?


The Great Quant Unwind was a liquidity driven event. The initial catalyst was probably one
large market neutral quant fund or trading book managed by a proprietary trading desk
unwinding its positions. The exact details are unclear as no-one was able to pinpoint where
the original trade unwinds came from.
This unwind created a cascade effect that ultimately spread more broadly to all quant
factor funds and portfolios. Many quant fund managers either:

 were forced to unwind positions due to margin calls,


 unwound positions as a result of systematic risk controls such as stop-loss triggers,
or
 overrode their investment process by reducing position sizes to mitigate losses and
to allow them to re-assess the situation.
The forced unwind of positions was most likely the main driver of the cascade effect. Quant
factor investing enjoyed strong returns over the 7-year period leading up to August 2007, but the
outperformance was starting to wane. The decline in returns, combined with generally low
factor return and market volatility, encouraged many quant funds to employ aggressive
leverage to achieve their return targets. It is not by accident that the unwind took place at
the end of the so-called Great Moderation (a period of low and declining volatility from the
mid-1980s to 2007). This made the funds more vulnerable to forced unwinds as they had
limited collateral to support the high leverage.

None of this would have happened if there wasn’t a huge overlap in the holdings of quant
factor funds. Although quant funds claim to have their own unique (and superior)
investment processes, the reality is the factors feeding into their stock selection models
tend to choose the same stocks and this can lead to crowded long and short positions.

What are the Lessons?


There were numerous lessons to be learnt from the Great Quant Unwind:
1. High leverage is dangerous. Funds that did not employ high levels of leverage were
able to ride out the storm without being forced to unwind positions and lock in
losses.
2. Stop-loss triggers are a naïve form of risk control that often produce an
unfavorable outcome. Investors should understand why they are losing money and,
as a general rule, they should not unwind positions if the adverse price movement is
based on unusual liquidity flows rather than a change in fundamentals.
3. Historically calibrated risk models can be flawed. Although what occurred in
August 2007 was not a 25 standard deviation event, it was unprecedented and fell
well outside anything captured in the historical data used to calibrate risk models.
Value at Risk (VaR) models failed dismally in the face of such an event.
4. Be wary of crowded trades. The unwind would not have been so dramatic if quant
factor funds were not invested in the same stocks.
5. Don’t panic following liquidity driven price moves. Based on anecdotal evidence,
many quant managers panicked after incurring sharp losses that wiped out years of
gains. As a result, they reduced position sizes out of fear. It was clear that the
underperformance was liquidity driven – and liquidity driven price distortions do not
persist indefinitely. They are usually short-lived and if you’re not forced to unwind
positions due to margin calls, the best course of action is to ride out the storm and
benefit from the inevitable reversal in share prices.

Could It Happen Again?


In financial markets, history has a habit of repeating. Investors tend to have short
memories. The mindset that this time it’s different also facilitates the reemergence of
market distortions that often precipitate a crisis.
Nevertheless, the Great Quant Unwind was a unique event. Liquidity driven market
dislocations are relatively common, but not of this magnitude.
A similar event could probably only happen again if the following criteria are met:
1. Quant factor funds perform strongly for a sustained period, attracting substantial
inflows
2. Recent factor performance starts to deteriorate gradually, resulting in funds seeking
greater leverage to maintain strong headline performance
3. The market environment is supportive of leverage based on low levels of market
volatility and increased investment bank tolerance for risk.

Not only is the first criterion not being met, the exact opposite has been occurring. Quant
factor funds have generally been performing poorly and this has resulted in substantial
outflows.
We are currently a long way away from the market environment which preceded the
Great Quant Unwind. Quant fund managers have plenty of things to worry about – but this
is not one of them.

Investment Dictionary
Active Funds Management
An investment style where the fund manager attempts to beat the market benchmark. This
style of investing can be contrasted to passive management where the manager attempts to
track the return of the fund’s benchmark.
Notes
 Most active fund managers fail to beat their benchmark over the medium term. This
observation is not based on any academic construct such as the Efficient Market
Hypothesis, but simple mathematics. Beating the benchmark is a zero-sum game.
Put simply, one investor’s gain must be offset by another investor’s loss. Actively
managed funds which charge fees and incur other turnover related expenses must,
on average, underperform. Retail investors and some actively managed funds can
potentially incur disproportionately large losses, resulting in the median active fund
manager beating the benchmark in the short-term. Over time, however, the
majority of active managers underperform on an after-cost and after-fee basis.

Alpha
The excess return over the fund benchmark.
Alternative Definition
The excess return over an appropriate benchmark based on the fund manager’s investment
universe, investment style, and the risks taken to generate the fund’s returns. In other
words, the return that can’t be explained by common risk factors.
Notes
 The problem with the definition used by most fund managers is it assumes they are
measuring their performance against an appropriate benchmark.
 Alpha, properly defined based on the Alternative Definition, is a true measure of the
fund manager’s skill. If a fund manager uses the S&P500 as the benchmark but
employs leverage or solely invests in high beta stocks, the benchmark should be
adjusted for the fund’s elevated risk profile when calculating the manager’s alpha.
 Investors should only pay performance fees for alpha. Too often this is not the case,
the best examples being funds that don’t have a hurdle for calculating performance
fees but have significant market exposure.

Alternative Data
Non-standard data sets such as satellite imagery and geolocation data.
Notes
 Market data, company financials data, analyst forecast data and event data such as
insider trades and buybacks can be considered “standard” quant data. Anything
else, especially difficult to source data, can be considered alternative data.
 Refer to the Using Data Appendix for more information.

Arbitrage Pricing Theory (APT) Model
A linear multi-factor model that includes the expected return as the dependent variable and
multiple undefined factors as the independent variables.
Alternative Definition
The template of a linear multi-factor quant model that could be useful if the factors were
defined. All it really says is the expected return depends on multiple unspecified factors, as
opposed to just one factor (systematic risk) used in CAPM.

Behavioural Bias
An illogical prejudice, often subconscious, that influences investment decisions.
Notes
 A good factor quant stock selection process should specifically target behavioural
biases that result in mispricing opportunities. This is one of the key safeguards
against data mining.
 Behavioural biases result in:
o investors overestimating the probability of high growth companies meeting
growth estimates, driving the outperformance of value factors;
o investors selling winners to lock in gains and holding on to losers to recoup
losses, driving the outperformance of momentum factors;
o analysts herding with their forecast revisions, resulting in trending in forecast
revisions and trending in share prices.
 A robust multi-factor stock selection process can specifically target behavioural
biases that cannot be exploited with a single factor. A good example is combining
long-term price distress, value and short-term earnings revisions and price
momentum factors to identify credible turnaround plays which are shunned by
investors due to reputational risks.

Beta
Refer to Market Beta.

Bid-Ask Spread
The Bid is the highest price at which investors can immediately sell a security. The Ask is the
lowest price at which investors can immediately buy a security. The difference is the bid-ask
spread.
Notes
 The bid-ask spread can significantly impinge upon the after-cost performance of
short-term trading strategies.
 It is difficult to model the impact bid-ask spreads will have on performance as it is
hard to predict how other investors will react to trading flows.

Buyback
When a company purchases its own shares.
Notes
 Rules governing stock buybacks vary by country. In some countries, companies can
announce a buyback but not actually purchase any shares. In other countries, share
purchases are only announced after the trades have been executed.

Calendar Anomaly
A mispricing anomaly that is associated with a day of the week, time of the month or time of
the year.
Notes
 Three of the best known calendar anomalies – tax-loss selling, window dressing and
the January effect - have implications for the performance of value and momentum
factors. They tend to boost the performance of momentum factors towards the end
of the year and increase the performance of value factors at the start of a new year.
 Calendar anomalies are relatively easy for investors to monitor and exploit. Hence
the likelihood of any excess factor returns being arbitraged away is greater than
usual. Recent backtests indicate that investors need to be positioned for the January
effect in late December and that the outperformance of value factors only occurs
over a few days at the start of the year.

Call Option
The right, but not the obligation, to buy a security at a specified price before a pre-
determined date.

Capital Asset Pricing Model (CAPM)


A single factor model and the first formal model developed by academics for predicting
stock returns. Based on the model, the expected return equals the risk-free rate plus the
stock’s beta multiplied by the difference between the expected return of the market and
the risk-free rate.
Alternative Definition
An overly simplistic model that is based on unrealistic assumptions, is not supported by
empirical data, and has little relevance in the real world.

Carry Trade
Borrowing in a relatively low yielding currency and investing in a relatively high yielding
currency to generate a positive return (assuming no currency movements).
Note
 More broadly, “carry” refers to the cost or return associated with a trade, assuming
no change in the investment parameters. For example, borrowing money at an
interest rate below the dividend yield paid by a stock results in a positive “carry”.

Certainty Factors
The historical stability and the forecast variability of financial data.

Closed-End Fund
Managed funds where the number of shares in the fund is fixed and the share price varies
depending on the normal market forces of supply and demand.

Coefficient of Variation
A measure of dispersion of data points around the mean. It is calculated by dividing the
standard deviation by the mean.
Notes
 The coefficient of variation is also referred to as relative standard deviation as it
facilitates comparisons across different data sets. For example, it is possible to
compare the dispersion of analyst revenue forecasts for a company which has a
revenue forecast measured in millions of dollars with a company which has revenue
forecasts measured in billions of dollars.
 The Coefficient of Variation can be misleading when the mean is negative or close to
zero, which is why it is not typically used to measure the dispersion of analyst EPS
forecasts.
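 As a hypothetical illustration: a company with a mean revenue forecast of $500 million and a
standard deviation of $50 million has a coefficient of variation of 0.10 (50 / 500), while a
company with a mean forecast of $5 billion and a standard deviation of $250 million has a
coefficient of variation of 0.05 (250 / 5,000) – the second set of forecasts is relatively more
tightly grouped despite the larger absolute dispersion.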

Covered Call
An investment strategy where the investor sells call options in a security and owns an
equivalent amount of the underlying security.
Notes
 Selling covered call options provides income from selling the call option. It is a valid
investment strategy if you think the security’s price is unlikely to change significantly.
 Future upside is capped at the call option strike price.
 There is no downside protection, other than the income received from selling the
call option.

Correlation Coefficient
A measure of the strength of the relationship between the relative movement of two
variables.
Notes
 When backtesting quant factors, the two variables are the expected return and
factor score.
 The correlation coefficient varies between +1 (perfect positive correlation) and -1
(perfect negative correlation).
 Quants don’t care if the correlation coefficient is positive or negative; they’re only
interested in the absolute value.

Crowded Position
A long or short position in a stock in which lots of investors who share a common
investment thesis hold the same position.

Data Mining
The process of finding patterns and correlations in historical data sets with the intention of
being able to predict future outcomes.
Alternative Definition
A potentially dangerous process that can uncover spurious correlations between factor
scores and subsequent stock returns if a large number of iterations are performed to
identify the best outcome.
Notes
 Unlike in other disciplines, in the world of quant factor investing, data mining is not
viewed favorably. It is also referred to as overfitting and data snooping.
 Precautions against data mining include having a common-sense overlay, performing
in-sample and out-of-sample testing, and avoiding having excessive layers of
complexity.

Day Trading
A type of trading involving intra-day strategies, typically based on share price and volume
data.
Alternative Definition
A potentially dangerous short-term trading strategy, particularly if it is based on analyzing
patterns on share price charts.

Decay Profile
A representation of the extent to which the strength of the factor’s predictive power
changes or degrades over time.
Notes
 Some quant factors exhibit strong short-term predictive power that quickly decays
(eg short-term momentum factors). Other factors perform relatively poorly over the
short-term but their predictive power only dissipates slowly over time (eg value
factors).

Debt Safety Factors


Factors measuring the amount of debt a company has on its balance sheet and the
company’s ability to service its debt.

Diluted Data
Per share data (such as share price and DPS) which is adjusted for corporate actions such as
stock splits and rights issues. Diluting per share data is required so that meaningful time
series comparisons can be made.

Discounted Cashflow Methodology


A valuation methodology which discounts future cashflows back to today’s dollars using an
appropriate discount rate.
Notes
 There are numerous ways to configure a discounted cashflow model when valuing
equities.

Disposition Effect
Investors' tendency to sell shares that have gone up in value to realize a gain and hold on to
shares that have gone down in the hope of recouping the loss.
Alternative Definition
I prefer Peter Lynch’s definition from his book One Up on Wall Street: “watering the weeds
and pulling out the flowers”.
Notes
 The disposition effect is the primary reason put forward for the outperformance of
momentum factors.

Dividend Drop-Off
The share price fall on the dividend ex-date (in excess of the overall market) relative to the
dividend payment.

Dividend Ex-Date
The date investors are no longer entitled to a dividend payment.

Dividend Run-Up Anomaly
The tendency of shares to outperform leading into the dividend ex-date.

Notes
 The outperformance is greatest for companies with relatively high dividend yields.
 The outperformance occurs over a period spanning approximately 30 calendar days.

Discount Rate
The rate used by discounted cashflow methodologies to discount future cashflows back to
today’s dollars.
Notes
 The decline in interest rates and bond yields has greatly reduced the discount rate
used by investors when valuing equities. This has had a disproportionately large
impact on stocks which potentially have very large future cashflows (ie high growth
stocks).

Downside Volatility
The standard deviation of returns below a pre-defined threshold.

Dual-Listed Company
Companies listed on more than one stock exchange.

Earnings Momentum
Another term for Earnings Revisions.

Earnings Revisions
Factors measuring changes in analyst earnings forecasts.
Notes
 Earnings Revisions is one of the best-known quant factors. The rationale
underpinning the predictive power of the factor is strong, as are the backtest results.
 A detailed analysis of Earnings Revisions is provided in this book. This includes the
reasons why analyst forecasts trend and different methodologies for calculating
changes in analysts’ earnings forecasts.

Earnings Surprise
A company earnings result which either exceeds or is below market expectations.
Notes
 The most common way of measuring earnings surprises is to divide the difference
between reported EPS and the consensus forecast EPS by the standard deviation of
analyst estimates. This is known as Standardized Unexpected Earnings.
 An alternative approach is to measure the market’s reaction to the earnings
announcement. The advantage of this approach is it takes into account earnings
quality.

Efficient Market Hypothesis
A theory which states that securities are efficiently priced in that they reflect all available
information. As a result, it is not possible for investors to consistently generate returns in
excess of the overall market on a risk-adjusted basis.
Alternative Definition
A theory which naively claims that a large number of investors will ultimately price securities
at their fair value. There are too many pricing anomalies to give credence to this claim.
Notes
 The focus should be on the fact it is extremely hard to beat the market over the long-
term when you’re competing against a large number of investors who are all seeking
to exploit mispricing opportunities. While the theory doesn’t stand up to scrutiny,
the main conclusion does.
 Many academics now concede that the market may be macro-inefficient, a concept
first proposed by the famous economist Paul Samuelson in 1998, meaning that the
overall market may not always be efficiently priced.

Enterprise Value
A company’s market capitalization plus net financial debt.

Factor Timing
Adjusting the weights assigned to factors in anticipation that some factors will perform
better than others based on the market environment.
Notes
 Factor timing tends to focus on momentum and value factors.
 Timing the performance of value and momentum factors based on the tax-loss
selling, window dressing and January effect calendar anomalies has strong intuitive
appeal and is supported by backtest results.
 Tactical factor timing models should be viewed with extreme caution. The
challenges posed by over-fitting and market structural changes are difficult to
overcome.

Fiscal Year-End
The period for which a company reports its financial results.
Notes
 Fiscal year-ends vary within countries and, even more so, across countries.
 Pro-Rating methodologies can be used to adjust historical and forecast data so that
meaningful comparisons can be made.

Fund Capacity
The maximum amount of money that a fund can invest while still satisfying the fund’s return
objectives.
Notes

 Fund managers frequently overestimate fund capacity, either because they want to
maximize fees or because they do not fully understand the impact that market
impact costs can have on performance.
 Fund capacity is not a binary concept. This would imply that fund managers can
invest up to a pre-defined threshold without any problems and then beyond this
threshold the ability to outperform is severely constrained.
 For a detailed analysis of fund capacity, please refer to the Market Impact Costs and
Fund Capacity Appendix in this book.

Fundamental Law of Active Management


IR = IC * sqrt(Breadth)
IR = Information Ratio and IC = Information Coefficient. This law states that, based on
numerous assumptions, risk-adjusted performance (IR) is a function of the predictive power
of your investment process (IC) and the number of independent “bets” (Breadth).
Notes
 It should be noted that Breadth assumes that portfolio positions are uncorrelated.
This is not the case and correlations can increase during periods of market stress.
Nevertheless, the general concept is still valid.
 For quant factor investors, this is probably the most important formula that drives
the decision-making process. It’s the reason why (good) factor funds do not take
concentrated bets and why they seek to generate returns primarily from their stock
selection process.
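 As a hypothetical illustration of the formula: an IC of 0.05 applied to 400 independent bets
implies an IR of roughly 0.05 * sqrt(400) = 1.0, whereas the same IC applied to only 25
independent bets implies an IR of just 0.05 * sqrt(25) = 0.25. Breadth, not just predictive
power, drives risk-adjusted returns.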

Gap Risk
The risk associated with a security’s price moving from one price to another without trading
in continuous increments between the two price levels.
Notes
 Gap risk can result in stop-loss orders incurring greater losses than the stop price
would imply.

Global Industry Classification Standard (GICS)


A commonly used sector and industry classification taxonomy that comprises four levels:
sector, industry group, industry, and sub-industry. There are 11 sectors and over 150 sub-
industries.

Great Quant Unwind
Severe and unprecedented losses incurred by quant funds and quant portfolios in August
2007 due to the unwinding of crowded stock positions.

Growth Factors
Growth factors measure the extent to which a company’s financial data (eg EPS) is trending
up or down and the nature of the trajectory.

Hedge Fund
A managed fund which tends to use aggressive or sophisticated portfolio construction
techniques to achieve a risk and return outcome which is different from traditional long-
only funds. These funds typically have more flexibility than mutual funds in terms of what
they can invest in and how they invest.

High Frequency Trading


A systematic trading method based on analyzing trading flows that is characterized by
extremely short holding periods and high trading volumes.

Hubris
A character flaw associated with excessive self-confidence.
Notes
 Hubris is very dangerous when making investment decisions. It is often magnified
after a period of success and can lead to an “I’m right and everyone else is wrong”
mentality. This can result in investors doubling down on losing investments.
 The best investors admit that the market is a formidable opponent that can’t always
be beaten.

Implementation Shortfall
Difference between the price when the trading decision is made and the execution price.

Index Fund
A fund that seeks to replicate the performance of a particular index.

Information Ratio
The annualized excess return divided by the annualized standard deviation of excess
returns. It is a commonly referenced risk-adjusted performance measure.

Insider Trades
Trades executed by company “insiders” such as directors and substantial shareholders.

Notes
 The most common definition of insider trades is illegal trades based on material non-
public information.
 It is obviously important to distinguish between the definition used by quants and
the more commonly used definition.

January Effect
Investors' tendency to embrace risk at the start of a new year.
Notes
 This can result from investors reversing trades based on prior tax loss selling and
window dressing.
 At the start of a new reporting period, investors can also buy underperforming
stocks and still have time for any potential turnaround to materialize prior to the end
of the reporting period.

Lookahead Bias
Using data in a backtest that was not available at the run date.
Notes
 This issue is particularly pertinent when using factors that have been calculated using
data from a company’s financial statements given there is a considerable lag
between a company’s period end and when it reports its results.
 A lookahead bias can also occur with financial data when a company restates its
results.

Market Beta
The sensitivity of a security's return to the return of the overall market. Beta is estimated by
performing a least squares regression with the market return as the independent variable
and the security return as the dependent variable. The beta is represented by the slope of
the line of best fit.
Notes
 Beta varies depending on the choice of market proxy (eg S&P 500 Index versus Dow
Jones Industrial Average), the lookback period (eg 6 months versus 2 years) and the
return frequency (eg daily versus weekly).
 There are other measures of risk, such as return volatility, that complement beta as a
risk factor.
 Different stock betas can be calculated such as an Oil Price Beta which measures the
sensitivity of a security’s returns to changes in the oil price.
 For a comprehensive analysis of beta, please refer to the Risk, Beta and Market
Volatility Appendix in this book.

Market Impact Cost
The price impact associated with executing a trade.

Notes
 Market impact costs are greatest for large trades relative to overall market trading
volumes.
 For institutional investors, market impact costs are often the largest component of
transaction costs.
 Modelling market impact costs is notoriously difficult as it is hard to predict how
other investors will react to trading flows.

Market Neutral Fund


A fund which aims to generate returns independent of market moves.

Momentum Factors
Factors measuring whether a company’s share price is going up or down and the nature of
the trajectory.
Notes
 Some quants classify analyst revisions factors as momentum factors (I classify them
as sentiment factors).

Normal Distribution
A symmetrical continuous probability distribution where 68% of the observations lie within
1 standard deviation of the mean, 95% lie within 2 standard deviations of the mean, and
99.7% lie within 3 standard deviations of the mean.

Optimizer
An optimizer is a process or program that, subject to pre-defined constraints, finds the
portfolio out of all possible portfolios that maximizes expected return for a given level of
risk.
Notes
 Optimizers sound great in theory. Unfortunately, the inner workings of optimizers
are predicated on numerous simplifying assumptions which aren’t valid in real life.
This is one of the reasons why it is important to impose constraints on the
optimization process to get realistic solutions.
 Changing optimization constraints can result in vastly different portfolios and it’s
often not possible to determine with any precision why this is the case.
 Another issue with optimizers is risk factors can change and temporary risk factors,
which could not have been foreseen when the risk model was calibrated, can have a
significant impact on stock returns.

Out-of-Sample Testing
Backtesting performed with a different data set than was originally used during the design
phase to reduce the risk of data mining.
Notes
 Out-of-sample testing can be performed within the same market or in a different
market (or markets)
 Out-of-sample backtesting in the same market requires a lengthy data set.
Preferably, both in-sample and out-of-sample testing should be performed over
different market cycles.
 Out-of-sample testing in different markets works well if both the in-sample and out-
of-sample markets are comparable in terms of their size, composition, and maturity.
This tends to work well in Europe, particularly among eurozone countries.

Pairs Trading
Pairs trading is a trading strategy that involves analyzing returns for two highly correlated
stocks and taking a long position in the underperforming stock and a short position in the
outperforming stock when the share price divergence reaches a pre-defined level.
Notes
 Pairs trading is sometimes referred to as statistical arbitrage.
 Only stock pairs whose returns are highly correlated and where there is a reason for
the returns to be highly correlated (eg dual-listed companies, companies in the same
industry) should be included in the investment universe.
 The rationale for pairs trades is based on liquidity flows. Sometimes unusual
liquidity flows, as opposed to a change in fundamentals, result in relative
outperformance and underperformance. Pairs traders hope that this relative share
price divergence will be transitory and that they can profit when the share prices
revert to historical averages.
 Pairs traders should ensure that any share price divergence that triggers a trade is
not driven by an event (such as an earnings surprise) which could result in the
relative share price differential persisting or becoming more extreme.
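 The sketch below illustrates the trigger logic in Python for a pre-qualified pair, assuming two
aligned price series indexed by date (the lookback window, 2-standard-deviation trigger and
function name are illustrative assumptions, not a recommendation):

import numpy as np
import pandas as pd

def pairs_signal(prices_a: pd.Series, prices_b: pd.Series,
                 lookback: int = 60, entry_z: float = 2.0) -> int:
    # Track the log price ratio of the two stocks
    spread = np.log(prices_a / prices_b).dropna()
    window = spread.tail(lookback)
    # How stretched is the current spread relative to its recent history?
    z = (window.iloc[-1] - window.mean()) / window.std(ddof=1)
    if z > entry_z:
        return -1  # short A / long B: A has outperformed the historical relationship
    if z < -entry_z:
        return 1   # long A / short B: A has underperformed the historical relationship
    return 0       # no trade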

Passive Portfolio Management


Passive portfolio management is an investment style that seeks to replicate an index.

Payout Ratio
The percentage of a company’s earnings paid as dividends to shareholders.

Performance Fee
The fee paid by the investor to the fund manager as a result of the manager outperforming
the fund’s benchmark, if any.

Alternative Definition
A fee which should only be paid when the outperformance is measured relative to an
appropriate benchmark over a period of at least 12 months.
Notes
 In the short-term, it is very difficult to distinguish between luck and skill and
investors should never pay performance fees over a period of less than 12 months.
 The asymmetric pay-off structure inherent in performance fees can encourage
undue risk taking.

Portfolio Construction
Portfolio construction is the process of constructing a portfolio of securities in an optimal
way that maximizes return for a given level of risk.
Notes
 Generally speaking, quants don’t spend enough time on portfolio construction. Too
often poor risk controls, insufficient breadth and/or simplistic stock weighting
methodologies undermine the efficacy of the stock selection process.

Portfolio Rebalancing
Adjusting position sizes to maintain the appropriate return and risk factor exposures while
minimizing transaction costs and maximizing after-cost and after-tax returns.
Notes
 Minimizing transaction costs doesn’t mean reducing trading frequency. Small
frequent trades are usually preferable to large and infrequent trades.
 Trading based on noise rather than information content can occur when changes in
quant factors are driven by stale data.
 Rebalancing involves a trade-off between the opportunity cost of not trading and the
cost of trading.

Post-Earnings-Announcement Drift
The tendency of companies to continue outperforming following a positive earnings
announcement and continue underperforming following a negative earnings
announcement.

Price Distress
The current share price divided by the high share price over the measurement period.

Pro-Rating
Adjusting historical and forecast data by taking different slices from different fiscal years so
that the same time-period is used for all companies.

Put Option
The right, but not the obligation, to sell a security at a specified price before a pre-defined
date.

Quantamental Investing
Combining quant factor analysis with fundamental analysis.
Notes
 Quantamental investing typically involves screening for stocks based on quant factor
scores and then performing fundamental analysis on the stocks that satisfy the
screens.
 Quant factor screens, particularly value biased screens, often identify stocks which
look too good to be true. This includes companies which have extremely low
earnings multiples and companies which, courtesy of large net cash holdings, look
extremely cheap on an enterprise value basis. These companies often require more
detailed fundamental analysis to determine whether they are attractive investment
opportunities or value traps.
 I believe that all fundamental investors should at least be aware of the quant factor
scores of the companies they analyze. Does the company have positive or negative
earnings revisions? Is it cheap or expensive based on a diverse range of valuation
metrics? How does it rate based on certainty criteria?

Quant Factor
Anything about a company which can be measured, has intuitive appeal and has exhibited
predictive power, either by itself or in combination with other factors.

Notes
 There are a lot of quant factors – which is why I classify them into 6 different groups.
 The term factor zoo, originally coined by John Cochrane in 2011, has been used to
describe the proliferation of quant factors.

Rank IC
A commonly used performance metric based on cross-sectional regressions of ranked factor
scores and ranked stock performance. The Rank IC represents the correlation coefficient
based on this regression. This number is usually averaged over the backtest period to
calculate the Average Rank IC.
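Notes
 A minimal sketch of the calculation for a single period in Python, assuming aligned arrays of
factor scores and subsequent stock returns (the helper name is an illustrative assumption):

from scipy.stats import spearmanr

def rank_ic(factor_scores, forward_returns) -> float:
    # Spearman correlation = Pearson correlation computed on the ranked values
    ic, _ = spearmanr(factor_scores, forward_returns)
    return float(ic)

# The Average Rank IC is the mean of rank_ic() computed at each rebalance
# date over the backtest period.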

Regression Analysis
A statistical method for analyzing the relationship between a dependent variable and one or
more independent variables.
Notes
 For backtesting quant factors, the dependent variable is the expected return.
 A linear least squares regression is performed which minimizes the sum of the squared
distances of data points from the line of best fit (hence outliers can distort the analysis).

 As detailed in this book, multiple independent variables can be included in the
regression analysis to control for risk factor exposures and to see if a quant factor
exhibits predictive power beyond that encapsulated in other quant factors.

Return Volatility
The standard deviation of a security’s total returns.
Notes

 This can be calculated over different time periods using different data frequencies
(eg daily or weekly).
 Typically, it is presented as an annualized number by multiplying the standard
deviation by the square root of the number of observations in a year (eg 52 in the
case of weekly data).

Reversal Rate
The rate at which the negative side-effects associated with ultra-low and negative interest
rates exceed the benefits.

Risk-Adjusted Performance
The return after taking into account the level of risk that was taken to generate the return.

Sentiment Factors
Sentiment factors are factors calculated based on analyst data such as recommendations,
target prices and financial data estimates (eg EPS and DPS estimates).

Risk Arbitrage
An event driven trading strategy that attempts to exploit mispricing opportunities
associated with mergers and acquisitions.
Notes
 Typically, the trade involves taking a long position in the company being acquired in
anticipation of the merger or acquisition being completed or, even better, a superior
offer being made.

Shareholder Yield
The amount of money returned to shareholders (dividend payments plus share repurchases
minus share issuance) divided by the share price.
Notes
 In countries where buybacks are commonplace, Shareholder Yield is preferable to
Dividend Yield as a valuation factor.

Short Interest
The proportion of shares that have been sold short relative to the number of shares that are
freely traded.
Notes
 A high level of short interest can result in a short squeeze.

Short Selling
Selling shares in a company you don’t own. This is done by borrowing the shares from an
existing shareholder so that they can be sold on the market. To complete the process, the
shares need to be bought back and returned to the stock lender. The short seller needs to
pay the lender a fee for the privilege of borrowing the security.
Notes
 Short selling is dangerous for two related reasons: losses are potentially unlimited
and the size of the position increases as losses accrue.
 Gains are limited to the size of the initial investment.
 Short selling is best suited to sophisticated investors who have a high conviction
level or need to hedge long stock positions.

Short Squeeze
Forced covering (buying) of a short position due to a high level of short interest
accompanied by rapid share price appreciation.
Notes
 The catalyst for a short squeeze is typically a favorable event such as an earnings
surprise which causes the share price to rise. This then forces short sellers to buy
stock to mitigate their risk exposures. When there are a large number of short
sellers (ie the level of short interest is high), this can result in further share price
appreciation and further buying by short sellers, resulting in a short squeeze.
 In rare circumstances, activist investors target stocks where there is a high level of
short interest, push up the share price and then cause a short squeeze. The best
example of an activist driven short squeeze occurred in January 2021 when a large
number of retail investors targeted GameStop, a stock which at the time had an
extremely high level of short interest. This caused the share price to spike and
forced numerous short sellers to cover their short positions.
 A short squeeze is a liquidity driven event which is typically short-lived.

Smart Beta Funds


Funds that select and weight stocks using a set of rules that are different to the market
capitalization approach used by most index providers.

Alternative Definition
Funds which are dressed up to look like sophisticated quant factor offerings, but which
typically have overly simplistic stock selection, portfolio construction and rebalancing
methodologies.

Smart Consensus Forecast


A forecast which has been adjusted from the mean consensus forecast in an intuitive way
which, based on backtests, improves the accuracy of the forecast.

Standardized Unexpected Earnings (SUE)


The difference between reported EPS and the consensus forecast EPS divided by the
standard deviation of analyst estimates.
Notes
 This measure of earnings surprises is superior to the percentage difference between
reported EPS and the consensus forecast EPS for two reasons:
1. It gives more weight to surprises where analyst forecasts are tightly grouped
together (and vice versa)
2. Percentage changes can be very noisy, particularly for companies where
either the reported EPS or the consensus forecast EPS is close to zero.
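 As a hypothetical illustration: if reported EPS is $1.10, the consensus forecast is $1.00 and
the standard deviation of analyst estimates is $0.05, then SUE = (1.10 - 1.00) / 0.05 = 2.0, a
strong positive surprise. With a wider estimate dispersion of $0.20, the same beat would
produce a SUE of only 0.5.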

Statistical Arbitrage
Refer to Pairs Trading.

Stop-Loss Order
An order to unwind a position when the price reaches a specified level.
Notes
 Stop loss orders are typically a crude form of risk control and can be a great way of
locking in losses without being able to benefit from any subsequent price gains.
Nevertheless, they can be appropriate in some circumstances. Please refer to the
Stop-Loss Orders Appendix for more information.

Structured Securities
The U.S. Securities and Exchange Commission defines structured securities as "securities
whose cash flow characteristics depend upon one or more indices or that have embedded
forwards or options or securities where an investor's investment return and the issuer's
payment obligations are contingent on, or highly sensitive to, changes in the value of
underlying assets, indices, interest rates or cash flows".
Alternative Definition
An investment product, typically with an extremely high fee load, that is designed to
produce a pre-defined set of return and risk outcomes that are different to taking a long
position in a stock or a portfolio of stocks.

Survivorship Bias
Excluding stocks or funds from backtest data because they have been de-listed or don’t
currently satisfy the selection criteria.
Notes
 Survivorship bias can be particularly problematic when backtesting momentum
factors and strategies.

Systematic Risk
Risk which cannot be diversified away. It is represented by a stock’s market beta.
Notes
 Based on the Capital Asset Pricing Model, systematic risk is the only risk measure
that matters because non-systematic risk can be eliminated by holding a diversified
portfolio of securities. This claim is based on a host of simplifying assumptions.
 Other measures of risk, such as return volatility, are also highly relevant when
assessing a company’s risk profile.

Systematic Investing
An investment style based on an automated, rules-based investment process.
Notes
 A systematic investment process needs to be extremely sophisticated to generate
consistently strong excess returns. There are lots of competing systematic processes
and only the strongest survive and thrive.
 In the world of systematic quant factor investing, the industry has become
somewhat bifurcated. There are the very large quant fund managers which are
extremely well resourced and have access to sophisticated data sets and factors –
and there are the other fund managers. The amount of money being managed helps
level the playing field – given that large firms manage more money which makes it
more difficult for them to outperform – but it is still very hard for the less
sophisticated systematic investors to compete.

Tax-Loss Selling Anomaly


A mispricing anomaly that occurs at the end of the tax year due to investors holding on to
winners to avoid crystalizing a capital gain and selling losers to realize a capital loss.

Technical Analysis
Analyzing patterns in historical price and volume data to predict future stock returns.
Alternative Definition
Technical Analysis is a misnomer in that there is typically nothing technical about the
analysis. In particular, analyzing visual patterns on share price charts, such as “double tops”
and “head and shoulder” patterns, is superficial and pointless.

Total Return
The change in the diluted share price between two dates, including any dividends paid. For
example, if the diluted share price increases from $1.00 to $1.10 and the company paid a
dividend of $0.10, the total return would be 20%.

Terminal Value
The net present value of all future cashflows assuming a constant growth rate.
Notes
 The terminal value calculation assumes cashflows continue indefinitely and the
growth rate is constant.
 If the growth rate exceeds the discount rate, the terminal value is infinite.
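 Under the standard constant-growth (Gordon growth) formulation, Terminal Value =
CF1 / (r - g), where CF1 is the first cashflow beyond the explicit forecast period, r is the
discount rate and g is the constant growth rate. As a hypothetical illustration, a cashflow of
$100 with a 9% discount rate and a 3% growth rate gives a terminal value of
100 / (0.09 - 0.03), or approximately $1,667.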

Tracking Error
The standard deviation of the difference between the portfolio and benchmark returns.
Notes
 Tracking error is usually presented as an annualized number by multiplying the
standard deviation by the square root of the number of observations in a year (eg if
monthly data is used, the standard deviation is multiplied by the square root of 12).

Transaction Costs
The costs incurred as a result of buying and selling securities. Transaction costs comprise
brokerage, bid-ask spread, market impact, implementation shortfall and stamp duty costs.
Notes
 It is difficult to model some aspects of transaction costs, most notably market impact
costs. This often leads to investors underestimating transaction costs and
overestimating backtest results for short-term trading strategies.

T-Stat
A measure of statistical significance.
Notes
 When looking at a performance measure (such as the Rank IC based on correlation
analysis or the Excess Return based on portfolio simulations), quants want to
determine if the performance is statistically different from zero. The t-stat can be
used to make this determination.
 With a large number of data points, a t-stat of 2 indicates you can be 95% confident
that the performance measure is statistically significant. A higher number indicates
even greater confidence.

Value Factors
Factors comparing a company’s share price with any field or combination of fields on the
company’s financial statements.
Notes
 A value factor must include the company’s price in the calculation.

Value Traps
Companies which are cheap based on one or more value factors but which are cheap for a
valid reason such as poor earnings certainty and structural challenges.

Window Dressing
Fund managers’ tendency to improve the appearance of their portfolio holdings at the end
of their reporting period by selling underperforming companies and buying outperforming
companies.

Winsorizing
Capping outliers in a data series. This is typically done before normalizing factor scores to
limit their impact and ensure the final distribution of z-scores more closely resembles a
standard normal distribution.

Z Score
A normalized score which represents the difference between the raw score and the mean,
divided by the standard deviation of raw scores. The score represents the number of
standard deviations from the mean.
Notes
 Quants frequently convert raw scores into z scores before combining them.
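 A minimal sketch in Python of winsorizing raw factor scores and converting them to z scores
(the percentile caps and function name are illustrative assumptions):

import pandas as pd

def winsorized_z_scores(raw_scores: pd.Series, lower_pct: float = 0.01,
                        upper_pct: float = 0.99) -> pd.Series:
    # Cap outliers at the chosen percentiles before normalizing
    lower, upper = raw_scores.quantile([lower_pct, upper_pct])
    capped = raw_scores.clip(lower=lower, upper=upper)
    # Z score: distance from the mean measured in standard deviations
    return (capped - capped.mean()) / capped.std(ddof=1)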
