The Art and Science of Trading by Adam Grimes
Course Workbook
Detailed Examples & Further Reading
ISBN-13: 978-1-948101-01-1
ISBN-10: 1-948101-01-7
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
To my readers:
Your support for my work has touched me.
Your dedication and perseverance have inspired me.
Your questions have challenged me, and I’ve learned so much from you.
I am honored to be a part of your journey.
Thank you
“It’s what you learn after you know it all that counts.”
– John Wooden
Foreword (and how to use this book)
This is not a typical trading book. If you’re going to use it most effectively,
you need to know a few things about it.
Whitepapers
The whitepapers in Part II of the book have never been published in their
entirety, though some of the information found its way into various
presentations and blog posts I have done over the years. They give some good
examples of ways in which we can apply quantitative techniques to market
data. Hopefully, I’ve communicated some of the nuance involved with this
work, and stressed the need for humility—we never have firm, final answers
to most of our questions, and there’s always another way to consider the
problems involved. The last chapter in this section provides some solid
examples of quantitative tendencies that support the style of trading in my
first book and the online course.
There are several ways you can use the material in this book most
effectively, depending on your experience and objectives.
As a Stand-alone Reference
The whitepapers in Part II can be read by themselves, in the order in which
they are presented. They challenge some of the tools
traditionally used by many technical traders. It is necessary to reiterate a
point, here: the objective of these papers is not to disprove anything. In fact, it
is not the nature of scientific inquiry to think in those terms. Rather, we are
seeking evidence that these tools, which are purported to be very powerful,
offer a statistical edge in the market.
These tools, in my studies, do not show an edge, but there could be many
reasons for this. Perhaps the tests are flawed, perhaps the data was flawed,
perhaps the methodology missed something important, or perhaps the tools do
not have an edge. Regardless, these papers will give you some perspective on
the problems of technical trading, and may suggest some new directions for
your own investigations and research.
I hope you find this material interesting, useful, and fun. I have enjoyed
writing it for you, and I wish you all possible success in your trading
endeavors!
Adam Grimes
October 2017
New York, New York
Course Catalog
This is a list of modules and units from the online course, available at no
charge from MarketLife.com.
I. Chartreading 101
Introduction
Basic Principles
Basic Chart Setup
Charts: Going Deeper
Reading Price Charts
V. The Anti
The Anti
Quantitative Techniques II
What Works
Cognitive Biases
Trading System Design
Support & Resistance Project
II. Pivots and swings, support and resistance, basic patterns of
trend and range
13-18 (two forces intro, pivots)
19-21 (basic swing patterns)
49-64 (trends)
97-120 (ranges)
78-84 & 93-96 (trend analysis)
VIII. Risk
263-290 (risk)
This module focuses on some important foundational concepts that are
often overlooked. We begin with an investigation of what it means to
have a trading edge and why it matters. Our goal in all of this work is
to focus on practical application, but to also supply enough theory to support
the work and to make sure that the trader understands the “whys” as much as
the “hows”.
This module also includes a solid look into price charts. Too often, traders
begin their work without truly understanding what the chart represents. Chart
display choices are made based on vague visual appeal, similarity to
something seen elsewhere, or a recommendation from a friend (who may or
may not know what he is doing!). Thinking deeply about the chart also leads
us to our next area of focus: chart stories.
I came up with the term “chart story” when I was working with beginning
traders. When we think in these terms, we imagine that every aspect of every
bar is important, and we try to understand the part every tiny detail plays in
the developing story of the market. (We must acknowledge right away that
this line of thinking is misleading because it does not respect the random
variation in the market. Its value is only as a training tool to help build solid
habits in chartreading.) This is one way to look at and to think about price
charts, and it lays a solid foundation for developing market feel down the
road.
The supplementary readings for this section also cover some thoughts on
the process of learning to trade and why it can be so difficult. Simply put, we
do not learn to trade at all—rather, we become traders. And that journey,
richly rewarding as it may be, is long, challenging, and fraught with danger.
The trader who understands this from the first steps is much better equipped
to succeed.
For the purpose of this exercise, assume that every bar has a story; your job
is to tell that story. Rather than worrying about being right or wrong, focus on
the thought process and inductive nature of this analysis. There really are no
wrong answers here, and you may even find value in doing these exercises
more than once. Finding interesting examples on your own would be another
way to extend the analysis.
If the chart has text, answer the question or do the specific analysis on the
chart. If there is no text, then write a separate explanation for each labeled bar
—in all cases, make sure that each bar designated with a label receives your
attention and a text explanation.
Adequate explanations will usually be 2-6 sentences long and will focus on
concepts such as:
The position of the open and close within the day’s range
The position of the open and close relative to each other
The range of the bar relative to previous bars
Consider each bar both alone and in relation to previous bars
Any “surprises” (This is a deliberately large category.)
Action around any obvious support and resistance levels. (This
is not an exercise in support and resistance, so do not focus on
this aspect.)
It may be useful to think in terms of large groups of buyers and
sellers driving the market, and the battle between those groups.
Section 3: Trendlines
The charts in this section are slightly more compressed and are presented
without commentary. Draw trendlines, following the rules from the module
(and pp. 84–86 of The Art and Science of Technical Analysis). A trendline
should:
Capture the swing low before the high of the trend in an uptrend
(and the reverse is true for a downtrend.)
Be attached as far back into the trend as possible, but capturing
the beginning of the trend may not be possible.
Not cut any prices between the two attachment points. A
trendline may cut prices after the attachment point (i.e., the
trendline was broken.)
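A hypothetical sketch of the third rule, the "no prices cut between attachment points" check, for an uptrend line drawn under the lows. The function names and the bar representation are my own, not from the course:

```python
def line_value(x1, y1, x2, y2, x):
    """Price of the trendline at bar index x (linear interpolation)."""
    slope = (y2 - y1) / (x2 - x1)
    return y1 + slope * (x - x1)

def is_valid_uptrend_line(lows, i1, i2):
    """True if no bar's low cuts below the line between the attachment bars."""
    for x in range(i1 + 1, i2):
        if lows[x] < line_value(i1, lows[i1], i2, lows[i2], x):
            return False
    return True

lows = [10.0, 10.4, 10.2, 10.9, 10.8]
print(is_valid_uptrend_line(lows, 0, 4))  # False: the bar-2 low cuts the line
print(is_valid_uptrend_line(lows, 2, 4))  # True
```

The mirror image (highs instead of lows, and a reversed comparison) would check a downtrend line.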
Note the transitions into ranges and then the breaks into trending action,
and do whatever analysis you feel appropriate. Many charts will include
several trends, and you may find different definitions depending on the
timeframes you consider. As long as the trendlines are drawn according to the
rule set, they are valid.
These charts were chosen to be a mix of relatively straightforward and
more complex patterns.
Section 4: Trendline Research Project
A word of warning: done well, this is a big project. It will take some time,
but that time will be well rewarded!
The point of the study is to understand what happens as trendlines are
drawn in evolving market data. Ideally, you would use a software program
that would allow you to go one bar at a time (to replicate, as much as possible,
the experience of having the chart form in real time.) As the market develops
on your screen, decide when and where to draw trendlines. (Review Module
3, Unit 4 from the course for guidelines on drawing trendlines.)
Once the trendlines are drawn, notice what happens when the market
touches them. Does the touch of the trendline hold? Does it indicate the end
of the trend? What happens if you combine it with bands and/or swing
analysis?
Start to think about how you might trade these structures.
You need to keep some type of records. Screencaps of your charts would be
one possibility, but it would also be a good idea to somehow score the
interaction with the trendlines.
This project is deliberately broad, but should encourage you to spend at
least several days investigating action around trendlines. Do not trust what a
book tells you—ask the market itself!
If you are not able to generate bar by bar charts, you may work in the
middle of a chart but try, as much as possible, to imagine the chart is being
revealed one bar at a time. You will not replicate the feeling of hidden
information, but you will draw consistent trendlines on correct pivots this
way. If you simply start drawing trendlines on charts, you will likely make
many mistakes on attachment points and pivots based on what trend
information was available at the time.
Remember: draw the trendline, and then see what happens “to the right” of
the correctly drawn trendline when the market engages the trendline.
Section 6: Readings
From The Art and Science of Technical Analysis: Market Structure, Price
Action, and Trading Strategies by Adam Grimes, Wiley, 2012:
85-92 (trendlines)
121-148 (between trends and ranges)
189-212 (indicators and tools for confirmation)
31-48 (market cycles and the four trades)
These readings will lead you deeper into the intricacies of the transitions
between trends and ranges, and will give you further examples of correctly
and incorrectly drawn trendlines.
On Trends
This pattern can be dangerous, and understanding it can save you a lot of
grief. I probably need a better name for it, but I’ve taken to just calling it
“slide along the bands”, because that’s what the market does. In bullet points,
here are the main concepts:
This pattern is the proverbial double-edged sword. On one hand, the market
can go much further than we might expect. When you are fortunate to be
positioned on the right side of such a move, the best thing to do is to focus on
trading discipline: maintain a correct stop and tighten that stop every 2-3 bars
as the market makes new extremes. Don’t look at P&L, and don’t overthink.
Let the market tell you when to get out by hitting your stop.
However, this pattern does bring some unusual risks. When it ends, it often
ends in a volatile spike against the trend. We absolutely must respect our
stops, and we cannot be upset if we are stopped out in noise. When this
pattern ends, the market is probably going to become very emotional. The
market can be emotional; you, as a trader, cannot.
So many times, in trading, our entire job description boils down to one
simple directive: don’t do anything stupid. Don’t make mistakes.
Understanding this simple pattern can help you avoid many mistakes and
navigate this difficult trading environment.
On Developing a Style
Trend Continuation
Trend continuation plays are not simply trend plays or with-trend plays.
The name implies that we find a market with a trend, whether a nascent trend
or an already well-established trend, and then we seek to put on plays in the
direction of that trend. Perhaps the most common trend continuation play is to
use the pullbacks in a trend to position for further trend legs. It is also
possible to structure breakout trades that would be with-trend plays, and there
is at least one other category of trend continuation plays—those trades that try
to get involved in the very early structure of a new trend, before the trend has
emerged with certainty.
There is a problem, though: It is important to have both the risk and the
expectation of the trade defined before entry; this is an absolute requirement
of any specific trade setup, but it can be difficult with trend continuation
trades. The key to defining risk is to define the points at which the trend trade
is conclusively wrong, at which the trend is violated. Sometimes it is not
possible to define points at which the trend will be violated that are close
enough to the entry point to still offer attractive reward/risk characteristics.
On the upside, the best examples of these trades break into multileg trends
that continue much further than anyone expected, but the most reliable profits
are taken consistently at or just beyond the previous highs.
Trend Termination
More than any other category, precise terminology is important here. If we
were less careful, we might apply a label like “trend reversal” to most of the
trades in this category, but this is a mistake. That label fails to precisely define
the trader’s expectations. If you think you are trading trend reversal trades,
then you expect that a winning trade should roll over into a trend in the
opposite direction. This is a true trend reversal, and these spots offer
exceptional reward/risk profiles and near-perfect trade location. How many
traders would like to sell the high tick or buy the very low at a reversal?
However, true trend reversals are exceedingly rare, and it is much more
common to sell somewhere near the high and to then see the market simply
stop trending. Be clear on this: This is a win for a trend termination trade—
the trend stopped. Anything else is a bonus; it is important to adjust your
expectations accordingly.
Trend termination trades are countertrend (counter to the existing trend)
trades, and trade management is an important issue. Most really dramatic
trading losses, the kind that blow traders out of the water (and that don’t
involve options) come from traders fading trends and adding to those
positions as the trend continues to move against them. If this is one of the
situations where the trend turns into a manic, parabolic blow-off, there is a
real possibility for a career-ending loss on a single trade. For swing traders,
there will sometimes be dramatic gaps against positions held countertrend
overnight, so this needs to be considered in the risk management and position
sizing scheme. Perhaps more than any other category of trade, iron discipline
is required to trade these with any degree of consistency.
On Market Rhythm
I took the screenshot above in the middle of the day (10/6/14) when I
noticed that many of the grains were putting in large standard deviation up
days, which is another way to say that they were making large moves relative
to their own volatility. Here is also a case where the right tools can be helpful;
would you have seen that this was a significant day just by looking at the chart?
Maybe, but the panel below the chart quickly shows the significance of this
move. On a volatility-adjusted basis, this was the largest upward move in
nearly a year.
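One way to quantify "a large move relative to its own volatility" is to express each day's return in units of trailing return volatility. The sketch below is a stdlib-only illustration; the 20-day lookback is my own assumption, not the tool shown in the screenshot:

```python
import statistics

def sigma_moves(closes, lookback=20):
    """Each daily return expressed in units of the trailing stdev of returns."""
    rets = [(b - a) / a for a, b in zip(closes, closes[1:])]
    out = []
    for i in range(lookback, len(rets)):
        vol = statistics.stdev(rets[i - lookback:i])  # trailing window, excludes today
        out.append(rets[i] / vol if vol > 0 else 0.0)
    return out

# 20 quiet alternating days, then a +3% day: the final move is a multi-sigma event
closes = [100.0]
for i in range(20):
    closes.append(closes[-1] * (1.005 if i % 2 == 0 else 0.995))
closes.append(closes[-1] * 1.03)
print(sigma_moves(closes)[-1])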
Now, this is only the first stage of analysis, but it is an important one. Over
the years, when I have worked with, coached, and trained traders, I used to
jokingly call this “hey, that’s different!” In reality, it is not a joke. Noticing
that the dominant market pattern is shifting can be an important piece of
information.
The point of this is the concept, rather than the specific example here. Find
an obvious break in the existing market pattern, and then pay attention to
what happens afterward. So, what can be different? Here are some examples:
Largest volatility-adjusted move over a certain time period.
Obvious move that breaks a chart pattern.
Counter-to-expected breakout, but, again, it must be obvious.
Sudden, sharp reversal like a single day that reverses the
previous week’s movement.
Quiet market goes into an extended period of volatility, or vice
versa.
This is just a starting point; you can make a much longer list of things that
indicate market dynamics might be shifting. One key point: though this is a
simple concept and is simple to use, it must be based on things that really
work. If it is based on technical ideas that have no foundation in market
reality then you are only analyzing insignificant noise. Understand how
markets really move, how they usually move, and then—look for something
that breaks the pattern. Look for something that jumps out and says, “hey,
that’s different!”
Journal
This is the perfect time—today, right now, immediately—to start keeping a
journal.
Though you can consider the issues of format and exactly what you want to
put into the journal later, this practice will be most effective if it is a routine
done consistently. Essentially, you want to make journaling a habit—a very
good, constructive habit that will ultimately play a big part in your success.
For the beginning trader, it’s sometimes confusing to know what to put in
your journal. If you aren’t sure what to write, write that. Write about your
feelings about journaling, your feelings about building habits. Write about
things in your life and world you want to change. Write about your
experiences trading. Write about your future trading, and what you think of
the work you’re doing in this course. Write about kittens. It doesn’t matter!
Just write a little bit, each day, and let this work evolve as you go along.
P&L Sheet
You also do need a P&L sheet that allows you to record at least the
following datapoints for each trade:
Date In
Price In
Price Out
Initial Stop
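An electronic version of such a sheet might start as simply as this. The R-multiple helper is my own addition (a common convention for expressing results in units of initial risk), not a field prescribed by the text:

```python
from dataclasses import dataclass

@dataclass
class TradeRecord:
    date_in: str
    price_in: float
    price_out: float
    initial_stop: float

    def r_multiple(self):
        """Result in units of initial risk (long-trade convention; shorts flip signs)."""
        risk = self.price_in - self.initial_stop
        return (self.price_out - self.price_in) / risk

t = TradeRecord("2017-10-02", 50.0, 53.0, 48.5)
print(round(t.r_multiple(), 2))  # 2.0: made twice the initial risk
```

A list of such records is easy to dump to a spreadsheet later, so nothing is lost by starting minimal.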
Before you can really do this work, you need access to historical charts and
some record keeping system; pencil and paper will work, but electronic
formats are much better. You then need to define the pullback pattern. This
can be difficult, because there is admittedly (and deliberately) some element
of discretion. Do not be discouraged. The way in which you see these patterns
will evolve and change, but it is the exposure to market data that will let you
evolve. This is truly a case where the only way you learn is by doing.
So, define the pullback pattern. What, specifically, will get you interested in
looking for a pullback? How will you define a strong enough or sharp enough
move to tell you that a pullback might set up? How will you monitor the
shape of the pullback as it develops? Where will you actually get into trades?
Where will you place that initial stop?
Take some time to answer those questions, and come up with a rule sheet
for pullbacks. (Write it down.) Then, go through some market data bar by bar,
recording key stats for each “trade”, and see how it works and how it feels;
your subjective sense or feeling is valuable. At the first stages, this exercise is
as much about you as it is about the market.
Like anything else, this process becomes easier the more you do it. To have
a valid test, you need a significant number of trades, but just get started with
the exercise this week.
Section 4: Readings
From The Art and Science of Technical Analysis: Market Structure, Price
Action, and Trading Strategies by Adam Grimes, Wiley, 2012:
65-77 (pullback intro)
154-169 (pullback detail)
291-315 (pullback examples)
385-388 (journaling)
The readings this week will help us to move from the purely theoretical,
high-level perspectives on markets to looking at applied trading patterns.
Seeing many examples of the pullback, and considering how to manage trades
that set up based on this pattern, will give the trader good ideas for continuing
to explore this aspect of market behavior.
On Journaling
These are some points that will get you started on this very important
practice
On Quantitative Techniques
That’s really it, and it’s not so intimidating: define a condition; test it, and
then look at the results. Whether you’re working with pencil and paper, a
spreadsheet, or working within a programming language, this technique of
asking questions and seeing what the data says will help you understand the
market better and find opportunities for profitable trading.
On Pullbacks
These are strong numbers that seem to show a tremendous edge in the
marketplace, but let’s dig deeper. We should, first of all, be on guard because
the effect is so strong—when we find something that appears to be a
statistical homerun, we’ve probably made a mistake somewhere. Let’s find
that mistake.
The first step is to ask if the results make sense. I would argue, right out of
the gate, that they do not. Why should such a strong effect exist? Perhaps we
could make a case for some kind of monthly momentum—maybe managers
tend to put money to work at the beginning of the month and that has some
persistence through the entire month. Maybe there is another reasonable
explanation. It is, at least, possible to make an argument, but we are already
suspicious because there is no clear logic driving these results.
However, something interesting happens if we examine day two of the
month; we find the same effect. No, not for day 1 and day 2 being a
cumulative decline for the month (though we can do that test, too), but simply
if day 2 is up, the month tends to be up. Also, day 3… and day 4…. In fact,
no matter which day of the month we examine, we find if that day is up, the
month tends to be up! If it is down, the entire month tends to be down.
So, now we have a problem. Can we make any possible argument to
explain this? How do we feel about the argument of monthly momentum or
managers putting money to work on, say, day 17 of the month? Obviously,
this is now completely illogical, so we must look elsewhere for an
explanation. Maybe we should turn our attention to the way we ran the test.
Maybe there is a problem with the methodology.
As an aside, I once knew a trader who had a system that was based on a
similar idea. He had done enough research to know that his system “worked”,
statistically speaking, no matter which day or days of the month (or quarter,
or week, or even which hour of the intraday period) he used as a trigger. Over the
course of about a decade he had traded the system live, and had lost a
significant amount of money—high seven figures on this particular system.
You might ask why he kept going back to it, but the reason was (bad)
statistics. Every way (except the right way) he looked at the system and
twisted the inputs, the results were astoundingly strong, yet the system failed
to produce in actual trading.
This is not an academic exercise; statistical mistakes are not abstract. For
traders, statistics are life and death; statistics are the tool through which we
understand how the market moves. Bad statistics lead to bad decisions, and
bad decisions cost money.
Before we get to the methodological error, here are the “correct” statistics
for the “first day of the month” effect:
Based on these test results we can say we see no effect—that whether the
first day of the month (or, in fact, any particular day) is up or down has no
significant effect on the overall direction of the month. There is no tradable
edge here, and, unless our test has missed something (always a possibility—
stay humble!), there is simply nothing here worth thinking about.
There is a simple solution to mistakes like this, and I’ll share it with you
later on. For now, spend some time thinking about where the error might be in
the method, why it matters, and why it might be hard to catch. Here’s a hint:
what is a monthly return?
From here on out, I hope to accomplish two things:
To understand how easily “future information” can contaminate statistical studies, and how
even a subtle bias can introduce serious distortions.
To suggest that one simple check—asking yourself whether the statistic could possibly have
been executed in the market as a trade—can protect us from errors like this.
The specific error around the first day of the month effect is a common
mistake. I’ve certainly made it myself in tests and analysis enough times to
know that it is something always worth checking for, and I’ve seen it in stats
people use for many technical factors like moving average crosses,
seasonality, trend indicators, buying/selling at 52 week highs, the January
effect, and many others.
This is a sneaky error. Though the check does work—asking whether the
tendency was actually tradable on the timeline—it takes careful thought,
as mistakes are not always apparent at
first glance. Be ruthless in examining the information you use and be even
more vigilant with your own thinking. Bad statistics lead to biases and poor
decisions.
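To make the contamination concrete, here is a toy simulation on purely random data (the return distribution and the 21-day month are illustrative assumptions). When the "monthly return" includes the trigger day itself, a strong effect appears out of pure noise; measured only on the days that were actually tradable after the signal, it vanishes:

```python
import random

def first_day_effect(n_months=10000, days=21, seed=7):
    """P(month up | day 1 up), with the month including vs. excluding day 1."""
    rng = random.Random(seed)
    triggers = up_biased = up_tradable = 0
    for _ in range(n_months):
        rets = [rng.gauss(0, 0.01) for _ in range(days)]
        if rets[0] > 0:
            triggers += 1
            if sum(rets) > 0:       # biased: the trigger day is inside the "month"
                up_biased += 1
            if sum(rets[1:]) > 0:   # tradable: only the days after the signal
                up_tradable += 1
    return up_biased / triggers, up_tradable / triggers

biased, clean = first_day_effect()
print(biased, clean)  # the biased rate sits well above 50%; the tradable one near 50%
```

This also illustrates the author's one-question defense: only the second measurement could have been executed in the market as a trade.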
On Moving Averages
When you look at that chart, focus your attention on the bottom line, which
shows what the Russell 2000 did, relative to its unconditional (baseline)
return. For instance, looking at the entire history of the Russell, we find it is
up, on average, 0.2% one week later. Following the Death Cross, it is down,
on average, -1.92%, meaning that it underperformed its baseline return by
2.12%. So, here is the first interesting point: the Death Cross actually does
show a significant sell signal in the Russell 2000 one week later. However,
this effect decays; the key question here is how large is the effect, relative to
the variation for the period? A year out, we are seeing standard deviations
greater than 15.0%, against a very small positive effect of 1.93%. So, what we
can say from this test is that we do find a statistically significant, very short-
term sell signal in the Russell 2000 in the data examined. This sell signal
appears to be strong for a week, and then decays and we see no longer term
significance. Interesting.
Where do we go from here? Well, first, I’d flip the test and look at the so-
called Golden Cross, which is the inverse of the Death Cross, when the 50-day
moving average crosses above the 200-day moving average:
Here, we do not have any significant effects at all; this is not necessarily a
condemnation of the test (perhaps there is a reason the effect would not be
symmetrical), but it certainly calls for further study. What other questions
should we ask? It is entirely possible to find a valid signal just due to chance,
so we’d be wise to repeat this test with other assets and other timeframes. We
also might dig a bit deeper and look at each of the events, though we should
be careful of doing too much work like that because it is easy to “curve fit”
and select what we want to see. Still, actually looking into the data can help to
build a deeper understanding of how the market looks. A summary test is only
that: a very broad, rough, and blunt summary that may miss much significant
detail.
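As one way to sketch the event-detection step of such a study, the following is a minimal stand-in (the helper names are mine; the shortened 3/5 windows in the example only make the cross visible in a few bars, while the study itself uses 50/200-day averages):

```python
def sma(xs, n, i):
    """Simple moving average of the n values ending at index i."""
    return sum(xs[i - n + 1:i + 1]) / n

def death_cross_events(closes, fast=50, slow=200):
    """Bar indices where the fast SMA crosses below the slow SMA."""
    events = []
    for i in range(slow, len(closes)):
        if (sma(closes, fast, i) < sma(closes, slow, i)
                and sma(closes, fast, i - 1) >= sma(closes, slow, i - 1)):
            events.append(i)
    return events

# Toy data with a clear top, so a single cross-down appears
closes = [1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]
print(death_cross_events(closes, fast=3, slow=5))  # [8]
```

From each event index, forward returns over various horizons can then be collected and compared against the unconditional baseline, as in the tables discussed above.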
Another question that I find very interesting is “why do people focus so
much attention on mediocre or, in some cases, absolutely meaningless
technical tools?” One reason is probably due to cognitive bias. For instance,
one of the largest one-week selloffs following a Death Cross in the Russell
was in 2008; anyone who identified it then and remembers the strong selloff
is likely to have some emotions associated with that event. Furthermore, the
signals certainly can look convincing on a chart:
It would be easy to find a few charts like that, and “show” that these crosses
work very well, but this is simply a case of choosing good examples. As I’ve
written before, much of the discipline of traditional technical analysis is
visual, not quantitative, so technical analysts are prone to these types of
errors, even with the best of intentions. The only defense against these errors
is using the tools of statistics to take a proper look at the data and to consider
the effects in the cold, hard light of quantitative analysis. In the case of the
Death Cross, there does not appear to be any reason to focus on this event,
and it appears to have no long-term significance for the market.
This table is, perhaps, slightly harder to read, but it gives the same type of
test results in more depth, focusing on a shorter time period (20 trading days
after the event). I would suggest you focus on the second column in each box,
which shows the excess return (the signal mean minus the baseline mean) and
the p= column, which shows the p-value for the test.
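A simplified version of the statistic behind such a table might look like this; the normal-approximation z-test below is my stand-in for whatever exact test generated the published p-values, and is only reasonable for large samples:

```python
from statistics import NormalDist, mean, stdev

def excess_return_test(signal_rets, all_rets):
    """Excess mean return of signal events over baseline, with a two-sided
    normal-approximation p-value computed on the signal sample."""
    excess = mean(signal_rets) - mean(all_rets)
    se = stdev(signal_rets) / len(signal_rets) ** 0.5
    z = excess / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return excess, p

sig = [0.01, 0.02] * 15      # 30 made-up signal-day returns
base = [0.0, 0.01] * 50      # 100 made-up baseline returns
excess, p = excess_return_test(sig, base)
print(round(excess, 3), p < 0.05)
```

A small p-value says the excess is unlikely to be noise; it says nothing about whether the excess is large enough to trade, which is the standard-deviation question raised earlier.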
In this case, we see echoes of the “weird” negative return that we saw in the
DJIA test. (This is, more or less, an artifact. The “juice”, i.e., big returns, in
stocks appear to happen at the extremes. Tests that select events more likely
to be in the “middle” of the data (relative to high/low range), as this average
crossing test does, are likely to show a natural element of underperformance.)
Most importantly, there is no clear and strong effect here.
It’s easy to see that the 2nd and 3rd bars from the right of the chart may be
a pause in the breakdown that started in the high teens. Though this is a daily
chart, the price structure is a complex (two-legged) pullback on the weekly
timeframe, so this daily pause is essentially a tiny pullback within the context
of that big weekly structure.
What do you do with this information? Well, as always, the answer depends
on who you are as a trader and how you make trading decisions. Critically,
the action out of this pattern—whether it leads to clean breakdown
(confirmation of the trade) or not (possible contradiction) can give insight into
the character of the move and to the conviction behind the move.
Many traders will be attracted to the idea of using a little pattern like this to
finesse an entry into the weekly trade with tight stops. Conceptually, this is
possible, but it’s more difficult in practice. One of the biggest mistakes
discretionary traders make is using stops that are too tight, and then their
stops simply become targets in a noisy market. If you’re going to do this,
make sure you understand the tradeoffs between tight stops and probabilities.
One last point: this is an example of a pattern that is fairly “easy” for the
human trader to handle, but that is very difficult to quantify. You could write
code to describe a nested pullback, but that code would take a lot of tweaking
and refining before it worked well. (We’d have to define the initial move,
what setups are valid, how much momentum would set up the nested
pullback, the scope and location of the pullback, and we’d have to strike a
balance in all of this between precision and leaving a wide enough range that
we catch all the patterns we want—not an easy task!) This is a pattern that
puts the human skill of pattern recognition to work in a disciplined
framework, and points us toward a type of trading in which discretionary and
quantitative tools can work together.
I’ve written about this pattern before, and even have a section on my blog
dedicated to it. Why do I write about the same patterns over and over?
Because focusing on a defined subset of patterns can lead to great
understanding of complex and complicated markets. Because these patterns
work. Because they are important.
Chapter 6
You probably get the point that, unless you’ve studied probability and
statistics, your first answers to those questions are probably dramatically
wrong.
Because we don’t naturally understand randomness and variation, few
traders are prepared for the natural degree of randomness in their results, and,
yes, let’s use that dirty word: some traders get lucky and some get very
unlucky. It’s easy to show models where traders have radically different
results trading the same system with no errors (on different time periods).
Even with no mistakes, no emotions, no analytical prowess, no tweaking of
the system, results diverge, dramatically, due to luck.
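That divergence is easy to demonstrate with a quick Monte Carlo sketch; the win rate and payoff below are arbitrary numbers for a modestly positive-expectancy system, not figures from the text:

```python
import random

def simulate_traders(n_traders=200, n_trades=250, win_rate=0.45,
                     win=2.0, loss=1.0, seed=1):
    """Many traders run the identical system flawlessly; only luck differs."""
    rng = random.Random(seed)
    return [sum(win if rng.random() < win_rate else -loss
                for _ in range(n_trades))
            for _ in range(n_traders)]

results = simulate_traders()
# Same edge (+0.35R expectancy per trade), yet outcomes span a huge range
print(min(results), max(results))
```

Every simulated trader has the same positive expectancy and makes zero mistakes; the spread between the best and worst results is pure variance.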
So, that’s great, right? Some people get lucky in the market and some don’t.
If that’s all I have to say, maybe we should just go to Vegas and put it all on
black for the first roll, but hold on: there’s a better way.
Note the words used here: “specify”, “have a rule”, etc. This means that
you have rules, in advance, that say what you will and will not do in the
market.
That last point is the key to consistency. We do not have to remove all
human input from the trading process. In fact, we can’t. For instance, a purely
systematic trader follows a set of rules, but where did those rules come from?
In most cases, from some type of statistical or scientifically-informed
(hopefully) research process, but, even here, things are not absolute. Research
involves decisions, and those decisions are made, at least on some level, by
humans. Why some markets and not others? Why certain parameters? Why
was this particular approach to the data explored in the first place? Why did
we choose to look at certain data sets and exclude others?
Even more important, someone (or something) is probably monitoring our
systematic guy. If market conditions change in some way we don’t
understand, his rigid set of rules may no longer work, and, at some point, he
will be pulled out of the game. Following that event, there’s a good
chance that humans (again, using some intuition and discretion) may look at
his rule set, refine it in the context of recent data, and might send him in to
play again. Even in this most rigid systematic approach, human discretion and
intuition dance in the margins. The key is that discretion can be structured and
disciplined, just as any other input to the trading process.
I will agree that exact adherence to bullet point rules is easier than
incorporating discretion into your trading. It takes a lot of experience to use
discretionary inputs in a way that is not unduly influenced by emotion. (Note:
removing all emotion from trading should not be a goal.) Being consistent and
being disciplined simply means we follow a set of rules with consistency, and
those rules certainly can include an element of discretion.
On Reversals
Take a look at this picture which I took a few years ago, on a Friday
afternoon, on a New York / New Jersey ferry. After a long and stressful work
week (it was 2008), the gentleman in the photo was more than a little
inebriated (i.e., could barely stand up), probably the victim of an early happy
hour. Now, you should also know that these ferries are fast, and the winds on
the river are strong—the wind is often strong enough to blow glasses off your
face. This poor soul had urgent business that was unable to wait for the trip
across the river, so he walked to the front of the ferry, unzipped, and relieved
himself over the bow—directly into what was probably a 35 knot headwind.
Though this happened a while ago, the lesson and the aftermath made a
lasting impression (probably more so on the people who did not see it coming
and did not step out of the spray). Though few of us might commit the
Technicolor version of this error, financial commentators do it all the time, in
other ways.
I have spent some time doing a lot of reading—everything from social
media and “big” media to gurus, pundits, and paid research. It is always
interesting to see the commonalities across the group (a less kind assessment
might be “groupthink”), but one error crops up repeatedly: attempts to catch
or call a trend turn with no justification. This error can be hazardous to your
financial health, so let me share a few thoughts.
Section 1: Introspection
Introspection is not easy. We spend much of our lives lying to ourselves;
some of this is even constructive, but we can easily over- or underemphasize
our weaknesses and strengths. Looking at other people, we readily identify
those who think too much of their abilities, and we can probably also think of
someone who we believe should have far more confidence. Having an
accurate and balanced assessment of our strengths, weaknesses, and potential
is not normal, but that is your goal in doing this work!
First, we can define the scope of the project a little more clearly. We want
to understand our:
There is no right or wrong way to do this, but here are some ideas that work
for other people. Feel free to adapt and expand them for your own practice
and exploration.
Sit down with a blank piece of paper and write stream of consciousness
lists of your strength and weaknesses. Spend time doing this, and come back
to it every day for a week. At the end of the week, edit and categorize the list.
Reflect on the list, and let it grow over another week or two. Make sure that
you bring your focused attention back to this list several times each day, even
if for only a few minutes.
Alternatively, make five lists (personal attributes, values, emotional
characteristics, habits, needs and desires) and work on filling out those lists.
Though this is introspection, you may find value in talking to people who
know you well and getting feedback from others. It can be very helpful to
work with people from different settings (friends, coworkers, family
members), but use their perspectives to spark your own work and self-
reflection.
This exercise is an important part of knowing yourself. Once you have a
clearer picture of your motivations, strengths, and weaknesses, you will be
better equipped to figure out how to shape yourself into the person you want
to be.
Be honest—not brutal, not cruel—be fair and honest with yourself. The
goal of this exercise is to see clearly and accurately.
Section 4: Readings
From The Art and Science of Technical Analysis: Market Structure, Price
Action, and Trading Strategies by Adam Grimes, Wiley, 2012:
346–374 (trader’s mind)
On Psychology
We could go on and on with these ideas, but these will get you started. The
key concept is that we want to shake things up just enough that we can
achieve a new perspective. Simple things work. Simple things are powerful.
Once you’ve taken a serious look at the information you consume, then
you’re in a position to figure out what to do about it. I suggest trying an
information diet: First, cut out virtually everything you read. If you are a news
junkie, this may feel uncomfortable. You may feel blind or cut off, but give it
a chance. Avoid clicking or reading for a week, and see how your emotional
state changes.
Then you can start adding things back, a little at a time, and monitoring
your emotional state while you do so. If you find yourself sucked into hours
of clicking and sharing emotional and negative articles, first of all
congratulate the blog owner on building an engaging blog, but then step away
and rethink. You can absorb information, but you need to be careful about the
emotion that goes with it.
On Discipline
Baseline Drift
When we look at long price histories of some assets, we observe a baseline
drift. For instance, if you were to randomly buy a large basket of stocks and
hold them over a significant time, you would likely make money: over any
extended holding period, stocks tend to go up. It would be easy to assume that
this kind of slight bias might be evidence of nonrandom price action, but it is
not. Mathematically, we could express the probability of a step up or down in
our original random walk equation like this:
Prob(up) = Prob(down) = 0.50
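A short simulation makes the idea of baseline drift concrete. This is a sketch, not from the original text: the step size, number of steps, and the 0.51 drift probability are all hypothetical parameters chosen for illustration.

```python
import random

def random_walk(steps, p_up=0.5, step=1.0, seed=42):
    """Simulate a simple random walk; p_up above 0.50 adds upward drift."""
    rng = random.Random(seed)
    level = 0.0
    for _ in range(steps):
        level += step if rng.random() < p_up else -step
    return level

# A pure random walk (p_up = 0.50) has no expected drift; nudging the
# probability to 0.51 produces a slight upward bias over many steps,
# analogous to the long-run drift observed in stock indexes.
unbiased = random_walk(10_000, p_up=0.50)
drifting = random_walk(10_000, p_up=0.51)
```

Even a tiny asymmetry in step probabilities compounds into a visible baseline drift over thousands of steps, while each individual step remains unpredictable.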
Inside Information
Strong form EMH says that all information, even secret inside information,
is priced, so it is not possible to make a profit trading off that information.
Obviously, in the case of the aforementioned drug stock, a group of people
knew the drug approval would be denied before the news was released to the
public. Those people had a chance to initiate short positions at $40 and could
have taken profits as the news hit the wires. For strong form EMH to be valid
there would have to be some mystical adjustment mechanism that moves
price to the correct level the instant the FDA committee votes to deny the
drug approval, before the information hits the newswires, even before the
company knows the news, and even before the news leaves the room. (We
should be suspicious of market explanations and models that require a deus ex
machina. Buying and selling move prices—only those and nothing else.) It is
also worth considering that regulatory agencies, at least in the United States,
spend an enormous amount of time, energy, and resources prosecuting traders
who trade on inside information. Why would this be so if there were no
money to be made on that information? Obviously, there are some serious
logical problems here, so strong form EMH has few adherents today, even in
academia.
Long-Term Mispricings: Booms, Bubbles, and Busts
Some forms of the EMH could allow for short-term mispricings due to
market microstructure issues, but there is also evidence that markets may stay
mispriced for extended periods of time. In an efficient market, such
mispricings would represent a profit opportunity for rational investors, and
the pool of educated, rational arbitrage money should be able to quickly
overcome any irrationality. This does not seem to be the case. Market history
has many examples of markets that reached irrational bubble valuations and
then deflated with astonishing speed. The financial crisis of 2007, crude oil at
$150 in 2008, Nasdaq stocks in the dot-com bubble, the collapse of Long-
Term Capital Management in 1998, the Asian financial crisis of 1997, and the
Hunt Brothers’ impact on the silver market in 1980 are recent examples, but
the South Sea Bubble of 1720 and the Dutch Tulip Mania of 1634 suggest that
this has been going on for a long time. Figure 10.4 shows a chart that could be
the chart of a high-flying tech stock from the dot-com bubble of 2000, but it
actually is the price history of shares of the South Sea Company from January
1719 to mid-1721. In fact, we even have evidence from ancient Babylon that
there were commodity bubbles in antiquity—it seems likely that, far from
being an aberration, bubbles and busts are natural features of markets. There
is nothing new under the sun. In these bubbles, valuations typically expand to
hundreds or thousands of percentages of fair, justifiable valuations over a
period of months or years. When the bubble pops, as it always does, there is
the inevitable bust as the market quickly recalibrates. It is difficult to
reconcile this behavior with the idea that investors always make rational
decisions. At the very least, it seems like something else is going on here.
Figure 10.4 Stock Price of the South Sea Company
Cost of Information
Semistrong form EMH states that no one can earn above-average profits
based on analysis of any publicly available information. Therefore, there is no
value in doing any fundamental analysis or, indeed, any analysis at all,
because all information is already reflected in the price. Again, we run into a
logical absurdity in the limit. If the theoretical construct were true, then no
one would do any analysis, so how would the information be reflected in
price? In fact, there would be no motivation for anyone to ever do any
analysis of any information, and, eventually, no reason to ever trade. Markets
would collapse and cease to exist (Grossman and Stiglitz 1980). However, in
the real world, market-related information is quite expensive. Institutions,
traders, and analysts pay a lot for information, and they pay even more to get
it quickly. Why would they do that, and continue to do so decade after decade,
if this information were worthless?
Autocorrelation
Autocorrelation in Returns
In a random walk, the probability of the next step being up is usually very
close to 50 percent (plus or minus any drift component), and that probability
does not change based on anything that has happened in the past. In other
words, if the probability of a step up is 50 percent, after three steps up the
probability of the fourth being up is still 50 percent, exactly as it would be for
flips of a fair coin. Under the assumptions of random walks, price has no
memory and each step is completely independent of previous steps.
Autocorrelation is the statistical term that, loosely speaking, deals with trends
in data. If a series of price changes displays high autocorrelation, a step up is
more likely to follow a previous step up. Though this is a simplification that
is not always true, a positively autocorrelated price series tends to trend.
Many academic studies have found no autocorrelation in returns, but the
evidence is mixed, with other studies finding strong autocorrelations. The
question seems to depend heavily on the sample being examined, as different
time frames or time periods of the same market may exhibit radically different
qualities. It is worth mentioning that this is consistent with what I have come
to expect as an experienced trader in a wide range of markets and time
frames. One of the major themes of this book is that all market action exists in
one of two broad states: range expansion or mean reversion, or, to use terms
that might be more familiar, trends and trading ranges. In a trending
environment, over any given time period, the next price change is more likely
to be upward if the preceding price change was also upward, or downward if
the preceding change was downward. In a trading range environment,
negative autocorrelation is more likely—the following price change is more
likely to reverse the previous price change. If these two environments are
aggregated, the end result is that the autocorrelations essentially
counterbalance each other, and the overall sample shows near-zero
autocorrelation.
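The aggregation argument above can be sketched in a few lines of Python. This is an illustrative simulation, not the author’s method: the AR(1) coefficients (+0.4 for a “trending” regime, −0.4 for a “trading range” regime) and sample sizes are hypothetical choices.

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a return series."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((a - mean) * (b - mean) for a, b in zip(xs, xs[1:]))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

def ar1(phi, n, seed):
    """AR(1) returns: r_t = phi * r_(t-1) + noise. A positive phi mimics a
    trending regime; a negative phi mimics a mean-reverting trading range."""
    rng = random.Random(seed)
    r, out = 0.0, []
    for _ in range(n):
        r = phi * r + rng.gauss(0, 1)
        out.append(r)
    return out

trending = ar1(0.4, 5_000, seed=1)   # positive autocorrelation
ranging = ar1(-0.4, 5_000, seed=2)   # negative autocorrelation
mixed = trending + ranging           # the two regimes aggregated
```

Measured separately, the two regimes show strong positive and negative lag-1 autocorrelation; lumped together, they largely cancel, which is exactly why a naive study over the whole sample can report near-zero autocorrelation.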
I would suggest that one of the potential problems with the academic
studies regarding autocorrelation is that they begin with the assumption that
prices move randomly, so perhaps not enough effort has been made to find
patterns that could differentiate, a priori, between these discrete regimes. Note
that some writers supporting technical analysis have made the suggestion that
trending and trading range environments should be analyzed separately. This
is a mental exercise on par with dividing a set of random numbers into those
greater and less than zero, and then being surprised that the former set has a
higher mean than the latter! For the analysis to be useful, it must be possible
to define a set of conditions that will have some predictive value for
autocorrelations over a finite time horizon. This is actually one of the core
tasks of competent discretionary trading—to identify the most likely
emerging volatility environment. One of the major focuses of this book is to
identify conditions and factors that will help a discretionary trader make just
such a distinction.
Volatility Clustering
Real market prices show at least one very serious departure from random
walk conditions. A random walk has no memory of what has happened in the
past, and future steps are completely independent of past steps. However, we
observe something very different in the actual data—large price changes are
much more likely to be followed by more large changes, and small changes
are more likely to follow small changes. For practical purposes, what is
probably happening is that markets respond to new information with large
price movements, and these high-volatility environments tend to last for a
while after the initial shock. This is referred to in the literature as the
persistence of volatility shocks and gives rise to the phenomenon of volatility
clustering. Figure 10.5 shows the absolute value of the standard deviations of
daily changes for several years of daily returns in the S&P 500 Cash index.
Note that the circled areas, which highlight clusters of high-volatility days,
are not dispersed through the data set randomly—they tend to cluster in
specific spots and time periods.
Figure 10.5 Absolute Values of Standard Deviation of Returns, S&P 500
Index, Mid-2008 through 2010 (Values < 2.0 Filtered from This Chart)
What we see here is autocorrelation of volatility. Price changes themselves
may still be random and unpredictable, but we can make some predictions
about the magnitude (absolute value) of the next price change based on recent
changes. Though this type of price action is a severe violation of random walk
models (which, by definition, have no memory of previous steps), do not
assume that it is an opportunity for easy profits. We have seen practical
implications of an autocorrelated volatility environment elsewhere in this
book—for instance, in the tendency for large directional moves to follow
other large price movement—but it is worth mentioning here that there are
academic models that capture this element of market behavior quite well.
Autoregressive conditional heteroskedasticity (ARCH), generalized ARCH
(GARCH), and exponential GARCH (EGARCH) are time series models that
allow us to deal with the issue of varying levels of volatility across different
time periods. A simple random walk model has no memory of the past, but
ARCH-family models are aware of recent volatility conditions. Though not
strictly correct, a good way to think of these models is that they model price
paths that are a combination of a random walk with another component added
in. This other component is a series of error components (also called
residuals) that are themselves randomly generated, but with a process that sets
the volatility of the residuals based on recent history. The assumption is that
information comes to the market in a random fashion with unpredictable
timing, and that these information shocks decay with time. The effect is not
unlike throwing a large stone in a pond and watching the waves slowly decay
in size. If this topic interests you, Campbell, Lo, and MacKinlay (1996) and
Tsay (2005) are standard references.
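A minimal ARCH(1) simulation shows the clustering behavior described above. This is a sketch with hypothetical parameters (omega, alpha, and the seed are arbitrary choices for illustration), not a calibrated market model.

```python
import random

def lag1(xs):
    """Sample lag-1 autocorrelation of a series."""
    m = sum(xs) / len(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

def arch1(n, omega=0.2, alpha=0.5, seed=7):
    """ARCH(1) returns: today's variance depends on yesterday's squared
    shock, so large moves tend to follow large moves."""
    rng = random.Random(seed)
    out, prev = [], 0.0
    for _ in range(n):
        sigma = (omega + alpha * prev ** 2) ** 0.5
        prev = sigma * rng.gauss(0, 1)
        out.append(prev)
    return out

rets = arch1(10_000)
```

The raw returns show near-zero lag-1 autocorrelation (direction has no memory), while the squared returns are clearly positively autocorrelated (volatility does have memory): precisely the departure from a pure random walk that the text describes.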
Behavioral Perspectives
Strictly speaking, it is not possible to disprove either the EMH or the RWH,
but sufficient empirical evidence has accumulated to challenge some of the
core beliefs. Strong form EMH, in particular, exists today more as a vestigial
intellectual curiosity than as a serious theory. In addition, the price
adjustments under other forms of the EMH cannot occur through some
magical, unexplained mechanism; they must be the result of buying and
selling decisions made by traders. There will necessarily be temporary and
intermediate mispricings as prices move from the preevent to the postevent
levels, and there may be some degree of emotion (irrationality) along the way.
New theories have evolved in an effort to better explain the conditions we
observe in the real world, and many of those focus on investors’ behavior.
Reflexivity
Reflexivity is a term coined by George Soros to describe the process where
asset prices have an impact on fundamentals and fair valuations. Most
theories suggest that prices are a reflection of actual value, but Soros (1994)
suggests that the causative link runs both ways. In his own words:
The generally accepted theory is that financial markets tend towards
equilibrium, and on the whole, discount the future correctly. I
operate using a different theory, according to which financial
markets cannot possibly discount the future correctly because they
do not merely discount the future; they help to shape it [emphasis
mine]. In certain circumstances, financial markets can affect the so-
called fundamentals which they are supposed to reflect. When that
happens, markets enter into a state of dynamic disequilibrium and
behave quite differently from what would be considered normal by
the theory of efficient markets. Such boom/bust sequences do not
arise very often, but when they do, they can be very disruptive,
exactly because they affect the fundamentals of the economy.
Essentially, Soros’s argument is that the traditional causative model in
which asset prices correctly reflect fundamentals is flawed because it assumes
a one-way link between fundamentals and prices. There are times when
markets become so irrational and so mispriced that market participants’ biases
and impressions actually change the fundamentals that are supposed to be
driving valuations.
The academic world has taken little note of this theory, but it does seem to
describe many elements that we observe in boom-and-bust cycles. Perhaps the
clearest example of this concept would be to imagine a wildly changing
currency rate. Exchange rates are supposed to reflect fundamentals, but it is
not terribly difficult to imagine an unbalanced situation where the rates
actually become so extreme that they have an effect on the fundamentals of
each country’s economy. Similarly, how hard is it to imagine an extreme price
for a commodity affecting fundamentals of production, transportation, or
marketing for that commodity? Do farmers not make planting decisions
on grain prices? This certainly is not the normal mode in which markets
operate, but these types of extremes occur several times each decade. Even if
we do not wish to wholeheartedly embrace the theory of reflexivity, it is
worthwhile to spend some time considering the concept that prices may
influence fundamentals, that the observer may affect the outcome of the
experiment. Perhaps far more complex relationships exist in financial markets
than classic economic models suggest.
Summary
Much modern academic research suggests that markets are efficient,
meaning that new information is immediately reflected in asset prices and that
price movements are almost completely random. If this were true, then it
would not be possible to make consistent risk-adjusted returns in excess of the
baseline drift in any market. However, many studies and much empirical
evidence exists that challenge the EMH, and new directions in academic
thinking leave open the possibility that some skilled traders may be able to
profit in some markets. Trading is not easy, for markets are very close to
being efficient and price movements have a large random component.
However, with skilled analysis and perfect discipline, traders can limit their
involvement to those rare points where an imbalance in the market presents a
profitable trading opportunity. The final two chapters of this book look at
some tools for that analysis and lay out some specific market patterns that
offer verifiable edges to short-term traders.
Chapter 11
Many years ago I was struggling to adapt to a new
market and a new time frame. I had opened a brokerage account
with a friend of mine who was a former floor trader on the
Chicago Board of Trade (CBOT), and we spent many hours on the phone
discussing philosophy, life, and markets. Doug once said to me, “You know
what your problem is? The way you’re trying to trade … markets don’t move
like that.” I said yes, I could see how that could be a problem, and then I
asked the critical question: “How do markets move?” After a few seconds’
reflection, he replied, “I guess I don’t know, but not like that”—and that was
that. I returned to this question often over the next decade—in many ways it
shaped my entire thought process and approach to trading. Everything became
part of the quest to answer the all-important question: how do markets really
move?
Many old-school traders have a deep-seated distrust of quantitative
techniques, probably feeling that numbers and analysis can never replace
hard-won intuition. There is certainly some truth to that, but one of the major
themes of this book is how traders can use a rigorous statistical framework to
support the growth of real market intuition. Quantitative techniques allow us
to peer deeply into the data and to see relationships and factors that might
otherwise escape our notice. These are powerful techniques that can give us
profound insight into the inner workings of markets. However, in the wrong
hands or with the wrong application, they may do more harm than good.
Statistical techniques can give misleading answers, and sometimes can create
apparent relationships where none exist. I will try to point out some of the
limitations and pitfalls of these tools as we go along, but do not accept
anything at face value.
For our purposes, we can treat the two more or less interchangeably. In
most of our work, the returns we deal with are very small; percentage and log
returns are very close for small values, but become significantly different as
the sizes of the returns increase. For instance, a 1% simple return = 0.995%
log return, but a 50% simple return = 40.5% continuously compounded
return. Academic work tends to favor log returns because they have some
mathematical qualities that make them a bit easier to work with in many
contexts, but, for the sake of clarity and familiarity, I favor simple percentages
in this book. It is also worth mentioning that percentages are not additive. In
other words, a $10 loss followed by a $10 gain is a net zero change, but a 10%
loss followed by a 10% gain is still a net loss. (However, logarithmic returns
are additive, which is one reason why researchers prefer to use them over
simple percentage returns.)
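The numbers above can be verified in a few lines. This sketch is not from the original text; it simply checks the simple-versus-log-return arithmetic just described.

```python
import math

def log_return(simple):
    """Continuously compounded (log) return from a simple return."""
    return math.log(1.0 + simple)

small = log_return(0.01)  # about 0.995%, very close to the 1% simple return
large = log_return(0.50)  # about 40.5%, far from the 50% simple return

# Simple returns are not additive: a 10% loss then a 10% gain nets a loss...
round_trip = (1 - 0.10) * (1 + 0.10) - 1
# ...but log returns sum exactly to the log return of the whole round trip.
log_sum = log_return(-0.10) + log_return(0.10)
```

Note that the two log returns sum to ln(0.99), the log return of the full round trip, which is the additivity property that makes log returns convenient in research.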
Historical Volatility
Historical volatility (which may also be called either statistical or realized
volatility) is a good alternative for most assets, and has the added advantage
that it is a measure that may be useful for options traders. Historical volatility
(Hvol) is an important calculation. For daily returns:

Hvol = Annualization Factor × StdDev(ln(p_t / p_(t−1)))

where p = price, t = this time period, t − 1 = previous time period, and the
standard deviation is calculated over a specific window of time. The
annualization factor is the square root of the number of periods in a year. The
equation above was written for daily data, and there are 252 trading days in a
year, so the correct annualization factor is the square root of 252. For weekly
and monthly data, the annualization factors are the square roots of 52 and 12,
respectively.
For instance, using a 20-period standard deviation will give a 20-bar Hvol.
Conceptually, historical volatility is an annualized one standard deviation
move for the asset based on the current volatility. If an asset is trading with a
25 % historical volatility, we could expect to see prices within +/–25 % of
today’s price one year from now, if returns follow a zero-mean normal
distribution (which they do not).
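The Hvol calculation described above can be sketched directly. This is an illustrative implementation, not the author’s code; the function name and parameters are my own.

```python
import math
import statistics

def historical_vol(prices, periods_per_year=252):
    """Annualized historical volatility: the standard deviation of log
    returns times the square root of the number of periods in a year."""
    log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    return statistics.stdev(log_returns) * math.sqrt(periods_per_year)
```

Pass 252 for daily prices, 52 for weekly, or 12 for monthly, matching the annualization factors in the text. A 20-bar Hvol is just this calculation applied to the last 21 prices (20 returns).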
Probability Distributions
Information on virtually any subject can be collected and quantified in a
numerical format, but one of the major challenges is how to present that
information in a meaningful and easy-to-comprehend format. There is always
an unavoidable trade-off: any summarization will lose details that may be
important, but it becomes easier to gain a sense of the data set as a whole. The
challenge is to strike a balance between preserving an appropriate level of
detail while creating a useful summary. Imagine, for instance, collecting the
ages of every person in a large city. If we simply printed the list out and
loaded it onto a tractor trailer, it would be difficult to say anything much more
meaningful than “That’s a lot of numbers you have there.” It is the job of
descriptive statistics to say things about groups of numbers that give us some
more insight. To do this successfully, we must organize and strip the data set
down to its important elements.
One very useful tool is the histogram chart. To create a histogram, we take
the raw data and sort it into categories (bins) that are evenly spaced
throughout the data set. The more bins we use, the finer the resolution, but the
choice of how many bins to use usually depends on what we are trying to
illustrate. Figure 11.1 shows histograms for the daily % changes of LULU, a
volatile momentum stock, and SPY, the S&P 500 index. As traders, one of the
key things we are interested in is the number of events near the edges of the
distribution, in the tails, because they represent both exceptional opportunities
and risks. The histogram charts show that the distribution for LULU has a
much wider spread, with many more days showing large upward and
downward price changes than SPY. To a trader, this suggests that LULU
might be much more volatile, a much crazier stock to trade.
Figure 11.1 Return Distributions for LULU and SPY, June 1, 2009, to
December 31, 2010
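The binning procedure described above is simple enough to sketch by hand. This is an illustration with hypothetical data, not the method used to build Figure 11.1.

```python
def histogram(data, n_bins):
    """Sort raw observations into evenly spaced bins spanning the data."""
    lo, hi = min(data), max(data)
    if hi == lo:                       # degenerate case: all values equal
        return [len(data)] + [0] * (n_bins - 1)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in data:
        i = min(int((x - lo) / width), n_bins - 1)  # clamp the maximum value
        counts[i] += 1
    return counts
```

More bins give finer resolution; fewer bins give a smoother summary. As the text notes, the right choice depends on what you are trying to illustrate, and for traders the interesting information is often in the outermost bins, the tails.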
Figure 11.5 Running Mean After 2,500 Apples Have Been Picked Up
(Y-Axis Truncated to Show Only Values Near 4.5)
With apples, the problem seems trivial, but in application to market data
there are some thorny issues to consider. One critical question that needs to be
considered first is so simple it is often overlooked: what is the population and
what is the sample? When we have a long data history on an asset (consider
the Dow Jones Industrial Average, which began its history in 1896, or some
commodities for which we have spotty price data going back to the 1400s),
we might assume that that full history represents the population, but I think
this is a mistake. It is probably more correct to assume that the population is
the set of all possible returns, both observed and as yet unobserved, for that
specific market. The population is everything that has happened, everything
that will happen, and also everything that could happen—a daunting concept.
All market history—in fact, all market history that will ever be in the future—
is only a sample of that much larger population. The question, for risk
managers and traders alike, is: what does that unobservable population look
like?
In the simple apple problem, we assumed the weights of apples would
follow the normal bell curve distribution, but the real world is not always so
polite. There are other possible distributions, and some of them contain nasty
surprises. For instance, there are families of distributions that have such
strange characteristics that the distribution actually has no mean value.
Though this might seem counterintuitive and you might ask the question
“How can there be no average?” consider the admittedly silly case earlier that
included the 3,000,000-year-old mummy. How useful was the mean in
describing that data set? Extend that concept: what would happen if the set
contained a large number of ages that could be infinitely large or small? The
mean would move constantly in response to these very large and small
values, and would be an essentially useless concept.
The Cauchy family of distributions is a set of probability distributions that
have such extreme outliers that the mean for the distribution is undefined, and
the variance is infinite. If this is the way the financial world works, if these
types of distributions really describe the population of all possible price
changes, then, as one of my colleagues who is a risk manager so eloquently
put it, “we’re all screwed in the long run.” If the apples were actually Cauchy-
distributed (obviously not a possibility in the physical world of apples, but
play along for a minute), then the running mean of a sample of 100 apples
might look like Figure 11.6.
Figure 11.7 Running Mean for 10,000 Random Numbers Drawn from a
Cauchy Distribution
Ouch—maybe we should have stopped at 100. As the sample gets larger,
we pick up more very large events from the tails of the distribution, and it
starts to become obvious that we have no idea what the actual, underlying
average might be. (Remember, there actually is no mean for this distribution.)
Here, at a sample size of 10,000, it looks like the average will simply never
settle down—it is always in danger of being influenced by another very large
outlier at any point in the future. As a final word on this subject, Cauchy
distributions have undefined means, but the median is defined. In this case,
the median of the distribution was 4.5—Figure 11.8 shows what would have
happened had we tried to find the median instead of the mean. Now maybe
the reason we look at both means and medians in market data is a little
clearer.
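The mean-versus-median behavior for Cauchy data is easy to reproduce. This sketch uses the standard inverse-CDF trick to draw Cauchy values; the location of 4.5 matches the apple example, while the seed and sample size are arbitrary.

```python
import math
import random

def cauchy_sample(n, location=4.5, seed=11):
    """Draw Cauchy-distributed values via the inverse-CDF method."""
    rng = random.Random(seed)
    return [location + math.tan(math.pi * (rng.random() - 0.5))
            for _ in range(n)]

xs = cauchy_sample(10_000)
sample_mean = sum(xs) / len(xs)           # unstable: the true mean is undefined
sample_median = sorted(xs)[len(xs) // 2]  # settles near the location, 4.5
```

Rerun this with different seeds and the sample mean jumps around wildly, at the mercy of the next extreme outlier, while the sample median sits reliably near 4.5, which is exactly the contrast between Figures 11.7 and 11.8.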
Regression
With this background, we now have the knowledge needed to understand
regression. Here is an example of a question that might be explored through
regression: Barrick Gold Corporation (NYSE: ABX) is a company that
explores for, mines, produces, and sells gold. A trader might be interested in
knowing if, and how much, the price of physical gold influences the price of
this stock. Upon further reflection, the trader might also be interested in
knowing what, if any, influence the U.S. Dollar Index and the overall stock
market (using the S&P 500 index again as a proxy for the entire market) have
on ABX. We collect weekly prices from January 2, 2009, to December 31,
2010, and, just like in the earlier example, create a return series for each asset.
It is always a good idea to start any analysis by examining summary statistics
for each series. (See Table 11.4.)
At a glance, we can see that ABX, the S&P 500 (SPX), and Gold all have
nearly the same mean return. ABX is considerably more volatile, having at
least one instance where it lost 20% of its value in a single week. Any data
series with this much variation, measured by a comparison of the standard
deviation to the mean return, has a lot of noise. It is important to notice this,
because this noise may hinder the usefulness of any analysis.
A good next step would be to create scatterplots of each of these inputs
against ABX, or perhaps a matrix of all possible scatterplots as in Figure
11.11. The question to ask is which, if any, of these relationships looks like it
might be hiding a straight line inside it; which plots suggest a linear
relationship? There are several potentially interesting relationships in this
table: the Gold/ABX box actually appears to be a very clean fit to a straight
line, but the ABX/SPX box also suggests some slight hint of an upward-
sloping line. Though it is difficult to say with certainty, the USD boxes seem
to suggest slightly downward-sloping lines, while the SPX/Gold box appears
to be a cloud with no clear relationship. Based on this initial analysis, it seems
likely that we will find the strongest relationships between ABX and Gold
and between ABX and the S&P. We also should check the ABX and U.S.
Dollar relationship, though there does not seem to be as clear an influence
there.
Figure 11.11 Scatterplot Matrix (Weekly Returns, 2009–2010)
Regression actually works by taking a scatterplot and drawing a best-fit line
through it. You do not need to worry about the details of the mathematical
process; no one does this by hand, because it could take weeks or months to
do a single large regression that a computer could do in a fraction of a second.
Conceptually, think of it like this: a line is drawn on the graph through the
middle of the cloud of points, and then the distance from each point to the line
is measured. (Remember the ε’s that we generated in Figure 11.11? This is the
reverse of that process: we draw a line through preexisting points and then
measure the ε’s (often called the errors).) These measurements are squared, by
essentially the same logic that leads us to square the errors in the standard
deviation formula, and then the sum of all the squared errors is calculated.
Another line is drawn on the chart, and the measuring and squaring processes
are repeated. (This is not precisely correct. Some types of regression are done
by a trial-and-error process, but the particular type described here has a
closed-form solution that does not require an iterative process.) The line that
minimizes the sum of the squared errors is kept as the best fit, which is why
this method is also called a least-squares model. Figure 11.12 shows this best-
fit line on a scatterplot of ABX versus Gold.
Figure 11.12 Best-Fit Line on Scatterplot of ABX (Y-Axis) and Gold
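The idea that the best-fit line minimizes the sum of squared errors can be checked directly. The sketch below uses synthetic weekly returns as stand-ins for the real ABX and gold data; the slope of 1.5 and the noise levels are made up for illustration.

```python
import numpy as np

# Synthetic stand-ins for two years of weekly returns with a known
# linear relationship plus noise.
rng = np.random.default_rng(1)
gold = rng.normal(0.0, 0.02, 104)
abx = 1.5 * gold + rng.normal(0.0, 0.03, 104)

slope, intercept = np.polyfit(gold, abx, 1)  # closed-form least-squares fit

def sse(m, b):
    # Sum of squared vertical distances (the errors) from each point to the line
    return float(np.sum((abx - (m * gold + b)) ** 2))

# Any other line produces a larger sum of squared errors than the fitted one.
print(sse(slope, intercept) < sse(slope + 0.2, intercept))   # True
print(sse(slope, intercept) < sse(slope, intercept + 0.01))  # True
```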
We now look at a simplified regression output, focusing on the three most
important elements. The first is the slope of the regression line (m), which
explains the strength and direction of the influence. If this number is greater
than zero, then the dependent variable increases with increasing values of the
independent variable. If it is negative, the reverse is true. Second, the
regression also reports a p-value for this slope, which is important. We should
approach any analysis expecting to find no relationship; in the case of a best-
fit line, a line that shows no relationship would be flat because the dependent
variable on the y-axis would neither increase nor decrease as we move along
the values of the independent variable on the x-axis. Though we have a slope
for the regression line, there is usually also a lot of random variation around
it, and the apparent slope could simply be due to random chance. The p-value
quantifies that chance, essentially saying what the probability of seeing this
slope would be if there were actually no relationship between the two
variables.
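Where the slope's p-value comes from can be sketched with plain NumPy on synthetic data; a statistics package would normally report these numbers for you, and the normal approximation used here for the p-value is reasonable at this sample size.

```python
import math
import numpy as np

# Synthetic independent and dependent variables with a known relationship.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 0.02, 104)
y = 0.8 * x + rng.normal(0.0, 0.03, 104)

n = len(x)
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)

# Standard error of the slope, a t-statistic, and a two-sided p-value:
# the probability of seeing a slope this steep if the true line were flat.
se = math.sqrt(resid @ resid / (n - 2) / np.sum((x - x.mean()) ** 2))
t_stat = slope / se
p_value = math.erfc(abs(t_stat) / math.sqrt(2))
print(f"slope={slope:.3f}  t={t_stat:.2f}  p={p_value:.2g}")
```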
The third important measure is R2 (or R-squared), which is a measure of
how much of the variation in the dependent variable is explained by the
independent variable. Another way to think about R2 is that it measures how
well the line fits the scatterplot of points, or how well the regression analysis
fits the data. In financial data, it is common to see R2 values well below 0.20
(20%), but even a model that explains only a small part of the variation could
elucidate an important relationship. A simple linear regression assumes that
the independent variable is the only factor, other than random noise, that
influences the values of the dependent variable—this is an unrealistic
lets us see if the tendencies hold for different markets, different timeframes,
and whether they have been stable over time. Proper selection of the test
universe is the first step in this process.
From a quantitative perspective, one of the main questions about markets is
whether asset returns are stationary. This is a slight oversimplification, but,
mathematically, a stationary process is a stochastic (random) process whose
probability distribution does not change over time. In market data, if returns
were stationary, we would not see a shift in such things as measures of central
tendency or dispersion when the same market is examined over different time
periods. There is a significant debate in academia over whether returns are
stationary, but, from a trader’s perspective, they do not seem to be. For
instance, consider the stock of a company that in its early days was a minor
player but later rose to be the industry leader, or a commodity that might at
one time have been primarily produced and consumed domestically, but for
which a large global marketplace developed as domestic supplies were
depleted. As these large, structural changes occur we know that the assets
will, in fact, trade very differently. (Note that there are also processes that are
cyclostationary, meaning that they appear to be stationary except for having a
fairly predictable cyclic or seasonal component. If something is predictable, it
can be backed out of the data, and the cyclostationary data can be transformed
to a stationary set. Examples of real market data that have a strong seasonal
component might be retail sales and volatility of natural gas prices.)
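The simplest check for non-stationarity is to split a series into subperiods and compare summary statistics. The sketch below uses synthetic returns with a deliberate volatility-regime change built in halfway through.

```python
import numpy as np

# Two regimes with the same mean return but very different volatility.
rng = np.random.default_rng(3)
returns = np.concatenate([
    rng.normal(0.0005, 0.01, 1000),   # quiet regime
    rng.normal(0.0005, 0.03, 1000),   # volatile regime
])

first, second = returns[:1000], returns[1000:]
print(f"std, first half:  {first.std():.4f}")
print(f"std, second half: {second.std():.4f}")
# A stationary process would show roughly matching statistics in both halves;
# a gap this large warns that results from one period may not generalize.
```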
Non-stationarity of asset returns is a challenge to the enduring validity of
any technical methodology; any trading edge should be examined over a long
enough time period to assess its stability. Our results will vary greatly
depending on the time period examined and whether it was a rip-roaring bull
market, a flat period with no volatility, or a bear market. Traders doing
analysis must consider this factor. Too many tests are presented to the public
that were run on the last year or two years of data. These tests may be
valuable if the future looks like these recent years, but that is a tenuous
assumption on which to build a trading program.
Assets and asset classes are also correlated to varying degrees. In times of
financial stress, it is not uncommon to see everything trading in the same
direction, and to see, for instance, coffee, cocoa, crude oil, stocks, and even
gold futures being crushed on the same day. One of the assumptions of many
standard statistical methods is that events are independent, but if we examine
a set of 1,000 stocks, we may find that more than 800 of them have large
moves in the same direction on the same day. Of course, this violates any
reasonable definition of independence, so we need to be aware that large tests
on many different assets may be considerably less powerful (in terms of the
information they give us) than a comparable test in another field. In addition
to challenging assumptions of independence, tight correlations can effectively
reduce the sample size for many tests. For instance, the equities sample for these
tests includes over 1,380,000 trading days, but, because of correlations, these
are not 1,380,000 independent events.
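One common back-of-the-envelope adjustment (my illustration, not the book's method) treats N cross-correlated observations as roughly equivalent to N / (1 + (N − 1) × ρ̄) independent ones, where ρ̄ is the average pairwise correlation.

```python
# Effective number of independent observations among N correlated ones.
def effective_n(n: int, avg_corr: float) -> float:
    return n / (1 + (n - 1) * avg_corr)

print(f"{effective_n(1000, 0.0):.0f}")  # uncorrelated: all 1,000 count
print(f"{effective_n(1000, 0.3):.1f}")  # correlated: only a handful count
```

With an average pairwise correlation of 0.3, a cross-section of 1,000 stocks on a given day carries roughly as much independent information as three or four truly independent observations.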
For the tests in this book, I have attempted to address some of these issues
in three ways: by using a diverse sample of assets and asset classes, by
focusing on a reasonably large historical period that includes several volatility
regimes, and through a methodology that adjusts for the baseline drift
inherent in each data set. In terms of asset class selection, I have taken a
basket of 600 individual equities, randomly sampled from three large market
capitalization tranches, 16 different futures markets, and 6 major world
currencies trading against the U.S. dollar. All tests were also run on a set of
stochastic (random) price models for calibration: a random walk i.i.d. ~
normal (the notation “i.i.d. ~” means “independent and identically distributed
according to the … distribution”), a random walk i.i.d. ~ actual returns from
the Dow Jones Industrial Average of 1980 to 2010, and a GARCH model.
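Two of these calibration models can be sketched in a few lines. The parameter values below are illustrative, not calibrated to the Dow data mentioned in the text.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2500  # roughly ten years of daily observations

# Random walk, i.i.d. ~ normal.
iid_normal = rng.normal(0.0003, 0.01, n)

# GARCH(1,1): today's variance depends on yesterday's shock and yesterday's
# variance, producing the volatility clustering seen in real markets.
omega, alpha, beta = 1e-6, 0.08, 0.90
garch = np.empty(n)
var = omega / (1 - alpha - beta)  # start at the long-run variance
for t in range(n):
    garch[t] = np.sqrt(var) * rng.standard_normal()
    var = omega + alpha * garch[t] ** 2 + beta * var
```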
All tests in this chapter are run on the time period from 1/1/2001 to
12/31/2010. This period was chosen partially out of convenience, and
partially because of the assumption that the most recent data are likely to be
more relevant going forward. (There could be less value in examining results
from, say, 1940 to 1950.) In addition, this period includes, for equities, a
period of protracted low volatility in the first part of the decade, the dramatic
and volatile bear market associated with the 2007 to 2008 financial crisis, and
the sharp recovery bull market of 2009 to 2010. One could, quite correctly,
make the argument that this period does not include a large-scale secular bull
market, which is likely to be the focus of many equities traders, but it is
possible to replicate these tests on other time periods to check for consistency.
It is also worth considering that these 10 years include some dramatic
transitions for nearly all of these assets, many of which underwent a change
from open-outcry to electronic markets. For others, such as U.S. equities, the
electronic marketplaces matured and evolved as they underwent significant
regulatory and structural changes.
Equities
For the equities universe, the entire set of domestic equities (N = 7,895)
was ranked by market capitalization as of 1/1/2011, and break points were set
at the 500th, 1,500th, and 2,500th members to approximate the divisions into
large-cap, mid-cap, and small-cap universes. Note that though there are nearly
8,000 tickers listed, only a few thousand of them trade actively; many of the
very small market capitalization stocks are extremely illiquid, sometimes
going days or weeks without so much as a single trade. Exchange-traded fund
(ETF) products and stocks with very short trading histories were excluded, and
then 200 stocks from each market cap group were randomly chosen as the sample for
that market cap. Table 12.1 presents industry statistics by each group.
Industry               Large-Cap  Mid-Cap  Small-Cap
Basic Materials                7       15         15
Capital Goods                 13       18          7
Consumer Cyclical              6        8          8
Consumer Noncyclical          15        9          7
Energy                        15       19          8
Financial                     33       31         44
Health Care                   21        9         14
Services                      50       41         46
Technology                    17       21         28
Transportation                 5        6          7
Utilities                     17       23         16
For our tests in this book, this is not a fatal flaw, but it is something that
needs to be considered carefully for many other kinds of tests. If we were
really interested in different trading characteristics by market caps, one way to
compensate for survivorship bias would be to create a dynamic universe. For
instance, it could be resampled, say, every six months and the membership
based on current market cap rankings at each resampling point. On 1/1/2006
you could segregate the stock universe into market caps based on 1/1/2006
rankings, take your samples, and do your analyses over the next six months.
You would then need a list of market caps as they stood on 7/1/2006, take
your sample, and repeat. This is a laborious process, but it may be the only
way to correct for this bias in some kinds of studies. Be clear that these
market cap tranches were done on a look-back basis for the purpose of the
studies in this book; the goal of these tests was to understand the behavior of
equities as a whole rather than focusing too much attention on market cap
division, so this was not a significant limitation for these tests.
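The semiannual resampling scheme just described can be sketched as follows. `rankings` is a hypothetical stand-in structure, assumed to map each resampling date to a list of tickers sorted by market cap as of that date; it is not a real data feed.

```python
import random

# Rebuild the market-cap tranches at each resampling date and draw a fresh
# random sample of 200 names per tranche.
def sample_universe(rankings, per_tranche=200, seed=0):
    rng = random.Random(seed)
    universe = {}
    for date in sorted(rankings):
        tickers = rankings[date]
        tranches = {
            "large-cap": tickers[:500],
            "mid-cap": tickers[500:1500],
            "small-cap": tickers[1500:2500],
        }
        universe[date] = {
            name: rng.sample(group, min(per_tranche, len(group)))
            for name, group in tranches.items()
        }
    return universe
```

Each date's sample then governs the analysis for the following six months, after which the tranches are rebuilt from the new rankings.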
Another factor to consider is that the test universe was created from stocks
trading as of 12/31/2010, so companies like Enron, Bear Stearns, and
Lehman Brothers are not included, nor are stocks that did exceptionally well
and were taken over, sometimes at significant premiums to their market
values. This introduces an element of survivorship bias that could distort or
alter our results. Particularly for traders developing rule-based trading
systems, the impact of these events can be very significant, even to the point
of making unprofitable systems appear to be very good in backtests. For
rough tests of price tendencies like the ones in this chapter, this factor may be
less significant, but the potential impact of survivorship bias cannot be
quickly and casually dismissed.
There are several ways in which this research could be extended. For one
thing, a number of companies in the sample are American depositary receipts
(ADRs), which are foreign companies that also list on U.S. exchanges. For
most of these companies, U.S. trading hours are not the primary trading
session and there may be significant differences between the way these
companies trade and domestically domiciled companies. It also may make
sense for interested traders to drill down a bit deeper into segregating stocks
by sectors, because biotech companies, for instance, trade very differently
from utilities or technology companies. There may be some important
quantitative differences that these tests have not captured, because they did
not segregate the equity groups by these or other factors.
Last, many traders access markets through a variety of indexes and ETF
products. We should not assume that the behavior of a large number of stocks
in aggregate will mirror the behavior of an index; traders who choose to focus
on index products should probably conduct similar research on the indexes
themselves. However, research done exclusively on indexes is subject to
several deficiencies: limited data history, changing index composition, and
potential microstructure issues such as nonsynchronous trading are a few of
the most common. Even traders who choose to focus on these products will
find their work significantly enhanced by a deeper understanding of the
behavior of the individual components that make up the index.
Futures
Chapter 14
Moving Averages
Statistics are used much like a drunk uses a lamppost; for support, not
illumination.
– Vin Scully
Traders use moving averages for many purposes; we have seen a few ideas
for specific applications earlier in this book, but it is worthwhile to now take a
deep look at price action around moving averages. It is always surprising to
me how many of the same concepts are repeated by traders, in the literature,
and in the media without ever being tested: moving averages are support and
resistance, moving averages define the trend, moving average crosses predict
future price movements, and so forth. In devising quantitative tests of moving
averages, we immediately run into a problem: many traders use them in ways
that are not amenable to testing. They may be part of a more complex
methodology incorporating many discretionary factors, but there may also be
some sloppy thinking involved in many cases. For instance, traders will argue
that moving averages sometimes work as support and resistance, or that a
moving average crossover is sometimes a good definition of trend. It is
always possible to find an example of something working, and traders often
tend to focus undue attention on a few outstanding examples. However, the
real question is whether it is repeatable and reliable over a large sample size,
and whether the tool adds predictive value to the trading process.
If you flip a coin whenever you have a trading decision you will find that
the coin sometimes does seem to help you. It is also likely that, at the end of
many trades, you would be able to remember a few dramatic examples where
the coin made you a lot of money; perhaps it got you out of a losing trade
early on, or kept you in a winner for the big trend. In fact, you might conclude
that the coin doesn’t work all the time, but nothing in trading works all the
time. Perhaps you realize that it does help in certain cases so you are going to
keep it and just use it sometimes. In fact, you might say, as many traders do of
their indicators, that you “have to know when to look at it.” Obviously, I am
making a bit of a joke here, but the point is important. You have to be sure
that the tools you are using are better than a coin flip, that they are better than
random chance.
The Tests
For the tests that follow, I make no attempt to test complete trading systems
or complex methods. Instead, I am simply looking to see whether there is
discernible, nonrandom price movement around moving averages. Does it
make sense to make buying or selling decisions based on the relationship of
price to a moving average? Do moving averages provide support? Are some
moving averages more useful (special?) than others? Can moving averages
define the trend of a market? Consider these to be fundamental tests of the
building blocks and basic principles. There could well be modifying factors,
not addressed in these simple tests, that do show an edge when combined with
moving averages, or there could be ways to trade them profitably that are
outside the scope of these tests. For now, let’s start at the beginning and
consider what happens when price touches a simple moving average.
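As a warm-up, the shape of such a test can be sketched in a few lines: flag days where price trades through its 20-period simple moving average, then compare the mean next-day return after a touch to the unconditional (baseline) mean. Prices here are a synthetic random walk, so the excess return should hover near zero; the "touch" definition is a crude stand-in for the book's signal rules.

```python
import numpy as np

# Synthetic random-walk price series.
rng = np.random.default_rng(5)
price = 100 * np.cumprod(1 + rng.normal(0.0003, 0.01, 3000))

window = 20
sma = np.convolve(price, np.ones(window) / window, mode="valid")
p = price[window - 1:]                      # prices aligned with the SMA

# Crude "touch": the SMA falls between two consecutive closes.
lo = np.minimum(p[:-1], p[1:])
hi = np.maximum(p[:-1], p[1:])
touch = (sma[1:] >= lo) & (sma[1:] <= hi)

fwd = np.log(p[1:] / p[:-1])                # fwd[t]: return from day t to t+1
signal_fwd = fwd[1:][touch[:-1]]            # next-day returns after touch days
excess_bp = (signal_fwd.mean() - fwd.mean()) * 10_000
print(f"touches: {touch.sum()}, next-day excess return: {excess_bp:.1f} bp")
```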
The rows in the table are for days following the signal event.
Days one to five are presented, and the table then switches to a
weekly format. This allows assessment of short-term tendencies
while still maintaining a longer-term perspective.
Excess mean returns (labeled µsig – µb) may be the most useful
summary statistic for these tests. To generate this number, the
mean return for the baseline of the sample universe (e.g., the
drift component of stock returns) is subtracted from the mean of
all signal events.
Diff median is the excess median, or the baseline median return
subtracted from the signal median return.
Because these may be very small numbers, they are displayed in
basis points (one basis point equals one-hundredth of a percent:
0.0001 = 0.01% = 1 bp) rather than percentages.
Asterisks indicate statistical significance at the 0.05 and 0.01
levels.
The percentage of days that close higher than the entry price is
does not matter since we do not intend to trade these ideas systematically.
They are useful only as a hint or a guidepost to suggest where trading
opportunities might lie, and to reveal some otherwise hidden elements of
market structure. You could also challenge these results on the basis that they
might not be representative of the whole market. I would disagree because a
large sample of stocks from all market caps and industries is represented,
along with most major futures and currencies, but it is a valid criticism to
consider. Last, this is not a standard and accepted test for random walk, even
though the results strongly suggest a nonrandom process is at work in this
sample.
Academic Research
There is a significant body of academic research investigating the
profitability of trading rules based on moving average crossovers. Brock,
Lakonishok, and LeBaron’s landmark paper, “Simple Technical Trading Rules
and the Stochastic Properties of Stock Returns” (1992) is noteworthy because
it was one of the first papers to show evidence that technical trading rules
could produce statistically significant profits when applied to stock market
averages. They used a set of rules based on moving average crossovers and
channel breakouts on the Dow Jones Industrial Average from its first recorded
day in 1897 to the last trading day of 1986. The results, as they say in the
paper, are striking: they find statistically significant profits on both the long
and short side for every moving average combination they examined.
For traders familiar with moving average studies in modern trading
applications, the choices of moving averages that Brock et al. chose may be
surprising: 1, 2, or 5 periods for the short average, and 50, 150, or 200
periods for the longer. A 1-period moving average is not actually a moving
average at all—it is simply the price of the asset, so many of their moving
average crossovers were actually tests of price crossing a moving average.
Most authors and system developers tend to use averages much closer in
length, like 10/50 or 50/200. The original 1992 paper is fairly accessible to the
lay reader and should be required reading for every trader who would trade
based on technical signals. However, it might also be instructive to investigate
their results in terms that traders will more readily understand.
The most profitable signals in their study were not the stop and reverse
versions, but rules that entered on a moving average cross and exited 10 days
later. In addition, some of their tests added a band around the moving
averages and did not take signals within that band, in an attempt to reduce
noise and whipsaw signals. For the sake of simplicity, let’s look at their
1-/50-period moving average crossovers with no filter channel. Figure 14.11 shows
the equity curve calculated for every day from 1920 to 1986, assuming the
trader invested $100,000 on each trading signal, or sold short an equivalent
dollar amount for the short signals.
Figure 14.11 Daily Equity Curve for 1/50 Moving Average Crossover on DJIA, 1920–1986 ($100,000 Invested on Each Signal)
These results do appear to be remarkable at first glance: a steadily
ascending equity curve that weathered the 1929 crash, both World Wars, and
several recessions with no significant drawdowns. In addition, this system
was stable for most of a century, while the economy, the sociopolitical
landscape, and the markets themselves underwent a number of dramatic
transformations. This stability is exactly what systematic traders are looking
for. Table 14.8 shows a few summary stats for this system, assuming no
transaction costs or financing expenses.
Table 14.8 Summary Stats for 1/50 Moving Average Crossover on DJIA, 1920–1986 (Total Net Profit Assumes $100,000 Invested on Each Signal)
N = 911
% Profitable 28.0%
Mean Trade 84 bp
These numbers are not bad, though traders not used to seeing long-term
trend-following systems might wish for a higher win percentage. It is not
uncommon for these types of systems to have win rates well under 40 percent,
and, as long as the winners are substantially larger than the losers, such a
system can be net profitable. In this case, we have to wonder if the average
trade size is large enough to be profitable after accounting for trading
frictions. It is important to remember that this is a backtest on the cash Dow
Jones Industrial Average (DJIA), which is the average of a basket of 30
stocks. Today, investors can access this market through a variety of derivative
products, but, for most of the history of this backtest, it would have been
necessary to have purchased each of the stocks in the average individually
and to have rebalanced the basket as needed to match changes in the average.
In addition, financing costs, the impact of dividends, and tax factors also need
to be considered with a system like this. Considering financing costs alone, if
an investor were able to earn 4% risk-free for 66 years (the actual rate would
vary, but this is probably in the ballpark. See Dimson, Marsh, and Staunton
2002), the initial $100,000 investment would have grown to over $1.3 million
over the same time period. Also, the actual price of the DJIA itself increased
2,403% over this period—a simple buy-and-hold strategy would have
returned over $2.3 million, albeit with impressive volatility and drawdowns
along the way.
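The compounding figures above are easy to verify:

```python
# $100,000 compounded at 4% per year for 66 years, versus the 2,403%
# buy-and-hold price return quoted in the text.
start = 100_000
risk_free = start * 1.04 ** 66
buy_hold = start * 24.03          # a 2,403% gain on the initial stake
print(f"risk-free: ${risk_free:,.0f}")   # roughly $1.33 million
print(f"buy/hold:  ${buy_hold:,.0f}")    # roughly $2.40 million
```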
Stronger condemnation, though, comes from an out-of-sample test. Since
the published results end in 1986, this is an ideal situation to walk forward
from 1/1/1987 to 12/31/2010, which effectively shows the results investors
might have achieved had they started trading this strategy after the end of the
test period. (This is not a true out-of-sample test, as it is likely that Brock et
al. examined some of the later history in their tests, even if the results were
not published.) Figure 14.12 and Table 14.9 show the results for this time
period, which are disappointing to say the least. It is also worth noting that a
simple buy-and-hold strategy would have returned over $550,000 over the
same time period.
Figure 14.12 Daily Equity Curve for 1/50 Moving Average Crossover on DJIA, 1987–2010, Quasi Out of Sample ($100,000 Invested on Each Signal)
Table 14.9 Summary Stats for 1/50 Moving Average Crossover on DJIA,
1987–2010, Quasi Out of Sample (Total Net Profit Assumes $100,000
Invested on Each Signal)
N = 449
% Profitable 20.9%
What is happening here? The first question we should ask is: is it possible
that this is simply normal variation for this system? Just by looking at the
equity curves, this seems unlikely because we are not able to identify any
other 15-year period when the curve is flat and volatile, but this is not an
actual test. Comparing the in- and out-of-sample returns, the Kolmogorov-
Smirnov test, which is a nonparametric test for whether or not two samples
were likely to have come from the same distribution, gives a p-value of 0.007.
Based on this result, we can say that we find sufficient evidence to reject the
idea that the out-of-sample test is drawn from a similar distribution as the in-
sample test; this suggests that something has changed. It might be unwise to
trade this system in the future after such a significant shift in the return
distribution. This is not to say that anything was wrong with the research or
the system design; market history is littered with specific trading ideas and
systems that eventually stopped working. This is especially common after
working ideas have been published, either in academic research or in
literature written for practitioners.
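The two-sample Kolmogorov-Smirnov idea can be sketched with plain NumPy; in practice you would use a statistics library. The statistic is the largest gap between the two samples' empirical CDFs, and the p-value below uses the standard asymptotic approximation. The data are synthetic stand-ins, not the actual in- and out-of-sample trade returns.

```python
import math
import numpy as np

def ks_2samp(a, b):
    """KS statistic and asymptotic two-sided p-value for two samples."""
    a, b = np.sort(a), np.sort(b)
    both = np.concatenate([a, b])
    # Empirical CDFs of each sample evaluated at every observed point.
    cdf_a = np.searchsorted(a, both, side="right") / len(a)
    cdf_b = np.searchsorted(b, both, side="right") / len(b)
    d = float(np.max(np.abs(cdf_a - cdf_b)))
    n_e = len(a) * len(b) / (len(a) + len(b))
    lam = (math.sqrt(n_e) + 0.12 + 0.11 / math.sqrt(n_e)) * d
    p = 2 * sum((-1) ** (k - 1) * math.exp(-2 * k * k * lam * lam)
                for k in range(1, 101))
    return d, min(max(p, 0.0), 1.0)

rng = np.random.default_rng(6)
in_sample = rng.normal(0.005, 0.02, 900)     # "worked" period
out_sample = rng.normal(-0.005, 0.02, 450)   # shifted distribution
d, p = ks_2samp(in_sample, out_sample)
print(d, p)  # a small p-value says the samples look like different distributions
```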
Optimization
Though this is not a book on quantitative system design, a brief discussion
of the value and perils of optimization is in order. (System optimization is
also sometimes wrongly referred to as “curve fitting”.) In a nutshell,
optimization is simply taking trading systems and changing inputs until you
find a set of conditions that would have performed well on historical data. In
the case of the moving average crossover system just discussed, using a 65-
period short moving average and a 170-period long moving average would
have made the system profitable over the 1986 to 2010 window, producing a
profit of $169,209 in just 35 trades, with an impressive 40% win ratio.
These numbers were found through an exhaustive search of many possible
combinations of moving averages, but had we also searched for the inverse of
this system (allowing shorting when the slow moving average crossed over
the long and vice versa), we could have found combinations that made well
over a million dollars during the same time period. The danger of
overoptimization is that the best historical values will rarely be the best values
in a walk-forward test or in actual trading. In the worst case, it is possible to
create an optimized system that looks incredible on historical data, but will be
completely worthless in actual trading.
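The trap is easy to demonstrate: run an exhaustive moving average crossover optimization on a simulated random walk. Because the prices are pure noise, whatever "best" parameter pair the search finds is fitting randomness. This is a sketch of the exercise, not the book's test code.

```python
import numpy as np

# A simulated random-walk price series: there is nothing real to find here.
rng = np.random.default_rng(7)
price = 100 * np.cumprod(1 + rng.normal(0.0, 0.01, 2000))
log_ret = np.diff(np.log(price))

def sma(x, n):
    out = np.full(len(x), np.nan)
    out[n - 1:] = np.convolve(x, np.ones(n) / n, mode="valid")
    return out

def crossover_pnl(short_n, long_n):
    s, l = sma(price, short_n), sma(price, long_n)
    pos = np.where(s > l, 1.0, -1.0)   # long above, short below
    valid = ~np.isnan(l[:-1])          # need a full long-window history
    return float(np.sum(pos[:-1][valid] * log_ret[valid]))

# Exhaustive search over a grid of parameter pairs.
grid = {(s_, l_): crossover_pnl(s_, l_)
        for s_ in (1, 2, 5, 10, 20) for l_ in (50, 100, 150, 200)}
best = max(grid, key=grid.get)
print("best pair on random data:", best, f"{grid[best]:.2f}")
```

Whatever pair wins this search would be the "optimized system"; since the data contain no structure, its backtested edge cannot carry forward.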
Overoptimization can be accidental or deliberate. In the case of system
vendors selling trading systems to the public, optimized systems can produce
very impressive track records. Some systems are built by system designers
who lack the education and experience to avoid this trap; they may truly
believe they are producing something of value and are surprised when the
actual trading results do not match their backtests. More often, overoptimized
systems are created in a nefarious attempt to extract money from the public.
Designers can build and optimize a trading system in a single weekend that
shows dramatic results on a handful of markets. If they can sell a few copies
of the system for $2,000 to $3,000, that’s not a bad return for a few days’
work.
In these examples, the method and effect of optimization is obvious, but it
can also be more subtle and much harder to detect. It is even possible to
overoptimize a simple market study yourself, without realizing you are doing
so. The term overoptimization implies that there might be an appropriate
degree of optimization—which is correct. Some optimization is necessary and
is actually a vital component of the learning process. For instance, if you
wanted to do a research project on gap openings, what would be the
appropriate gap size to study? You might first start with 400 percent, but
would quickly discover that there may not be a single market gap that size in
your entire test universe. If you next look at 0.25% gaps, you will discover
that these are so common as to be meaningless; eventually you might settle on
gap openings between 5 and 20%. Each step does bring some dangers because
you are potentially overspecifying the question, but this is also how you learn
about the market’s movements. Here are some guidelines that will help you to
guard against overoptimization:
This is a deep subject and we have only scratched the surface. For the
automated system designer, this is an important area of study, and ideas like
systems that reoptimize themselves on a walk-forward basis, optimizing for
outcomes other than maximum net profit or considering the results of
optimization tests as multidimensional surfaces, are an important part of that
study. Most discretionary traders will only use quantitative studies as a
departure point for developing trading ideas, so it probably is sufficient to
have an idea about the most serious dangers and risks of optimization. When
in doubt, disregard the results of any study that you feel may be the result of
overoptimization or overspecification. Better to simply know you do not
know than to be misled by spurious information.
Equities  Futures  Forex  Random  Total
Down
Up
All
The results are not impressive for this trend indicator. Considering the
random column first to better understand the baseline, we do see a negative
excess return for the downtrend and a positive return for the uptrend
condition, with a slightly higher chance of a close up (51.7% of days close up
in the uptrend condition versus 51.4% for all days; this is not statistically
significant). Volatility is slightly higher for the downtrend, but roughly in line
across all groups. Turning to equities, we find something surprising: the
downtrend shows a very large, over 3%, positive excess return, while
the uptrend shows well over a 1% negative excess return; this is precisely the
opposite of what we should see if the uptrend indicator is valid. In fact, for
equities, this suggests we might be better off taking long trades in the
downtrend condition because we would be aligned with a favorable statistical
tailwind. Futures show a situation that is more like what we would expect,
with a fairly large negative excess return for downtrend, and a large positive
excess return for uptrend. Forex, paradoxically enough, looks more like
equities, but the actual excess returns are very small, and are not statistically
significant.
How can this be? If you try this experiment yourself, put a 50-period
moving average on a chart, and just eyeball it, you will see that the slope of
the moving average identifies great trend trades. It will catch every extended
trend trade and will keep you in the trade for the whole move—actually, for
the whole move and then some, and there’s the rub. The problem is the lag,
the same problem that any derived indicator faces. Whether based on moving
averages, trend lines, linear regression lines, or extrapolations of existing
data, they can respond to changes in the direction of momentum of prices
only after those changes have happened. A moving average slope indicator
will also get whipsawed frequently when the market is flat and the average is
rapidly flipping up and down. It is possible to introduce a band around the
moving average to filter some of this noise, but this will be at the expense of
making valid signals come even later.
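The slope-of-average signal discussed above is simple to express in code. The following Python sketch is my own illustration, not the exact test used in this section; the optional `band` parameter implements the noise filter just mentioned, at the cost of additional lag:

```python
def sma(prices, n):
    """Simple moving average; None until n prices are available."""
    return [None if i < n - 1 else sum(prices[i - n + 1:i + 1]) / n
            for i in range(len(prices))]

def slope_trend(prices, n=50, band=0.0):
    """+1 when the n-period SMA is rising, -1 when falling, 0 otherwise.

    `band` is an optional fractional filter: the average must change by
    more than band * its level before a new trend is declared, which
    reduces whipsaws but makes valid signals come even later.
    """
    ma = sma(prices, n)
    signal = []
    for prev, cur in zip(ma, ma[1:]):
        if prev is None or cur is None:
            signal.append(0)
        elif cur > prev * (1 + band):
            signal.append(1)
        elif cur < prev * (1 - band):
            signal.append(-1)
        else:
            signal.append(0)
    return [0] + signal  # align with the price series
```

Running this over any sustained trend shows the lag clearly: the signal flips only well after price has already reversed.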
Figure 14.13 illustrates the problem with a 50-period moving average
applied to a daily chart of the U.S. Dollar Index. It would have been slightly
profitable to trade this simple trend indicator on this particular chart, but
notice how much of the move is given up before the indicator flips. The chart
begins with the market in an uptrend (moving average sloping up), and nearly
one-third of the entire chart has to be retraced before the moving average flips
down. Once the market bottoms in November, a substantial rally ensues
before the trend indicator flips up. This lag, coupled with the fact that markets
tend to make sharp reversals from both bottoms and tops, greatly reduces the
utility of this tool as a trend indicator.
Figure 14.13 Slope of 50-Period Moving Average as a Trend Indicator
Notice how much of each trend move is given up by this tool.
Another common idea is to use the position of two or more moving
averages to confirm a trend change. For example, three moving averages of
different lengths could be applied to a chart, and the market could be assumed
to be in an uptrend when the averages are in the correct order, meaning that
the shortest average would be above the medium-length average and both of
those would be above the longer-term moving average, with the reverse
conditions being used for a downtrend. This type of plan allows for
significant stretches of time when the trend is undefined; for instance, when
the medium-length average is above the longer-term average, but the shortest
average is in between the two. This, like all moving average crosses, is
attractive visually because the eye is always drawn to big winners, to the clear
trends that this tool catches. However, like all moving average crosses, the
whipsaws erode all profits in most markets, leaving the tool with no
quantifiable edge. In addition, more moving averages usually introduce more
lag, with no measurable improvement compared to a simple moving average
crossover.
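The ordering rule described above can be sketched in a few lines of Python. This is an illustration only; the 10/20/50 periods are placeholder choices, and the helper names are my own:

```python
def sma_last(prices, n):
    """Simple moving average of the most recent n prices."""
    return sum(prices[-n:]) / n

def triple_ma_trend(prices, fast=10, medium=20, slow=50):
    """Return 'up', 'down', or 'undefined' from the ordering of three SMAs.

    Uptrend requires fast > medium > slow; downtrend the reverse.
    Any other ordering leaves the trend undefined, which can cover
    significant stretches of time.
    """
    if len(prices) < slow:
        return 'undefined'
    f, m, s = (sma_last(prices, n) for n in (fast, medium, slow))
    if f > m > s:
        return 'up'
    if f < m < s:
        return 'down'
    return 'undefined'
```

In a clean trend the three averages fall into order quickly, which is what makes the tool visually attractive; the whipsaw losses accumulate in the long undefined and flat stretches between trends.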
One of the most popular moving average trend indicators today is based on
simple 10-, 20-, and 50-period moving averages. Traders using this tool are
told to take long trades only when it indicates an uptrend and to short only
when it indicates a downtrend. It is reasonable to ask how the market behaves
in both of those conditions. Table 14.11 shows that traders using this tool in
Equities (and it is primarily used by stock traders) will consistently find
themselves on the wrong side of the market, fighting the underlying statistical
tendency. Simply put, stocks are more likely to go down when this tool flags
an uptrend, and up when it flags a downtrend—traders using it as prescribed
are doing exactly the wrong thing. For the other asset classes, the message is
mixed. There is possibly an edge in futures, particularly on the short side, and
forex looks more random than the actual randomly generated test set. At least
in this sample of markets, this test suggests that traders relying on this trend
tool or on tools derived from it are likely to have a difficult time overcoming
these headwinds.
[Table 14.11: rows Down / Up / All; columns Equities, Futures, Forex, Random, Total]
Table 14.12 Buy and Hold Compared to Long-Only above 200-Day Moving Average, DJIA, 1960–2010
Conclusions
This section has looked at many variations of tests on moving averages. On
one hand, the answers were not crystal clear because there were some
interesting and statistically significant tendencies in some of the tests.
However, the same tendencies are present regardless of the specific period of
average tested, even if the length of the average changes randomly from bar
to bar, and often even without the moving average being present. This
evidence strongly contradicts the claim that any one moving average is
significant or special. Traders depend on moving averages because they are
lines on their charts and they sometimes seem to support prices, but this is a
trick of the eye. If you are depending on moving averages, considering the
50-, 10-, or 100-day moving average to be support or resistance, you are
trading a concept that has no statistical validity.
This section also taught us some potentially useful things about market
movements. For example, we saw that equities, futures, and forex markets
sometimes show significant differences in the way they trade. It probably
does not make sense to approach them all the same way and to trade them
with the same systems and methods. We also saw evidence that some of the
tendencies that seem to be around moving averages may actually be deeper,
more universal elements of price action. For instance, the tendency for some
asset prices to bounce after trading down to a moving average may simply be
the tendency for prices to bounce after trading down. We also took a brief
look at trend indicators derived from moving averages, and saw that they
suffered from enough lag and mean reversion that they may often put the
trader on the wrong side of the market.
variations (e.g., Tuesday-to-Tuesday weeks, or months tied to expiration
cycles in options or futures) are even used in some applications. Here is an
important clue: if we see something that “always works” no matter what we
change or tweak, our first assumption should be that there is some error in our
thinking. The other alternative, that the tendency is so strong and the pattern
so powerful, easily descends into magical thinking and becomes a self-
sustaining proof. (A parallel outside of markets might be a panacea medicine
—a powerful medicine that can cure any ill in any range of unrelated
conditions, but can do no harm, regardless of how much is taken. This type of
medicine does not exist, and any medicine not capable of doing harm most
likely does nothing at all.)
If you can randomly define the parameters and get the same result, isn’t that
suggestive of some random process at work? This should be our first warning
that perhaps our intuition is faulty, and there is something else going on here.
Figure 15.2 shows a random walk path with the maximum,
minimum, and open marked. The next tests will replicate this procedure
thousands of times to help build intuition about the location of the open under
a random walk.
Figure 15.2 A Single Path Through a Random Walk Tree, with the Max,
Min, and Open Marked
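A simulation in that spirit can be sketched as follows. This is my own illustration using Gaussian steps; the step count, path count, and the relative-location measure are arbitrary choices, not the book's exact procedure:

```python
import random

def open_location(n_steps=100, n_paths=10_000, seed=42):
    """Simulate random walks and measure where the open (first value)
    sits within each path's high-low range: 0.0 = open at the low,
    1.0 = open at the high. Returns the list of relative locations."""
    rng = random.Random(seed)
    locations = []
    for _ in range(n_paths):
        price, path = 0.0, [0.0]
        for _ in range(n_steps):
            price += rng.gauss(0, 1)
            path.append(price)
        hi, lo = max(path), min(path)
        if hi > lo:  # skip degenerate flat paths
            locations.append((path[0] - lo) / (hi - lo))
    return locations
```

Even under a pure random walk, the open clusters toward the extremes of the range far more often than naive intuition suggests, which is exactly why "the open is near the high or low" is not, by itself, evidence of any nonrandom tendency.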
Up to this point, the goal of this part has been twofold: One, to give
fairly in-depth examples of the kind of thinking and quantitative
analysis that can help to separate the wheat from the chaff, and valid
trading ideas from worthless, random ideas. This is not always simple, as
sometimes even properly defining the question is difficult, and results are
rarely black and white. The second goal has been to dispel some myths about
what works in the market. No moving average consistently provides
statistically significant support. No crossing of moving averages or slope of
moving averages provides a statistically significant trend indicator. What
tendencies we do see around moving averages actually occur in giant zones
around the moving average; we have seen no justification whatsoever for
watching any specific moving average value. There is no point in noting that
a market crosses the 100-day or 200-day moving average. No Fibonacci level
provides any significant reference point—a random number is as good as any
Fibonacci level. None of these things are any better than a coin flip!
Though this may be disheartening to some traders, I believe there is an
important message here—it is better to know you don’t know than to continue
to waste your time and energy trading futile concepts. If you have been using
these concepts in your trading, objectively consider your results. If they are
performing well, meaning that you have substantial and consistent profits
over a large sample size, then you have probably incorporated them into a
framework that includes other inputs, and your positive results depend on
many more elements. However, if you are struggling and are not pleased with
your results, maybe it is time to reevaluate the tools you are using. Give up
your preconceptions and your beliefs, and commit to finding what does work
in the market. Struggling traders using futile concepts have a simple choice—
let go of beliefs and preconceptions that are holding you back, or let go of your
money. Let’s now turn to some ideas that do have validity, that reveal
significant truths about the nature of the market’s movements.
Mean Reversion
Mean reversion is a term used in several different contexts to explain the
markets’ tendency to reverse after large movements in one direction. In its
trivial form, traders say that price returns to a moving average. This is true, as
Figure 16.31 shows, but it is not always a useful concept, because there are
two ways to get to the average: price can move to the average or the average
can move to price. Since traders are paid on price movement, the second case
will usually result in trading losses. All of the points marked C in Figure
16.31 would have presented profitable mean-reversion trades, assuming that
traders had some tool to identify the points where price had moved a
significant distance from the moving average. They could simply have faded
these moves by shorting above the average, buying below, and exiting the
market when it came back to the moving average. This is the pattern that most
traders have in mind when they speak of mean reversion, but they forget the
possibilities of the sequence marked A to B. Here, the trader shorting at A
would eventually have exited the trade when price did, in fact, revert to the
average at point B, but the average was so far above the entry price that a
substantial loss would have resulted.
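The fade logic described above can be sketched as follows. This is a minimal illustration; the 20-period average and the two-standard-deviation trigger are arbitrary choices, and exits at the average are left to the caller:

```python
from statistics import mean, stdev

def fade_signal(prices, n=20, threshold=2.0):
    """Mean-reversion fade: -1 (short) when the last price is more than
    `threshold` standard deviations above the n-period average, +1 (buy)
    when it is that far below, and 0 otherwise."""
    if len(prices) < n:
        return 0
    window = prices[-n:]
    avg, sd = mean(window), stdev(window)
    if sd == 0:
        return 0  # flat series: no meaningful distance from the average
    z = (prices[-1] - avg) / sd
    if z > threshold:
        return -1
    if z < -threshold:
        return 1
    return 0
```

Note that a signal like this addresses only the C-type entries in Figure 16.31; nothing in it protects against the A-to-B sequence, where the average drifts toward price and the fade produces a large loss.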
Mean reversion exists in two slightly different contexts: mean reversion
after a single large move (usually one bar on a chart) or mean reversion after a
more extended move (usually multiple bars on a chart). In reality, these are
the same concept on different time frames: what looks like a large multibar
move will usually resolve into a single large bar on a higher time frame. A
single large bar will usually include multibar trends on the lower time frame.
The bar divisions and time frames that traders create are more or less arbitrary
divisions; one of the skills discretionary traders work hard to develop is the
ability to see beyond those divisions to perceive the flow of the market for
what it really is.
Table 16.8 Trading 20-Day Channel Breakouts (Entry on Close, Five Days
between Consecutive Entries)
You may have wondered what a test of breakout trades is doing in a section
on mean reversion. The reason is simple: failed breakout trades are evidence
of mean reversion. Table 16.8 is another strong affirmation of mean reversion
in equities, showing negative returns for our buy signal and positive returns
for shorts—the signal conditions were exactly wrong for this sample. It
appears that there may be an exploitable opportunity by shorting new 20-day
highs and buying new 20-day lows in stocks, fading the channel breakout.
However, it would be a mistake to assume that you could apply the same
system to futures and forex just because it works in equities. The futures
sample seems to suggest an edge in going with the direction of the breakout.
Positive returns for buys and negative returns for shorts, though probably not
statistically significant (meaning that the trader actually trading these would
see extreme variability in the results), suggest that fading the channel
breakout in futures could be painful. This is also the first consistent signal we
have seen in forex, as all of the returns on the sell side are negative. However,
the small size of these returns, a few basis points at most, is a warning that
this may not be an easily exploitable tendency.
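The entry logic in this test can be sketched as follows. This is an illustration only: channels here are built from closes for simplicity, whereas a published test may define them from highs and lows, and the spacing rule simply suppresses entries rather than managing positions:

```python
def channel_breakout_signals(closes, period=20, min_gap=5):
    """Mark breakout entries: +1 when a close exceeds the highest close
    of the prior `period` bars, -1 when it breaks below the lowest,
    with at least `min_gap` bars between consecutive entries."""
    signals = [0] * len(closes)
    last_entry = -min_gap  # allow the first qualifying entry
    for i in range(period, len(closes)):
        if i - last_entry < min_gap:
            continue  # enforce spacing between consecutive entries
        window = closes[i - period:i]
        if closes[i] > max(window):
            signals[i], last_entry = 1, i
        elif closes[i] < min(window):
            signals[i], last_entry = -1, i
    return signals
```

Changing `period` from 20 to 100 reproduces the longer-channel variation of the test; the signal logic is otherwise identical.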
Twenty-day channel breakouts are common, occurring on approximately 4
percent of all trading days; using a longer period for the breakout might result
in more significant levels, so Table 16.9 shows the results of a 100-day
channel breakout. At this point, futures and forex finally start to show
something interesting, and we see that positioning with the direction of the
breakout is clearly the correct trade in these markets. Particularly in forex, the
signal size is small, but means and medians are consistently on the same side
of zero, and the series appears to be flirting with statistical significance. Is this
a stand-alone trading system? Probably not, but it is pretty strong evidence of
an underlying tendency in the market. Note that mean reversion is still alive
and well in equities in this test—this is perhaps the clearest evidence so far of
different behavior between these asset classes.
Table 16.9 Trading 100-Day Channel Breakouts (Entry on Close, Five Days
between Consecutive Entries)
[Table: columns Equities—Buy, Futures—Buy, Equities—Sell, Futures—Sell]
First of all, note that forex is excluded from this test. The reason is that
volatility fluctuates differently in forex, and these conditions produced only
five long trades and two short trades for forex. It would be extremely
misleading to draw conclusions from such a small sample; volatility
compression is alive and well in forex, but this particular way to quantify it
does not work very well. Leaving that aside for now, this test is one of the
most convincing we have seen so far. Means and medians are consistently on
the correct side of zero, which suggests that there is a real, underlying
tendency driving this trade. Two factors complicate this analysis. First,
sample sizes are small across the board, as this setup occurs approximately
once in every 500 trading days. Second, it is by definition a volatile trade,
with a wide dispersion of returns.
Up to this point, every test on equities has shown a clear tendency for mean
reversion—based on those past tests, it seems as though you could actually
trade equities simply by fading large moves. However, the addition of a very
simple filter has completely changed the results, and we have now identified a
subset of those large days that are more likely to continue in the same
direction. Furthermore, this filter condition seems to strengthen the tendency
for continuation in futures as well. Can we use this information to filter out
profitable breakout trades? Could we also use it to increase the probability of
success of mean reversion trades, by not taking them in times of volatility
compression? The answer to both questions is a resounding yes.
Pullbacks
The other condition that can set a market up for a range expansion move is
a pullback after a sharp directional move. What usually happens is the large
move exhausts itself, mean reversion takes over, and part of the move is
reversed while the market reaches an equilibrium point. After a period of
relative rest, the original movement reasserts itself and the market makes
another thrust in the initial direction. (We looked at this structural tendency in
some detail in the section on Fibonacci retracements, and it was one of the
most important trading patterns from earlier sections of this work.) The
concept of impulse, retracement, impulse is valid—it actually is one of the
most important patterns in the market.
We also saw earlier in this chapter that expecting moving averages to
provide support and resistance is not likely to be a path to profitable trading.
However, there is more to the story. A moving average does mark a position
of relative equilibrium and balance, but the key question is “relative to what?”
The answer—relative to the market’s excursions from that particular average
—implies that some way must be found to standardize those swings and the
distances from the average. Fortunately, both Keltner channels and Bollinger
bands present an ideal way to do this. They adapt to the volatility of the
underlying market, and so, when properly calibrated, they mark significant
extensions in all markets and all time frames. Figure 16.8 shows four short
entries according to the following criteria:
Shorts are allowed after the market closes below the lower
Keltner channel.
The entry trigger is a touch of the 20-period exponential
moving average.
Only one entry is allowed per touch of the channel; once a short
entry has been taken, price must again close outside the lower
channel to set up another potential short.
Rules are symmetrical to the buy side.
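The rules above amount to a small state machine, which can be sketched like this. The illustration assumes the EMA and channel series are precomputed and passed in as parallel lists; the function name and interface are my own:

```python
def keltner_pullback_entries(closes, highs, lows, emas, lower, upper):
    """Pullback entries per the rules above: a close outside a channel
    arms a setup; the next touch of the EMA triggers one entry; the
    setup must be re-armed by another close outside the channel.
    Returns (bar_index, 'short' | 'long') tuples."""
    entries, armed = [], None  # armed is 'short', 'long', or None
    for i in range(len(closes)):
        # Check the trigger first: a bar that closes outside the channel
        # cannot also trigger its own entry (intrabar order is unknowable
        # from bar data, so this is a conservative simplification).
        if armed == 'short' and highs[i] >= emas[i]:
            entries.append((i, 'short'))
            armed = None
        elif armed == 'long' and lows[i] <= emas[i]:
            entries.append((i, 'long'))
            armed = None
        # Arm (or re-arm) on a close outside the channel.
        if closes[i] < lower[i]:
            armed = 'short'
        elif closes[i] > upper[i]:
            armed = 'long'
    return entries
```

The one-entry-per-touch rule falls out naturally from clearing the armed state after each entry: no new trade is possible until price again closes outside the channel.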
Fisher, Mark B. The Logical Trader: Applying a Method to the Madness.
Hoboken, NJ: John Wiley & Sons, 2002.
Grimes, Adam H. The Art and Science of Technical Analysis: Market
Structure, Price Action, and Trading Strategies. Hoboken, NJ: John Wiley
& Sons, 2012.
Grossman, S., and J. Stiglitz. “On the Impossibility of Informationally
Efficient Markets.” American Economic Review 70 (1980): 393–408.
Harris, Larry. Trading and Exchanges: Market Microstructure for
Practitioners. New York: Oxford University Press, 2002.
Hintze, Jerry L., and Ray D. Nelson. “Violin Plots: A Box Plot-Density Trace
Synergism.” American Statistician 52, no. 2 (1998): 181–184.
Jung, C. G. Psychology and Alchemy. Vol. 12 of Collected Works of C. G.
Jung. Princeton, NJ: Princeton University Press, 1980.
Kelly, J. L., Jr. “A New Interpretation of Information Rate.” Bell System
Technical Journal 35 (1956): 917–926.
Kirkpatrick, Charles D., II. Technical Analysis: The Complete Resource for
Financial Market Technicians. Upper Saddle River, NJ: FT Press, 2006.
Langer, E. J. “The Illusion of Control.” Journal of Personality and Social
Psychology 32, no. 2 (1975): 311–328.
Lo, Andrew. “Reconciling Efficient Markets with Behavioral Finance: The
Adaptive Markets Hypothesis.” Journal of Investment Consulting,
forthcoming.
Lo, Andrew W., and A. Craig MacKinlay. A Non-Random Walk Down Wall
Street. Princeton, NJ: Princeton University Press, 1999.
Lucas, Robert E., Jr. “Asset Prices in an Exchange Economy.” Econometrica
46, no. 6 (1978): 1429–1445.
Macnamara, Brook N., David Z. Hambrick, and Frederick L. Oswald.
“Deliberate Practice and Performance in Music, Games, Sports, Education,
and Professions: A Meta-Analysis.” Psychological Science 25, no. 8
(2014): 1608–1618.
Malkiel, Burton G. “The Efficient Market Hypothesis and Its Critics.”
Journal of Economic Perspectives 17, no. 1 (2003): 59–82.
Mandelbrot, Benoît, and Richard L. Hudson. The Misbehavior of Markets: A
Fractal View of Financial Turbulence. New York: Basic Books, 2006.
Mauboussin, Michael J. “Untangling Skill and Luck: How to Think about
Outcomes—Past, Present and Future.” Mauboussin on Strategy, Legg Mason
Capital Management, July 2010.
Miles, Jeremy, and Mark Shevlin. Applying Regression and Correlation: A
Guide for Students and Researchers. Thousand Oaks, CA: Sage
Publications, 2000.
Niederhoffer, Victor. The Education of a Speculator. New York: John Wiley &
Sons, 1998.
Plummer, Tony. Forecasting Financial Markets: The Psychology of
Successful Investing, 6th ed. London: Kogan Page, 2010.
Raschke, Linda Bradford, and Laurence A. Connors. Street Smarts: High
Probability Short-Term Trading Strategies. Jersey City, NJ: M. Gordon
Publishing Group, 1996.
Schabacker, Richard Wallace. Technical Analysis and Stock Market Profits.
New York: Forbes Publishing Co., 1932.
Schabacker, Richard Wallace. Stock Market Profits. New York: Forbes
Publishing Co., 1934.
Schwager, Jack D. Market Wizards: Interviews with Top Traders. New York:
HarperCollins, 1992.
Snedecor, George W., and William G. Cochran. Statistical Methods, 8th ed.
Ames: Iowa State University Press, 1989.
Soros, George. The Alchemy of Finance: Reading the Mind of the Market.
New York: John Wiley & Sons, 1994.
Sperandeo, Victor. Trader Vic: Methods of a Wall Street Master. New York:
John Wiley & Sons, 1993.
Sperandeo, Victor. Trader Vic II: Principles of Professional Speculation. New
York: John Wiley & Sons, 1998.
Steenbarger, Brett. Enhancing Trader Performance: Proven Strategies From
the Cutting Edge of Trading Psychology. New York: John Wiley & Sons,
2008.
Taleb, Nassim. Fooled by Randomness: The Hidden Role of Chance in Life
and in the Markets. New York: Random House, 2008.
Tsay, Ruey S. Analysis of Financial Time Series. Hoboken, NJ: John Wiley &
Sons, 2005.
Vince, Ralph. The Leverage Space Trading Model: Reconciling Portfolio
Management Strategies and Economic Theory. Hoboken, NJ: John Wiley &
Sons, 2009.
Waitzkin, Josh. The Art of Learning: An Inner Journey to Optimal
Performance. New York: Free Press, 2008.
Wasserman, Larry. All of Nonparametric Statistics. New York: Springer,
2010.
Wilder, J. Welles, Jr. New Concepts in Technical Trading Systems.
McLeansville, NC: Trend Research, 1978.