Table of Contents
Journal Editor & Reviewers

Editor
Associate Editor
Manuscript Reviewers
Julie Dahlquist, Ph.D., CMT (University of Texas, San Antonio, Texas)
Cynthia Kase, CMT (Kase and Company, Inc., Albuquerque, New Mexico)
Michael J. Moody, CMT (Dorsey, Wright & Associates, Pasadena, California)
Journal of Technical Analysis is published by the Market Technicians Association, Inc. (MTA), 61 Broadway, Suite 514, New York, NY 10006. Its purpose is to
promote the investigation and analysis of the price and volume activities of the world’s financial markets. Journal of Technical Analysis is distributed to individuals
(both academic and practitioner) and libraries in the United States, Canada and several other countries in Europe and Asia. Journal of Technical Analysis is copyrighted
by the Market Technicians Association and registered with the Library of Congress. All rights are reserved.
Respectfully,
Connie Brown, CMT
1. All submitted manuscripts must be original work that is not under submission at another journal or under consideration for publication in another form, such as a monograph or chapter of a book. Authors of submitted papers are obligated not to submit their paper for publication elsewhere until the Journal of Technical Analysis renders an editorial decision on their submission. Further, authors of accepted papers are prohibited from publishing the results in other publications that appear before the paper is published in the Journal of Technical Analysis, unless they receive approval for doing so from the editor. Upon acceptance of the paper for publication, we maintain the right to make minor revisions or to return the manuscript to the author for major revisions.

2. Authors must submit papers electronically in Word (*.doc) format, with all figures (charts) in *.jpg or *.bmp format, to the editor, Connie Brown ([email protected]). Manuscripts must be clearly typed with double spacing. The pitch must not exceed 12 characters per inch, and the character height must be at least 10 points.

3. The cover page shall contain the title of the paper and an abstract of not more than 100 words. The title page should not include the names of the authors, their affiliations, or any other identifying information. That information, plus a short biography including educational background, professional background, special designations such as Ph.D., CMT, CFA, etc., and present position and title, must be submitted on a separate page.

4. An acknowledgement footnote should not be included on the paper but should also be submitted on a separate page.

5. The introductory section must have no heading or number. Subsequent headings should be given Roman numerals. Subsection headings should be lettered A, B, C, etc.

6. The article should end with a non-technical summary statement of the main conclusions. Lengthy mathematical proofs and very extensive detailed tables or charts should be placed in an appendix or omitted entirely. The author should make every effort to explain the meaning of mathematical proofs.

… be self-contained. Each figure must have a title followed by a descriptive legend. Final figures for accepted papers must be submitted as either *.jpg or *.bmp files.

10. Equations: All but very short mathematical expressions should be displayed on a separate line and centered. Equations must be numbered consecutively on the right margin, using Arabic numerals in parentheses. Use Greek letters only when necessary. Do not use a dot over a variable to denote a time derivative; only D operator notations are acceptable.

11. References: References to publications in the text should appear as follows: "Jensen and Meckling (1976) report that …."

References must be typed on a separate page, double-spaced, in alphabetical order by the leading author's last name. At the end of the manuscript (before tables and figures), the complete list of references should be listed in the formats that follow:

For monographs or books:
Fama, Eugene F., and Merton H. Miller, 1972, The Theory of Finance (Dryden Press, Hinsdale, IL)

For contributions to major works:
Grossman, Sanford J., and Oliver D. Hart, 1982, Corporate financial structure and managerial incentives, in John J. McCall, ed.: The Economics of Information and Uncertainty (University of Chicago Press, Chicago, IL)

For Periodicals:
Jensen, Michael C., and William H. Meckling, 1976, Theory of the firm: Managerial behavior, agency costs and ownership structure, Journal of Financial Economics 3, 305-360

Please note where words are CAPITALIZED, italics are used, (parentheses) are used, order of wording, and the position of names and their order.
1
Market Prognostication
In his treatise on stock market patterns, the late Professor Harry V. Roberts1 observed that “of all economic time series, the history of stock prices, both individual
and aggregate, has probably been most widely and intensively studied,” and “patterns of technical analysis may be little if nothing more than a statistical artifact.”2
Ibbotson and Sinquefield maintain that historical stock price data cannot be used to predict daily, weekly or monthly percent changes in the market averages.
However, they do claim the ability to predict in advance the probability that the market will move between +X% and -Y% over a specific period.3 Only to this very
limited extent – forecasting the probabilities of return – can historical stock price movements be considered indicative of future price movements.
In Chart 1, we present a histogram of the five-day rate of change (ROC) in the S&P 500 since 1928. The five-day ROC of stock prices has ranged from -27% to +24%. This normal distribution4 is strong evidence that five-day changes in stock prices are effectively random. Out of 21,165 observations of five-day ROCs, there have been 138 declines exceeding -8% (0.65% of total) and 150 gains greater than +8% (0.71% of total). Accordingly, Ibbotson and Sinquefield would maintain that
over any given 5-day period, the probability of the S&P 500 gaining or losing 8% or more is 1.36%. Stated differently, the probabilities of the S&P 500 returning
between -7.99% and +7.99% are 98.64%.
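The quoted figures follow directly from the counts above; a quick arithmetic check (a minimal Python sketch using only numbers taken from the text):

# Counts quoted above for five-day ROC observations since 1928
total_observations = 21165
declines_beyond_minus_8 = 138    # five-day losses exceeding -8%
gains_beyond_plus_8 = 150        # five-day gains greater than +8%

p_decline = declines_beyond_minus_8 / total_observations   # about 0.65%
p_gain = gains_beyond_plus_8 / total_observations          # about 0.71%
p_extreme = p_decline + p_gain                              # about 1.36%
print(f"P(move of 8% or more) = {p_extreme:.2%}; P(-7.99%..+7.99%) = {1 - p_extreme:.2%}")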
Professor Jeremy Siegel adopts this idea. Siegel states, “The total return on equities dominates all other assets.”5 Based on probabilities, we can be nearly certain
that over the long-term, stocks will outperform bonds, gold, commodities, inflation, real estate, and other tradable investments.
Are these ideas true? Are stock price movements effectively random? Do historical stock market returns indicate probabilities of future returns? Can statistical
analysis tell us that the equities market will continue to outperform all other assets? Can stock market data never indicate that over a given period of time the market
will increase at a rate greater than its historical gain? Can stock market data never point toward the probability of a decline overwhelming the probability of a
rally?
Chart 1
1 Graduate School of Business, University of Chicago, 1949-1992
2 The Journal of Finance, Vol. 14, No. 1 (Mar. 1959). Roberts does admit that "phenomena that can be only described as chance today," such as the behavior of stock prices and the emission of alpha particles in radioactive decay, "may ultimately be understood in a deeper sense."
3 Stocks, Bonds, etc.: 1989 edition, Ibbotson & Sinquefield, Ch. 10
4 The true normal distribution is a mathematical abstraction, never perfectly observed in nature.
5 Stocks for the Long Run, J. Siegel
17 Full House by Stephen Jay Gould, pages 149-151
18 See The Essays of Warren Buffett, Cunningham, pg. 65
19 84th Congress, 1st session, "Factors Affecting the Buying and Selling of Securities," March 11, 1955
20 Technical disciplines are indeed a mystery. We do know from experience, though, that these disciplines work.
Chart 2 displays an arrow each time the S&P 500 has rallied 8%21 or more over a five-day period. See Appendix 1 for all signal dates.
Chart 2
Five-Day ROC +8%
Note that the periods during which those extraordinary events occur are often proximate to significant turning points.22
We are not the first to notice the predictive ability of this raw five-day ROC data.
21 Readers should note that prior to March 1957, the S&P 500 consisted of only 90 stocks and was therefore less suitable for general market analysis.
22
Table 1
Five-Day ROC 8% or greater 4-6 days after 90-day low
Recognizing the existence of reflecting boundaries and using price and time data alone, we have created an indicator in the S&P 500 Index that signaled within
four to six days of the historic lows of:
And that signaled within four days of the triple bottom that began the latest bull market:
23 William J. O'Neil has elaborated on this concept in his market studies.
Note that these signals, which use a negative 8% parameter, often occur directly proximate to a significant turning point.
Using these -8% five-day ROC signal dates, we eliminate all signals that are not proximate to potential lows. We therefore include only those signals that take place as the market is trading at a maximum of one day24 after a six-month low. All signal dates that are two days or more after a six-month low are eliminated.
Having utilized the two main legs of technical analysis, price and time, we will now introduce the third leg of technical analysis, volume. Five-day market volume can be represented by a standard curve, yet significant increases in market volume are not randomly distributed. The following (Chart 4) indicates each time the five-day average of daily volume was at its highest level in 250 days. Out of 20,876 observations since 1929, there have been 425 instances (2.04% of total) of five-day average daily volume at a 250-day high. See Appendix 4 for all dates on which this occurred.
Chart 4
Five-Day Volume Highest in 250 Days
24
Oversold action may signal within one day of a low. Only thrust action within three days of a low is suspect
Table 2
Five-Day ROC -8%
Six-Month Low
250-Day Volume High

Table 3
Five-Day ROC -8%
Six-Month Low
250-Day Volume High
Last Signal in Series
This method can be refined further. We wait until a series of five-day 8% declines ends. Since we cannot know when that final day of a series occurs until a day
after the series ends, we set our signal dates as one day after a -8% ROC extreme. To accommodate this adjustment we allow our buy signal to lag the 250-day volume
boundary and the six-month low boundary by a maximum of seven days. (see table 3)
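For readers who wish to experiment with these rules, the following is a rough sketch of one reading of them in Python. It assumes daily close and volume series for the S&P 500; the six-month low is approximated as a 126-trading-day low, and the same seven-day window is used to align the volume boundary and the low with the signal, so it illustrates the logic rather than reproducing the author's exact signal dates.

import pandas as pd

def thrust_buy_signals(close: pd.Series, volume: pd.Series,
                       roc_days: int = 5, thrust: float = -0.08,
                       low_window: int = 126, vol_window: int = 250,
                       align: int = 7) -> pd.Series:
    """True one day after the last day of a run of -8% five-day ROC readings,
    provided a six-month low and a 250-day high in five-day average volume
    occurred within the alignment window."""
    roc5 = close.pct_change(roc_days)
    thrust_day = roc5 <= thrust
    run_end = thrust_day & ~thrust_day.shift(-1, fill_value=False)   # last day of a series
    signal_day = run_end.shift(1, fill_value=False)                  # signal one day later

    six_month_low = close <= close.rolling(low_window).min()         # approx. six months of trading days
    vol5 = volume.rolling(5).mean()
    vol_high = vol5 >= vol5.rolling(vol_window).max()                # five-day average volume at a 250-day high

    near_low = six_month_low.rolling(align, min_periods=1).max() > 0
    near_vol_high = vol_high.rolling(align, min_periods=1).max() > 0
    return signal_day & near_low & near_vol_high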
Recognizing the existence of reflecting boundaries, and using price, volume and time alone, we have created an indicator in the S&P 500 Index that signaled within four days of the historic lows of: November 13, 1929; October 19, 1987; July 23, 2002; and near the final low of June 26, 1962.
Appendix 7
Table 4a SELL SIGNAL
Milton W. Berg, CFA, is a Market Analyst for Duquesne Capital Management, L.L.C.
2
When securities change hands on a securities auction market, the volume of shares bought always matches the volume sold on executed orders. When the price rises, the upward movement reflects that demand exceeds supply, or that buyers are in control. Likewise, when the price falls, it implies that supply exceeds demand, or that sellers are in control. Over time, these trends of supply and demand form accumulation and distribution patterns. What if there were a way to look deep inside price and volume trends to determine whether current prices are supported by volume? That is the objective of the Volume Price Confirmation Indicator (VPCI), a methodology that measures the intrinsic relationship between price and volume.
The Volume Price Confirmation Indicator, or VPCI, exposes the relationship between the prevailing price trend and volume, as either confirming or contradicting the price trend, thereby giving notice of possible impending price movements. This paper discusses the derivation and components of the VPCI and explains how to use it. We also review comprehensive testing of the VPCI and present further applications using the indicator.
In exchange markets, price results from an agreement between buyers and sellers to exchange, despite their different appraisals of the exchanged item’s value.
One opinion may have legitimate fundamental grounds for evaluation; the other may be pure nonsense. However, to the market, both are equal. Price represents the
convictions, emotions and volition of investors.1 It is not a constant, but rather is changed and influenced over time by information, opinions and emotions.
Market volume represents the number of shares traded over a given time period. It is a measurement of the participation, enthusiasm, and interest in a given
security. Volume can be thought of as the force that drives the market. Force or volume is defined as power exerted against support or resistance.2 In physics, force
is a vector quantity that tends to produce acceleration.3 The same is true of market volume. Volume substantiates, energizes, and empowers price. When volume
increases, it confirms price direction; when volume decreases, it contradicts price direction. In theory, increases in volume generally precede significant price
movements. This basic tenet of technical analysis, that volume precedes price, has been repeated as a mantra since the days of Charles Dow.4 Within these two
independently derived variables, price and volume, exists an intrinsic relationship. When examined conjointly, price and volume give indications of supply and
demand that neither could provide independently.
short-term average volume for 10 days is 1.5 million shares a day, and the long-term average volume for 50 days is 750,000 shares per day. The VM equals 2
(1,500,000/750,000). This calculation is then multiplied by the VPC+/- after it has been multiplied by the VPR. Now we have all the information necessary to
calculate the VPCI. The VPC+ confirmation of +1.5 is multiplied by the VPR of 1.25, giving 1.875. Then 1.875 is multiplied by the VM of 2, giving a VPCI of 3.75.
Although this number is indicative of an issue under very strong volume-price confirmation, this information serves best relative to the current and prior price trend
and relative to recent VPCI levels. Next, we discuss how to properly use the VPCI.
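A compact sketch of the full calculation in Python. The excerpt above spells out only the volume multiplier and the final multiplication, so the VPC and VPR definitions below follow the commonly published construction of the indicator and should be read as assumptions rather than a transcription of the author's code.

import pandas as pd

def vpci(close: pd.Series, volume: pd.Series, short_n: int = 10, long_n: int = 50) -> pd.Series:
    """VPCI = VPC * VPR * VM (assumed construction):
    VPC = long-term VWMA - long-term SMA    (volume-price confirmation/contradiction)
    VPR = short-term VWMA / short-term SMA  (volume-price ratio)
    VM  = short-term average volume / long-term average volume (volume multiplier)"""
    vwma = lambda n: (close * volume).rolling(n).sum() / volume.rolling(n).sum()
    sma = lambda n: close.rolling(n).mean()
    vpc = vwma(long_n) - sma(long_n)
    vpr = vwma(short_n) / sma(short_n)
    vm = volume.rolling(short_n).mean() / volume.rolling(long_n).mean()
    return vpc * vpr * vm

# Worked numbers from the text: a VPC+ of +1.5 times a VPR of 1.25 gives 1.875,
# and 1.875 times a VM of 2 (1,500,000 / 750,000) gives a VPCI of 3.75.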
A falling price trend reveals a market driven by fear. A falling price trend without volume reveals apathy, fear without increasing energy. Unlike greed, fear is self-sustaining, and may endure for long periods without increasing fuel or energy. Adding energy to fear can be likened to adding fuel to a fire and is generally bearish until the VPCI reverses. In such cases, weak-minded investors, overcome by fear, become irrationally fearful until the selling climax reaches a state of maximum homogeneity. At this point, ownership held by weak investors has been purged, producing a type of heat-death capitulation. These occurrences may be visualized by the VPCI falling below the lower standard deviation band of a Bollinger Band of the VPCI, and then rising above the lower band, forming a "V" bottom.
Chart 4 The VPCI predicted the last major market pullback in May 2006
Excluding dividends or interest, OBV's annualized rate of return in the above system was -1.57%, whereas the VPCI's annualized return was 8.11%, an outperformance of over 9.5% annualized. The VPCI improved reliability, giving profitable signals over 65 percent of the time, compared to OBV at only 42.86 percent. Another consideration in evaluating performance is risk. The VPCI had less than half the risk as measured by volatility, with a standard deviation of 7.42 compared to 17.4 for OBV. It is not surprising, then, that the VPCI had much better risk-adjusted rates of return. The VPCI's Sharpe Ratio from inception was .70 and its profit factor was 2.47, compared to OBV with a -0.09 Sharpe Ratio and a profit factor of less than 1. Admittedly, this testing environment
What if an investor had just used the MACD buy and sell signals within this same system, without utilizing the VPCI information? In this example, the investor would have lost out on nearly 12% of annualized return, the difference between the VPCI's positive 8.11% and the MACD's negative 3.88% rate of return, while significantly increasing risk. What if this investor had just employed a buy-and-hold approach? Although this investor would have realized a slightly higher return, he or she would have been exposed to much greater risks. The VPCI strategy returned nearly 90% of the buy-and-hold strategy's return while taking about 60% less risk as measured by standard deviation. Looking at risk-adjusted returns another way, the five-year Sharpe Ratio for the SPDR S&P 500 was only .1, compared to .74 for the VPCI system. Additionally, the VPCI investor would have been invested only 35% of the time, allowing the investor the opportunity to invest in other investments. During the 65% of the time the investor was not invested, he or she would have needed only a 1.84% money-market yield to exceed the buy-and-hold strategy. Moreover, this investor would have experienced a much smoother performance, without such precipitous capital drawdowns. The worst annualized VPCI return was a measly -2.71%, compared to the underlying investment's worst year of -22.81%, a difference of more than 20% in the rate of return. If an investor had invested in a money-market instrument while not invested in the SPDR S&P 500, this VPCI strategy would not have experienced a single down year.
Table 3 Annual Returns by Year
Other Applications
Further testing not covered in this research report suggests the VPCI may be used broadly across most markets exhibiting clear and reliable price and volume data, such as individual equities, exchange-traded funds, and broad indices. The raw VPCI calculation may also be used as a multiplier or divider in conjunction with other indicators, such as moving averages, momentum indicators, or raw price and volume data. For example, if an investor has a trailing stop-loss order set at the five-week moving average of the lows, one could divide the stop price by the VPCI calculation. This would lower the price stop when price and volume are in confirmation, increasing the probability of keeping an issue under accumulation. However, when price and volume are in contradiction, dividing the stop loss by the VPCI would raise the stop price, preserving more capital. Similarly, using the VPCI as a multiplier for other price, volume, and momentum indicators may not only improve reliability but also increase responsiveness.
Conclusion
The VPCI reconciles volume and price as determined by each of their proportional weights. This information may be used to deduce the likelihood of a current price trend continuing or reversing. I believe this study clearly demonstrates that adding the VPCI indicator to a trend-following system resulted in consistently improved performance across all major areas measured by the study. It is my opinion that, in the hands of a proficient investor, the Volume Price Confirmation Indicator is a capable tool providing information that may be useful in accelerating profits, reducing risk, and empowering the investor toward sound investment decisions.
John Ehlers
3
Background
The primary purpose of technical analysis is to observe market events and tally their consequences to formulate predictions. In this sense market technicians are
dealing with statistical probabilities. In particular, technicians often use a type of indicator known as an oscillator to forecast short-term price movements.
An oscillator can be viewed as a high-pass filter in that it removes lower-frequency trends while allowing the higher-frequency components, i.e., short-term price swings, to remain. Moving averages, on the other hand, act as low-pass filters, removing short-term price movements while retaining longer-term trend components. Thus moving averages function as trend detectors, whereas oscillators act in the opposite manner, "de-trending" the data in order to enhance short-term price movements. Oscillators and moving averages are filters that convert price inputs into output waveforms that magnify or emphasize certain aspects of the input data. The process of filtering necessarily removes information from the input data, and its application is not without consequences.
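As a minimal illustration of this filtering view (a sketch only, not any particular published indicator): the moving average keeps the slow trend component, and subtracting it from price removes the trend and leaves the short-term swings.

import pandas as pd

def low_pass(close: pd.Series, length: int = 20) -> pd.Series:
    """Simple moving average: retains the longer-term trend component."""
    return close.rolling(length).mean()

def high_pass(close: pd.Series, length: int = 20) -> pd.Series:
    """Price minus its moving average: de-trends the data, leaving short-term swings."""
    return close - low_pass(close, length)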
A significant issue with oscillators (as well as moving averages) for short-term trading is that they introduce lag. While academically interesting, the consequences of lag are costly to the trader. Lag stems from the fact that oscillators are by design reactive rather than anticipatory. As a result, traders must wait for confirmation, a process that introduces additional lag into the ability to take action. It is now widely accepted that classical oscillators can be very accurate in hindsight but are typically inadequate for forecasting future short-term market direction, in large part due to lag.
1. Model the market data as a sine wave and shift the modeled waveform into the future by generating a leading cosine wave from it.
2. Apply a transform to the detrended waveform to isolate the peak excursions, i.e., rare occurrences – and anticipate a short-term price reversion from the peak.
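A quick check of approach 1 under its own modeling assumption: if the detrended output follows BP(t) = sin(2*pi*t/Length), then, because the derivative of a sine is a cosine that leads it by a quarter cycle,

(Length / (2*pi)) * (BP(t) - BP(t-1)) is approximately cos(2*pi*t/Length),

which is exactly how the Lead line is generated in one of the strategy listings in the appendices: Lead = (Length / 6.28318)*(BP - BP[1]), where 6.28318 is 2*pi.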
Each of these approaches will be examined below. However, it is instructive to begin with an analogy for visualizing a theoretical sine wave PDF and then to examine PDFs of actual market data. As will be shown, market data PDFs are neither Gaussian, as commonly assumed, nor random, as asserted by the Efficient Market Hypothesis.
Measuring Probability Distribution Functions
An easy way to visualize how a PDF is measured, as in Figure 2B, is to envision the waveform as beads strung on parallel horizontal wires on vertical frames, as shown in Figure 2A. Rotate the wire frame clockwise 90 degrees (a quarter turn) so the horizontal wires are now vertical, allowing the beads to fall to the bottom. The beads stack up in Figure 2B in direct proportion to their density at each horizontal wire in the waveform, with the largest number of occurrences at the extreme turning points of +1 and -1.
Measuring PDFs of detrended prices using a computer program is conceptually identical to stacking the beads in the wire-frame structure. The amplitude of the detrended price waveform is quantized into "bins" (i.e., the vertical wires), and then the occurrences in each bin are summed to generate the measured PDF. The prices are normalized to fall between the highest point and the lowest point within the selected channel period.
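A minimal Python sketch of that binning procedure; it mirrors the normalized-channel EasyLanguage listing in Appendix A (100 bins and the same 4-bar smoothing), and assumes the input is a pandas Series of closing prices.

import numpy as np
import pandas as pd

def channel_pdf(close: pd.Series, length: int = 20, nbins: int = 100) -> np.ndarray:
    """Measured PDF of the smoothed position of price within its
    highest-high/lowest-low channel over the past `length` bars."""
    hh = close.rolling(length).max()
    ll = close.rolling(length).min()
    psn = 100 * (close - ll) / (hh - ll)                 # position in the channel, 0..100
    psn = (psn + 2 * psn.shift(1) + psn.shift(2)) / 4    # same smoothing as Appendix A
    counts, _ = np.histogram(psn.dropna(), bins=nbins, range=(0, 100))
    return counts / counts.sum()                         # normalize the bin counts to a PDF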
Figure 3 shows actual price PDFs measured over thirty years using the continuous contract for US Treasury Bond Futures. Note that the distributions are similar
to that of a sine wave in each case. The non-uniform shapes suggest that developing short term trading systems based on sine wave modeling could be successful.
Normalizing prices to their swings within a channel period is not the only way to detrend prices. An alternative method is to sum the up-day closing price changes independently from the down-day changes; the differential of these sums can then be normalized by their sum. The result is a normalized channel, and is the generic form of the classic RSI indicator. The measured PDF using this method of detrending on the same 30 years of US Treasury Bond data is shown in Figure 4. In this case, the PDF is more like the familiar bell-shaped curve of a Gaussian PDF. One could conclude from this that a short-term trading system based on cycles would be less than successful, as the high-probability points are not near the maximum-excursion turning points.
Unlike an oscillator, the Fisher Transform is a nonlinear function with no lag. The transform expands the amplitudes of the input waveform near the -1 and +1 excursions so they can be identified as low-probability events. As shown in Figure 6, the transform is nearly linear when not at the extremes. In simple terms, the Fisher Transform doesn't do anything except at the low-probability extremes. Thus it can be surmised that if low-probability events can be identified, trading strategies can be employed to anticipate a reversion to normal probability after their occurrence.
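For reference, a sketch of the transform itself (the standard Fisher Transform formula, with the same clipping used in the Appendix D listing; the input is assumed to be already normalized to the -1 to +1 range):

import numpy as np

def fisher_transform(x: np.ndarray) -> np.ndarray:
    """Nearly linear near zero; expands amplitudes near +/-1 so that the
    low-probability extreme excursions stand out."""
    x = np.clip(x, -0.999, 0.999)            # avoid the singularities at +/-1
    return 0.5 * np.log((1 + x) / (1 - x))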
The Monte Carlo results for the RSI strategy are shown in Figure 9. The most likely annual profit is $17,085 and the most likely maximum drawdown is $6,219. Since the profit is higher and the drawdown is lower than for the Channel strategy, the reward-to-risk ratio is much larger, at 2.75. The RSI strategy also has a better chance, 96.6%, of breaking even or better on an annualized basis.
The Monte Carlo results for the Fisher strategy are shown in Figure 10. The most likely annual profit is $16,590 and the most likely maximum drawdown is $6,476. The reward-to-risk ratio of 2.56 is about the same as for the RSI strategy. The Fisher Transform strategy also has about the same chance of breaking even or better, at 96.1%.
These studies show that the three trading strategies are robust across time and offer comparable performance when applied to a common symbol. To further
demonstrate robustness across time as well as applying to a completely different symbol, performance was evaluated on the S&P Futures, using the continuous
contract from its inception in 1982. In this case, we show the equity curve produced by trading a single contract without compounding. There is no allowance for
3 MCSPro, Inside Edge Systems, Bill Brower, 200 Broad St., Stamford, CT 06901
The robust performance of these new trading strategies is particularly striking when compared to more conventional trading strategies. For example, Figure 14 shows the equity growth of a conventional RSI trading system that buys when the RSI crosses over the 20% level and sells when the RSI crosses below the 80% level. This system also reverses position when the trade has an adverse excursion of more than a few percent from the entry price. This conventional RSI system was
optimized for maximum profit over the life of the S&P Futures Contract. Not only has the conventional RSI strategy had huge drawdowns, but its overall profit factor
was only 1.05. Any one of the new strategies I have described offers significantly superior performance over the contract lifetime. This difference demonstrates the
efficacy of the approach and the robustness of these new systems.
Appendices
Appendix A
EasyLanguage™ Code to Generate a Normalized Channel PDF

Inputs:
Length(20);

Vars:
HH(0), LL(0), Count(0), Psn(0), I(0);

Arrays:
Bin[100](0);

If CurrentBar > Length Then Begin
    HH = Close;
    LL = Close;
    For Count = 0 to Length - 1 Begin
        If Close[Count] > HH Then HH = Close[Count];
        If Close[Count] < LL Then LL = Close[Count];
    End;
    If HH <> LL Then Value1 = 100*(Close - LL) / (HH - LL);
    Psn = (Value1 + 2*Value1[1] + Value1[2]) / 4;
    For I = 1 to 100 Begin
        If Psn > I - 1 and Psn <= I Then Bin[I] = Bin[I] + 1;
    End;
    Plot1(Psn);
End;

If LastBarOnChart Then Begin
    For I = 1 to 100 Begin
        Print(File("C:\TSGrowth\ChannelPDF.CSV"), I, ",", Bin[I]);
    End;
End;

Appendix B
EasyLanguage™ Code to Compute a PDF from RSI Detrending

Inputs:
Length(10);

Vars:
CU(0), CD(0), I(0), MyRSI(0), Psn(0);

Arrays:
Bin[100](0);

If CurrentBar > Length Then Begin
    CU = 0;
    CD = 0;
    For I = 0 to Length - 1 Begin
        If Close[I] - Close[I + 1] > 0 Then CU = CU + Close[I] - Close[I + 1];
        If Close[I] - Close[I + 1] < 0 Then CD = CD + Close[I + 1] - Close[I];
    End;
    If CU + CD <> 0 Then MyRSI = 50*((CU - CD) / (CU + CD) + 1);
    Psn = (MyRSI + 2*MyRSI[1] + MyRSI[2]) / 4;
    For I = 1 to 100 Begin
        If Psn > I - 1 and Psn <= I Then Bin[I] = Bin[I] + 1;
    End;
    Plot1(Psn);
End;

If LastBarOnChart Then Begin
    For I = 1 to 100 Begin
        Print(File("C:\TSGrowth\PDF_RSI.CSV"), I, ",", Bin[I]);
    End;
End;
Appendix C
EasyLanguage™ Code to Compute a PDF by HighPass Filtering

Inputs:
HPPeriod(40);

Vars:
alpha(0), HP(0), HH(0), LL(0), Count(0), Psn(0), I(0);

Arrays:
Bin[100](0);

alpha = (1 - Sine(360 / HPPeriod)) / Cosine(360 / HPPeriod);
HP = .5*(1 + alpha)*(Close - Close[1]) + alpha*HP[1];
IF CurrentBar = 1 THEN HP = 0;

If CurrentBar > HPPeriod Then Begin
    HH = HP;
    LL = HP;
    For Count = 0 to HPPeriod - 1 Begin
        If HP[Count] > HH Then HH = HP[Count];
        If HP[Count] < LL Then LL = HP[Count];
    End;
    If HH <> LL Then Value1 = 100*((HP - LL) / (HH - LL));
    Psn = (Value1 + 2*Value1[1] + Value1[2]) / 4;
    For I = 1 to 100 Begin
        If Psn > I - 1 and Psn <= I Then Bin[I] = Bin[I] + 1;
    End;
    Plot1(Psn);
End;

If LastBarOnChart Then Begin
    For I = 1 to 100 Begin
        Print(File("C:\TSGrowth\PDF_HP.CSV"), I, ",", Bin[I]);
    End;
End;

Appendix D
EasyLanguage™ Code to Compute a PDF by HighPass Filtering and Fisher Transform

Inputs:
HPPeriod(40);

Vars:
alpha(0), HP(0), HH(0), LL(0), Count(0), Psn(0), Fish(0), I(0);

Arrays:
Bin[100](0);

alpha = (1 - Sine(360 / HPPeriod)) / Cosine(360 / HPPeriod);
HP = .5*(1 + alpha)*(Close - Close[1]) + alpha*HP[1];
IF CurrentBar = 1 THEN HP = 0;

If CurrentBar > HPPeriod Then Begin
    HH = HP;
    LL = HP;
    For Count = 0 to HPPeriod - 1 Begin
        If HP[Count] > HH Then HH = HP[Count];
        If HP[Count] < LL Then LL = HP[Count];
    End;
    If HH <> LL Then Value1 = 2*((HP - LL) / (HH - LL) - .5);
    Psn = (Value1 + 2*Value1[1] + Value1[2]) / 4;
    If Psn > .999 Then Psn = .999;
    If Psn < -.999 Then Psn = -.999;
    Fish = 16*(.5*Log((1 + Psn) / (1 - Psn)) + 3);
    If Fish < 0 Then Fish = 0;
    If Fish > 100 Then Fish = 100;
    For I = 1 to 100 Begin
        If Fish > I - 1 and Fish <= I Then Bin[I] = Bin[I] + 1;
    End;
    Plot1(Fish);
End;

If LastBarOnChart Then Begin
    For I = 1 to 100 Begin
        Print(File("C:\TSGrowth\PDF_HPFisher.CSV"), I, ",", Bin[I]);
    End;
End;
Vars:
HH(0), LL(0), Count(0), gamma(0), alpha(0), beta(0), delta(.1), BP(0), Lead(0);

If CurrentBar > Length Then Begin
    HH = Close;
    LL = Close;
    For Count = 0 to Length - 1 Begin
        If Close[Count] > HH Then HH = Close[Count];
        If Close[Count] < LL Then LL = Close[Count];
    End;
    If HH <> LL Then Value1 = (Close - LL) / (HH - LL) - .5;

    beta = Cosine(360 / Length);
    gamma = 1 / Cosine(720*delta / Length);
    alpha = gamma - SquareRoot(gamma*gamma - 1);
    BP = .5*(1 - alpha)*(Value1 - Value1[2]) + beta*(1 + alpha)*BP[1] - alpha*BP[2];
    Lead = (Length / 6.28318)*(BP - BP[1]);

    If MarketPosition <= 0 and Lead Crosses Over BP Then Buy Next Bar on Open;
    If MarketPosition >= 0 and Lead Crosses Under BP Then Sell Short Next Bar on Open;
End;

Vars:
alpha(0), HP(0), HH(0), LL(0), Count(0), Psn(0), Fish(0), I(0);

alpha = (1 - Sine(360 / HPPeriod)) / Cosine(360 / HPPeriod);
HP = .5*(1 + alpha)*(Close - Close[1]) + alpha*HP[1];
IF CurrentBar = 1 THEN HP = 0;

If CurrentBar > HPPeriod Then Begin
    HH = HP;
    LL = HP;
    For Count = 0 to HPPeriod - 1 Begin
        If HP[Count] > HH Then HH = HP[Count];
        If HP[Count] < LL Then LL = HP[Count];
    End;
    If HH <> LL Then Value1 = 2*((HP - LL) / (HH - LL) - .5);
    Psn = (Value1 + 2*Value1[1] + Value1[2]) / 4;
    If Psn > .999 Then Psn = .999;
    If Psn < -.999 Then Psn = -.999;
    Fish = .5*Log((1 + Psn) / (1 - Psn));

    If MarketPosition >= 0 and Fish Crosses Over UpThresh Then Sell Short Next Bar on Open;
    If MarketPosition <= 0 and Fish Crosses Under DnThresh Then Buy Next Bar on Open;
End;
Appendix F
EasyLanguage™ Code for Generic RSI Trading Strategy
Inputs:
Length(7), UpThresh(78), DnThresh(36), Rev(1.3);
Vars:
CU(0), CD(0), I(0), MyRSI(0), Psn(0);
If MarketPosition >= 0 and Psn Crosses Over UpThresh Then Sell Short
Next Bar on Open;
If MarketPosition <= 0 and Psn Crosses Under DnThresh Then Buy Next
Bar on Open;
//Disaster reversal
If MarketPosition = 1 and Low < (1 - Rev/100)*EntryPrice Then Sell Short
Next Bar on Open;
If MarketPosition = -1 and High > (1 + Rev/100)*EntryPrice Then Buy Next
Bar on Open;
End;
Bibliography
Arthur A. Merrill, "Filtered Waves," The Analysis Press, Chappaqua, NY, 1977
MCS Pro, Inside Edge Systems, Bill Brower, 200 Broad Street, Stamford, CT 06901
www.eminiz.com, Corona Charts
Jonathan Y. Stein, “Digital Signal Processing,” John Wiley & Sons, New York, 2000
Perry J. Kaufman, “New Trading Systems and Methods,” John Wiley & Sons, New York, 2005
4
Introduction
Academic finance is replete with studies supporting or denying the existence of serial correlation in securities prices.1 In effect, such studies test the weak form
efficient market hypothesis (EMH). Simply put, can investors use technical analysis to beat the market?
Before an attempt is made to answer that question, it is necessary to define “the market.” For the purposes of this paper, “the market” is the constituent stocks
of the S&P 500 Index. The S&P 500 Index, after all, is probably the most widely recognized market proxy, and in practice investors index billions of dollars to it.
S&P 500 stocks are liquid and extensively researched by a multitude of technical and fundamental analysts. Consequently, one might expect that these stocks would
represent a highly efficient segment of the stock market.
Results
Table 1 presents the results of our back tests on the custom sample described previously. The first column represents a buy and hold strategy on the S&P 500
price index over the backtest period. The second column tests our proposed strategy, %b BW. The final column tests the %b BS Strategy.
*A similar back test of %b BW on our initial biased sample (the current S&P 500 constituents) generated annualized total return of 31.6%; %b BS generated
annualized total return of 3.2%. We provide this information as an example of the potential effects of sample bias on reported system performance. Also,
note that the pronounced difference in total return between the two strategies appears to be relatively consistent regardless of the sample.
Figure 3. BW (Buy-Weakness)
*We calculate the alpha depicted in Figure 3 differently than the alpha reported in Table 1. The alpha in Figure 3 is a linear regression estimate modeled as Strategy Return = alpha + Beta (Return on the S&P 500). The alpha reported in Table 1 is the annualized mean difference of paired comparisons on 4287 observations of daily returns on the S&P 500 versus a unit of equity in the %b BW strategy.
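A sketch of the Figure 3 style estimate described in this note, assuming two aligned arrays of daily returns (the variable names are illustrative):

import numpy as np

def regression_alpha(strategy_returns: np.ndarray, market_returns: np.ndarray):
    """Fit Strategy Return = alpha + Beta * (Return on the S&P 500) by ordinary least squares."""
    beta, alpha = np.polyfit(market_returns, strategy_returns, deg=1)
    return alpha, beta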
Figure 4.
Discussion
The evidence supports our thesis that a rotational trading algorithm using relative %b rankings can select stock portfolios that beat the risk-adjusted return on
the S&P 500. Moreover, those portfolios consist only of S&P 500 constituent stocks. For perspective, a search of the expansive Morningstar mutual fund database
in February 2007 reveals that just three mutual funds had an annualized rate of return in excess of 18% over the past fifteen years. None of those returns exceeded
19%. The %b BW Strategy* returned 24.1% annualized with surprisingly little risk relative to the benchmark. The charts in Figures 3 and 4 as well as the Sharpe
and Information Ratios reported in Table 1 provide the relevant risk assessment analytics.
*The historical performance of a simulated trading strategy is not a guarantee of future returns.
Admittedly, we have presented back-test results that fly in the face of the well-worn trader's axiom "Cut your losses short; let your profits run." Table 2 confirms that the %b BW system offers no protection against ruinous losses at the asset level. The diversification of an equal-weight 40-stock portfolio affords the only downside protection, a striking demonstration of the critical importance of position sizing and diversification in system development.
While our results are statistically significant, the economic significance is less straightforward. The system trades frequently, averaging over three trades per day. For taxable investors, returns would be taxed entirely as unfavorable short-term capital gains. From Table 1 we see the average profit per trade is 1.2%, net of an assumed 0.2% round-trip transaction cost. That is likely satisfactory only for a trader using an efficient broker9, and, perhaps more importantly, trade size must be sufficiently small to have only a modest impact on market prices. Assessing potential slippage10 is clearly an important consideration when evaluating any system.
Finally, the results suggest that investors overreact, possibly to news or changing prices, in a three-month (65-trading-day) frame of reference. By design, our indicator look-back period corresponds with the three-month earnings report cycle for stocks as well as the performance reporting cycle for many asset managers, capturing possible earnings-announcement and window dressing11 effects.
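To make the mechanics concrete, here is a rough sketch of one reading of the %b Buy-Weakness rotation. It assumes the standard Bollinger %b definition, the 65-trading-day look-back discussed above, and the equal-weight 40-stock portfolio mentioned in the Discussion; the authors' exact ranking, entry, and exit rules are not reproduced here.

import pandas as pd

def percent_b(close: pd.DataFrame, lookback: int = 65, width: float = 2.0) -> pd.DataFrame:
    """Bollinger %b for each stock: position of price within lookback-period bands."""
    mid = close.rolling(lookback).mean()
    sd = close.rolling(lookback).std()
    upper, lower = mid + width * sd, mid - width * sd
    return (close - lower) / (upper - lower)

def buy_weakness_candidates(close: pd.DataFrame, n_stocks: int = 40) -> list:
    """Equal-weight candidate list: the n_stocks constituents with the lowest current %b."""
    pb = percent_b(close).iloc[-1]
    return list(pb.nsmallest(n_stocks).index)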
H. Parker Evans, CFA, CFP, CMT is a Vice President and Senior Portfolio Manager with
Fifth Third Private Bank in Clearwater, Florida. Parker has over twenty years experience
as a professional investment advisor. He has a penchant for technical analysis of alpha
persistence and mean reversion in securities prices.
5
Goichi Hosoda invented the cloud charts, or Ichimoku Kinko Hyo charts, in Japan before World War II. The method uses moving averages based on the middle of the range over a period of time, then shifts some of the lines into the past and others into the future.
In this paper, we will compare hypothetical trading results in some US commodity futures markets for the base moving-average crossover alone and for a few combinations of the different filters provided by the method.
I Description/Overview
Description/Overview of the cloud lines, and basic trade signals derived from these lines.
I-A Overview
A newspaper writer, Goichi Hosoda, invented the cloud charts, or Ichimoku Kinko Hyo charts, in Japan before World War II. The various lines are built from the middle of the range over different periods, with some of the lines shifted into the future. One more line is made using the close, plotted in the past.
Two of the lines are projected forward. The cloud is formed by the space between those two lines. As it is drawn in the future, it provides a unique, visual idea of
support and resistance in the future, not available in other techniques.
This paper focuses on the five basic lines of the cloud chart, which are readily available in many charting systems. Using back testing, the author compares
hypothetical results of trading systems based on the basic crossover in the method, using various combinations of the five lines, as added trade entry and/or exit filters.
Hosoda’s original definitions were based on a six-day working week in Japan when he developed the method (which included more than the cloud charts presented
below). As the author has adapted the cloud charts to a five-day working week in daily use, all the tests are based on the five-day working week assumption, except the
last one, which uses the six-day working week.
Chikou Span/Lagging Span: Today’s close, plotted 22 trading days behind. (The period is changed to 26 trading days, when operating in a six-day working week
environment.)
The position of the Chikou Span relative to prices gives an idea of market strength: when the Chikou Span is above the market prices, it is an indication of market
strength (and vice versa for weakness). In other terms, prices 22 (26) days ago are relevant, and represent current support/resistance. (see figure 2)
Figure 2: Chikou Span on the Crude Oil, May 2006 contract. The point marked (1) on the chart
shows a point where Chikou Span crosses prices.
Senkou Span A: (Tenkan-Sen + Kijun-Sen)/2, plotted 22 trading days ahead. (The period is changed to 26 trading days, when assuming a six-day working week.)
Senkou Span B: (Highest high + lowest low)/2, for the past 44 days, plotted 22 days ahead. (The period is changed to 56 trading days, plotted 26 days ahead, in the case of a six-day working week.)
The area between Senkou Span A and Senkou Span B is colored and represents “the cloud.” (see figure 3) It represents key support (if the cloud sits below prices)
or resistance (if the cloud sits above prices).
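A sketch of the line construction described above, using the author's five-day-week settings for the shifted lines (22 days ahead or behind, 44 days for Senkou Span B). The Tenkan-Sen and Kijun-Sen look-back periods are not stated in this excerpt, so the defaults below (7 and 22) are illustrative placeholders only.

import pandas as pd

def ichimoku_lines(high: pd.Series, low: pd.Series, close: pd.Series,
                   tenkan: int = 7, kijun: int = 22,     # assumed look-backs, not from the text
                   shift: int = 22, senkou_b_n: int = 44) -> pd.DataFrame:
    mid = lambda n: (high.rolling(n).max() + low.rolling(n).min()) / 2
    tenkan_sen = mid(tenkan)
    kijun_sen = mid(kijun)
    senkou_a = ((tenkan_sen + kijun_sen) / 2).shift(shift)   # plotted `shift` days ahead
    senkou_b = mid(senkou_b_n).shift(shift)                  # (highest high + lowest low)/2 over 44 days, 22 ahead
    chikou = close.shift(-shift)                             # today's close plotted 22 days behind
    return pd.DataFrame({"tenkan": tenkan_sen, "kijun": kijun_sen,
                         "senkou_a": senkou_a, "senkou_b": senkou_b, "chikou": chikou})
# The cloud is the area between senkou_a and senkou_b.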
Figure 3: Senkou Span A and B on the Crude Oil, May 2006 contract. The cloud is the colored
area between both lines.
If the market is under a declining cloud, the current downtrend has a better chance to continue.
A very thin area in the cloud is a point of vulnerability of the current trend: with both lines close to each other, only a small move will be needed for the market
to cross the cloud. (see figure 5)
Figure 5: Natural Gas, February 2008 contract. The points marked (1) are areas where the cloud is very thin.
The position of the Kijun Sen/Tenkan Sen crossover (see figure 1) relative to the cloud is significant. For example, a bearish crossover is a weak signal if it
happens above the cloud (i.e. above significant support), normal if it happens inside the cloud, and strong if it happens below the cloud (see figure 6). The latter can
be interpreted as an attempt to capture resumptions of the major trend, after short-term counter-trend corrective moves (and vice versa for buy signals).
Figure 6: Crude oil, February 2008 contract. The first Kijun Sen/Tenkan Sen crossover is above the cloud, which
is a weak sell-signal. The second crossover is right on the upper cloud line, making it normal/strong.
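The weak/normal/strong grading just described can be reduced to a small rule; a sketch (the function and argument names are illustrative):

def crossover_strength(bearish: bool, cross_price: float,
                       cloud_top: float, cloud_bottom: float) -> str:
    """Grade a Kijun Sen/Tenkan Sen crossover by its position relative to the cloud:
    a bearish cross above the cloud is weak, inside is normal, below is strong,
    and symmetrically for bullish crosses."""
    if cross_price > cloud_top:
        position = "above"
    elif cross_price < cloud_bottom:
        position = "below"
    else:
        position = "inside"
    if position == "inside":
        return "normal"
    if bearish:
        return "strong" if position == "below" else "weak"
    return "strong" if position == "above" else "weak"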
Figure 7. The cloud and Fibonacci retracements. Nymex Natural Gas, September 2007: note how both the cloud
and the 50% retracement stopped the correction.
II - Tests
Testing Methodology:
We used US commodity futures contracts. The ending date for all tests was October 3, 2007. For all contracts, we used equalized continuations, going back 1,000 days from October 3, 2007. The results are theoretical, as the impact of spreads is removed; this impact can be substantial in commodities. However, this was necessary to provide a sufficient number of trades for comparison purposes. For simplification, we used continuations based on trading activity. The rollover from one contract to the next is made at the time of higher tick activity in the next contract. The adjustment (which removes the spread) is the difference between the two contracts at the time of rollover. The tests assumed no slippage, and no transaction costs were included. (see Table 1)
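One common way to implement the splice described above is sketched below; choosing the roll date from tick activity, as the author does, is not shown, and the series names are illustrative.

import pandas as pd

def equalized_continuation(front: pd.Series, next_contract: pd.Series, roll_date) -> pd.Series:
    """Splice two contracts at roll_date, shifting the expiring contract's history
    by the spread at rollover so the continuation has no artificial gap."""
    adjustment = next_contract.loc[roll_date] - front.loc[roll_date]   # removes the spread
    adjusted_front = front.loc[:roll_date] + adjustment
    return pd.concat([adjusted_front.iloc[:-1], next_contract.loc[roll_date:]])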
II-A
Trade entry on Kijun Sen/Tenkan Sen crossover, with no other condition. Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition: this signal is named "tk1" in this paper. (see figure 8)
Figure 8: Tk1, Example on Copper futures. (The indicators below the price chart “tk_sht1” and “TK_Lg1” represent the total
profit as a line, and the closed profit as a histogram, for respectively the short system and the long system.)
The net profits are $146,533 for the long side and -$57,191 for the short side. The total profit, $89,342, was the highest of the six cases studied. However, this is immediately mitigated by the fact that the total loss on the short side is the second largest of the twelve cases studied, and this method also has the highest maximum drawdown, at $352,000. This illustrates that this method would benefit from filters aiming to reduce risk and preserve capital.
Not surprisingly, this method also has roughly double the number of trades of the other methods (more than double in every case but one, where it is very close to double).
II-B
Trade entry on Kijun Sen/Tenkan Sen crossover, adding both the Chikou Span and the cloud as filters. Exit when either condition is no longer fulfilled. This signal is named "tk2" in this paper. (see figure 9)
Figure 9. Tk2: example of the short signal on Copper futures. (The indicator below the price chart “tk_sht2” represents the
total profit (green) or loss (red) as a line, and the closed profit (green) or loss (red) as a histogram, for the short system.)
II-C
Trade entry on Kijun Sen/Tenkan Sen crossover, adding the position relative to the cloud as a filter (above the cloud for buys, under the cloud for sells). Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition. This signal is named "tk3" in this paper. (see figure 10)
What is the impact of delaying the entry until the cloud confirms the outlook?
Figure 10: Three examples of unfavorable long trades on tk3, on copper futures. The filter based on the close, and not on the position of the crossover relative to the cloud, results in whipsaws in this kind of situation.
(The indicators below the price chart “tk_sht3”and “tk_lgt3” represent the total profit (green) or loss (red) as a line, and the
closed profit (green) or loss (red) as a histogram, for respectively the short system and the long system)
We can think of this system as attempting to capture the resumption of established medium-term trends after the end of short-term counter-trend corrections.
This method provides the second highest total net profit. This comes from the highest average profit of all six methods and the second highest percentage of winning trades. The number of trades is less than half the number in the first test, "tk1", which did not use the cloud or the Chikou Span as an entry filter, and this reduction was quite beneficial.
Despite the entry filter, the average trade duration is the second highest of all six methods. The average loss is also the second highest, which substantially impacts the overall results of this method. A comparison with the second test, "tk2", immediately above, suggests that this method would strongly benefit from earlier exits of losing trades.
Separately, the trade system used the position of the daily settlement relative to the cloud as the trade entry filter, not the position of the crossover relative to the cloud, which would have been a stronger filter. This was chosen for easier calculation. While it may not seem like a substantial difference, Figure 10 on the previous page illustrates that trades are entered in sideways markets, where the cloud is not at its best. When the cloud is thin, and the crossover occurs either inside or under the cloud, strength toward the close can result in a close above the cloud, and longs are entered. The resulting losses remain small, but without them, tk3 would have had higher total profits in this test.
II-D
Trade entry on Kijun Sen/Tenkan Sen crossover, adding the position relative to the cloud as a filter (under the cloud for buys, above the cloud for sells). Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition. This signal is named "tk4" in this paper. (see figure 11)
What is the impact of an aggressive entry, attempting to capture the move early in the trend?
This resulted in the lowest total net profit in this test (see Table 1), with a low percentage of winners. However, closer examination reveals that this system ranked fifth, fairly low, for trade count, trade duration, average loss, and maximum drawdown, and ranked second for average profit, which are all potentially encouraging results.
The Achilles heel of this system is that short-term corrections result in trade exits, but if the longer-term trend is still up, the trade will not necessarily be re-entered (for example, if the market is no longer under the cloud but has risen inside, or above, the cloud for long positions, and vice versa for short positions). A substantial part of the trend, the later part, when it is confirmed, is missed. Other re-entry conditions could be considered in a more complex system.
II-E
Trade entry on Kijun Sen/Tenkan Sen crossover, adding the Chikou Span as a filter (Chikou Span above prices for buys, and under prices for sells). Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition. This signal is named "tk5" in this paper. (see figure 12)
Figure 12: Tk5, example on Copper futures (The indicators below the price chart “tk_sht5” and “tk_lgt5” represent the total
profit (green) or loss (red) as a line, and the closed profit (green) or loss (red) as a histogram, for respectively the short system
and the long system)
This method produced middle-of-the-road results and had average scores. Its weak points were the fairly low average profit and the high trade count.
Worse, this system had the unwanted privilege of having the highest average loss, the lowest average profit, and the lowest percentage of winning trades of all systems in our sample.
III - General Conclusion
The cloud system is typically used as a visual method by the analyst, and it has shown itself to be a worthwhile analytical tool. The author especially likes that the cloud method uses the entire range, as opposed to the market closing price, the default source for most typical Western indicators. As such, cloud charts are one way to diversify the data used. However, there are times when the cloud chart is of little use, other than highlighting the lack of a medium-term trend. Indeed, this is a trend-following system, and as such, like a typical Western moving-average crossover system, it needs a medium-term trend to exist. In medium-term sideways markets, the analyst will immediately "see" that the cloud is a tangled mess of lines, and will switch to another method. This would need to be somehow replicated in trading systems.
When considering building a trading system with this method, the author recommends testing with the addition of a trend indicator, such as the ADX. Further, having tighter exit conditions (as in tk2) dramatically reduced the maximum drawdown for both longs and shorts, making it the best method in our sample in terms of risk management and capital preservation, and the author strongly advocates that any further testing be made with tighter exit signals than the simple reverse Kijun Sen/Tenkan Sen crossover.
References
Elliott, Nicole and Harada, Yuichiro, April 2001, Market Technician, Issue 40, Society of Technical Analysts
Elliott, Nicole, Option Strategies designed around Ichimoku Kinko Hyo Clouds, October 2002, International Federation of Technical Analysts
Muranaka, Ken, 2000, Ichimoku charts, Technical Analysis of Stocks and Commodities
Nippon Technical Analysis Association, 1989, Analysis of Stock Prices in Japan
Disclaimer
The opinions, views and forecasts expressed in this report reflect the personal views of the author(s) and do not necessarily reflect the views of Newedge USA, LLC or any other
branch or subsidiary of Newedge Group (collectively, “Newedge”). Newedge, its Affiliates, any of their employees may, from time to time, have transactions and positions in, make a
market in or effect transactions in any investment or related investment covered by this report. Newedge makes no representation or warranty regarding the correctness of any information
contained herein, or the appropriateness of any transaction for any person. Nothing herein shall be construed as a recommendation to buy or sell any financial instrument or security.
Véronique Lashinski, CMT is a Vice President, Senior Research Analyst with Newedge
USA, LLC, and is responsible for producing fundamental and technical analysis on
commodity futures.
Introduction
Prior research has shown the ability of various momentum strategies to generate excess returns at the firm, industry, and country level, but little research has been
done using investment style data at the index level. (Swinkels [2004] provides an informative survey of the momentum literature.) Our paper extends this literature
by examining whether momentum extends to Russell style indexes. This contribution is meaningful because it provides a diversified, index-based low-cost trading
strategy to exploit such momentum.
At the firm level, Lewellen [2002] shows that stocks partitioned based on size and book-to-market ratio exhibit momentum as strong as that in individual stocks
and industries. Also, Chen and De Bondt [2004] provide evidence of style momentum within the S&P-500 index by constructing portfolios based on style criteria.
However, constructing such portfolios can be costly, significantly eroding returns. Our focus is on indexes easily represented by exchange-traded funds, thereby producing a significant cost advantage and providing a low-expense, diversified means to exploit style momentum by incorporating relative style index performance into tactical allocation strategies.
Using Russell Large-Cap and Small-Cap style index data, Arshanapalli, Switzer, and Panju [2007] develop a market timing strategy using a multinomial timing
model based on macroeconomic and fundamental public information. While their multinomial model does include prior market return variables to time their style
index allocation decisions, their paper is significantly different from ours in several ways. First, they do not focus on the importance of style index momentum, nor
do they discuss the significance of the prior market return variables in their model. Secondly, the beauty of our market-timing strategy is its simplicity. Only the
raw, prior return of the Russell style indexes is required to make the asset allocation decision. In contrast, Arshanapalli, Switzer, and Panju [2007] require variables
such as the Change in the Conference Board Consumer Confidence Index, U.S. Bond Default Premium, U.S. Bond Horizon Premium, S&P 500 Earnings Yield Gap,
Change in the Consumer Price Index, etc. to construct their model. Further, their model requires generating conditional probabilities using a multinomial logit and the
assigning of arbitrary cutoff probabilities when constructing trading rules. Additionally, Arshanapalli, Switzer, and Panju [2007] do not analyze short or long minus
short portfolios, nor do they consider Russell Mid-Cap Value/Growth portfolios. Lastly, the vast majority of their analysis covers a shorter time period, 1979-2000 versus 1972-2005, which fails to include the two most severe post-World War II market declines, specifically the 1973-1974 and 2000-2002 crashes. The inclusion of those time periods further verifies the robustness of our analysis.
We are also motivated to test the existence of style index momentum due to the proliferation of style index benchmarks. Both Lipper and Morningstar use style benchmarks to rate mutual fund performance. Our decision to focus specifically on Russell style indexes was influenced by their popularity: as of 2006, 54.5% of institutionally managed U.S. equity funds (over $3.8 trillion in assets) were benchmarked against Russell indexes.1
Furthermore, Barberis and Shleifer [2003] provide a theoretical basis for our analysis. They model an economy with fundamental traders and positive feedback traders that chase relative style returns. The result is that "[p]rices deviate substantially from fundamental values as styles become popular or unpopular" (p. 190). Our results, using Russell style index data, provide additional support for their model.
Results are generally positive and statistically significant, especially for the shorter holding periods. Long-Short returns across various formation periods peak at 12 months of prior performance. Across various holding periods, the Long-Short returns peak at 1 month. Therefore, the top performing Long-Short portfolio was 12,1, with an average monthly return of 0.85% (p-value < 1%). Based on these results, the remainder of the paper focuses on portfolios composed of one style index with a 12 month formation and one month holding period. While this 12,1 portfolio was the highest performer for the 34 year period, when various time periods were analyzed it was not always the top performer4. However, the top performing strategy was consistently driven by medium-term momentum, with prior performance in the 8 to 14 month range.
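A sketch of the 12,1 construction described above, assuming a monthly total-return DataFrame with one column per Russell style index (six columns); each month the indexes are ranked on their prior 12-month compounded return, and the top-ranked index is held long and the bottom-ranked index short for the following month.

import pandas as pd

def long_short_12_1(monthly_returns: pd.DataFrame) -> pd.Series:
    """12,1 style-index momentum: long the top-ranked index, short the bottom-ranked,
    re-ranked every month on the trailing 12-month return."""
    formation = (1 + monthly_returns).rolling(12).apply(lambda x: x.prod() - 1)
    ranks = formation.shift(1).dropna(how="all")      # rank as of the end of the formation period
    spread = []
    for date, row in ranks.iterrows():
        top, bottom = row.idxmax(), row.idxmin()
        spread.append(monthly_returns.loc[date, top] - monthly_returns.loc[date, bottom])
    return pd.Series(spread, index=ranks.index)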
Exhibit 2 shows the monthly level and persistence of the return outperformance and underperformance for the six portfolios selected based on the ranking of their
prior 12 month performance relative to the average of the six Russell indexes. The average monthly return is presented for each of the 12 months of the formation
period and for 36 months after each style index is ranked. For the top and bottom ranked portfolios the average cumulative 12 month prior return was 28.10% and
1.12%, respectively. For the first month of the holding period the average return for the top and bottom portfolios was 1.57% and 0.68%, respectively.
All portfolios revert toward the mean, but the portfolio with the greatest (lowest) prior 12 month relative performance exhibits the greatest outperformance (underperformance) persistence. This persistence is particularly pronounced for the top style index ranked by 12 month formation period performance, which continues to outperform all other portfolios for 14 months. Also, the top and bottom ranked portfolios have the greatest spread between portfolio performance and the average index return in the first month of the holding period, which is consistent with the results in Exhibit 1.
Using Fama-French 3-factor models we further analyze the top, bottom, and Long-Short 12,1 portfolio returns over the 34 year period5. Exhibit 4 reports a
monthly alpha of 0.53% (6.60% annualized) for the top 12,1 portfolio and -0.41% (-4.81% annualized) for the bottom 12,1 portfolio, both statistically significant
with p-values < 1%. The Long-Short portfolio produced a monthly alpha of 0.45% (5.56% annualized) which was statistically significant at the 5% level. These
results again provide evidence of momentum in style indexes even after controlling for market, size, and book-to-market factors.
Conclusion
Style index momentum is particularly interesting since it provides a diversified, low-cost trading strategy to exploit it. This inexpensive and diversified option
provides the opportunity for money managers, regardless of the amount of assets under management, to include such strategy into their tactical asset allocation
decisions. Such style index momentum trading strategies have outperformed on both a raw and risk-adjusted return basis, with the long minus short portfolio
generating an average 9.25% annual return over the 34-year period analyzed. Although the excess returns vary, they are robust through time and after controlling for
potentially confounding effects.
R_i,t - R_rf,t = α_i + β_i (R_m,t - R_rf,t) + s_i SMB_t + h_i HML_t + ε_i,t

where the dependent variable (R_i,t - R_rf,t) is the 12,1 portfolio return minus the one-month Treasury Bill rate; R_m,t - R_rf,t is the market factor (CRSP value-weighted index minus the one-month Treasury Bill rate); SMB (small minus big) is the size factor; and HML (high minus low) is the book-to-market factor. The α_i represents the 12,1 portfolio return in excess of the one-month Treasury Bill rate that is not explained by the risk factors in the model.
Endnotes
1 Russell indexes Rank #1 as Institutional Benchmarks, http://www.russell.com/news/Press_Releases/PR20060629_US_p.asp
2 We would like to thank Jason Karceski for providing us with the constructed index data from January 1969 to December 1996 used in Chan, Karceski, and
Lakonishok [2000].
3 We analyzed holding multiple indexes simultaneously, but only single index portfolios are reported due to larger momentum and significance relative to
multiple index holdings.
4 We evaluated all formation periods from -36 to -1 months and holding periods from +1 to +36 months. However, for brevity we only report months at common
breakpoints.
5 We would like to thank Kenneth French for providing HML and SMB factor data on his website, http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/
data_library.html
References
Arshanapalli, Bala G., Lorne N. Switzer, and Karim Panju. 2007. “Equity-Style Timing: A Multi-Style Rotation Model for the Russell Large-Cap and Small-Cap
Growth and Value Style Indexes.” Journal of Asset Management, Vol. 8: 9–23
Barberis, Nicholas, Andrei Shleifer, and Robert Vishny. 1998. “A Model of Investor Sentiment.” Journal of Financial Economics, Vol. 49, No. 3 (September):
307–343
Chan, Louis K.C., Jason Karceski, and Josef Lakonishok. 2000. “New Paradigm or Same Old Hype in Equity Investing?” Financial Analysts Journal, Vol. 56, No.
4 (July/August): 23–36
Chen, Hsiu-Lang, and Werner De Bondt. 2004. “Style Momentum Within the S&P-500 Index.” Journal of Empirical Finance, Vol. 11, No. 4 (September):
483–507
Lewellen, Jonathan. 2002. “Momentum and Autocorrelation in Stock Returns.” The Review of Financial Studies, Vol. 15, No. 2, (Special Issue: Conference on
Market Frictions and Behavioral Finance): 533–563
Swinkels, Laurens. 2004. “Momentum Investing: A Survey.” Journal of Asset Management, Vol. 5, No. 2: 120–143
William DeShurko, CFP is the author of the personal finance book, The Naked Truth about
Your Money, and has owned his own investment management firm since 1993.
Samuel Benner
Panic
Panics in the commercial and financial world have been compared to comets in the astronomical world. It has been said of comets that they have no regularity of
movement, no cycles, and that their movements are beyond the domain of astronomical science to find out. However, the writer claims that Commercial Revulsions
in this country, which are attended with financial panics, can be predicted with much certainty; and the prediction in this book, of a commercial revolution and
financial crisis in 1891 is based upon the inevitable cycle which is ever true to the laws of trade, as affected and ruled by the operations of the laws of natural
causes.
The panic of 1873 was a commercial revolution; our paper money was not based upon specie, and banks only suspended currency payments for a time in this crisis.
As it is not in the nature of things in succeeding cycles to operate in the same manner, the writer claims that the “signs of the times” indicate that the coming
predicted disturbance in the business world will be not only an agricultural, manufacturing, mining, trading, and industrial revulsion, but also a financial catastrophe,
producing a universal suspension of payments and bank closures.
It is not necessary to give a detailed account of the effects of disorderly banking in our colonial and revolutionary history, and the different panics prior to the
war of 1812, to establish cycles in commerce and finance.
Such a history would fill many pages without answering the purpose of this book, and would be as intricate and difficult to understand as the prices of stocks and
gold in Wall Street.
1 [Editor Note: Benner's use of the word depression means bear market trend or economic contraction. The 1819 bank collapse from the cost of war, excess currency in circulation, and money moving out of the country is well defined. Benner's economic forecast for a recession in 1891, though written in 1884, is only off by a year. The true brilliance is Benner's cyclical analysis work throughout this book. Benner's book was first published in 1875 and is widely viewed as the first market analysis book written in North America. This excerpt begins on page 96 after a detailed study of high to low price cycles in Steel, Hogs, Corn, and Cotton. Long term cycle analysts of today's markets will find Benner provides annual market data from 1821 in this book.]
2 [Ed. note: Giving Benner plus or minus a year, he hit the cycle low and high forecast. In 1837, 1857, 1873, and 1893 a New York residential boom ended in a panic bust where housing prices collapsed. New York's housing busts were caused by recessions (or "panics") in the national economy recorded to be in the years 1837, 1857, 1873, and 1892-3. The ad at right can be found at http://www.brownstoner.com/brownstoner/archives/2007/08/not_new_yorks_f.php]
The winning author will receive a cash prize of $4,000.00 and will be invited to present their paper at an MTA seminar or chapter meeting. The paper or a summary may be published in the MTA's Journal of Technical Analysis or Technically Speaking newsletter and posted on mta.org. At the discretion of the judging panel, the authors of runner-up papers will receive certificates.
The last day to submit papers is February 6, 2009, and the winner will be selected on or before May 8, 2009. Submit inquiries to [email protected]. To view the guidelines for all submissions, please visit the Dow Award page on the mta.org website (click on Dow Award under the Activities drop-down).