
A. Fischer
SS 2008

Lecture 2: Predictability of Asset Returns

Random Walk Hypotheses, CLM 2.1


One of the biggest questions in finance is: are financial asset prices forecastable?
Considerable resources and human effort (mathematicians, economists, and others)
have been invested in trying to "beat the market". Others treat the question less
seriously; for example, the Wall Street Journal ran a contest pitting dart-throwing
monkeys against financial market professionals.
Random Walk and Martingale models provide a benchmark against which other
theoretical models can be judged. In the end it is not clear whether fundamentals-based
models outperform the simple random walk model.
• Random Walk Hypothesis
Orthogonality condition of the RW hypothesis
cov[f (rt ), g(rt+k )] = E[(f (rt ) − µt )(g(rt+k ) − µt+k )] = 0,   (1)
where f (·) and g(·) are two arbitrary functions and µt = E[f (rt )],
µt+k = E[g(rt+k )]. All random walk models and the martingale hypothesis
have this property. If f (·) and g(·) are both linear, then (1) is
equivalent to RW3. If f (·) is unrestricted and g(·) is linear, then (1)
is equivalent to the martingale model. Finally, if both are unrestricted,
then we have RW1 and RW2.
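The orthogonality condition can be checked empirically on simulated data. The sketch below (not part of the original notes; all names and parameter values are invented for the illustration) estimates cov[f(r_t), g(r_{t+k})] for two nonlinear choices of f and g on IID returns:

```python
import numpy as np

# Illustrative sketch: for IID returns, cov[f(r_t), g(r_{t+k})] should be
# approximately zero for arbitrary functions f and g.
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, size=100_000)  # simulated IID daily returns

def lagged_cov(fx, gx, k):
    """Sample covariance between f(r_t) and g(r_{t+k})."""
    x, y = fx[:-k], gx[k:]
    return np.mean((x - x.mean()) * (y - y.mean()))

f_vals = np.abs(r)   # f(r_t) = |r_t|   (a nonlinear choice of f)
g_vals = r ** 2      # g(r_t) = r_t^2   (a nonlinear choice of g)
for k in (1, 5, 20):
    print(f"k={k:2d}: cov ~ {lagged_cov(f_vals, g_vals, k):.2e}")
```

The estimated covariances are tiny at every lag, consistent with (1).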
• Martingale Model - “fair game”
The martingale model is defined by the following conditions:
E(Pt+1 |Pt , Pt−1 , · · ·) = Pt
E(Pt+1 − Pt |Pt , Pt−1 , · · ·) = 0
Tomorrow's expected price is today's price: the best forecast based on past prices
is today's price. Two points: first, this definition says something only about the
mean (i.e., the first moment) and nothing about the higher moments. Second, the
orthogonality condition applies to non-overlapping price changes: this implies the
ineffectiveness of all linear forecasting rules for future price changes based on
historical prices.
Samuelson defines the martingale model to be equal to weak-form efficiency.
The necessary assumption is market efficiency, namely that today's price
reflects all the information contained in past prices. Under this older
definition, it is not possible to make profits based on information from
past asset prices. Newer definitions recognize that there is a tradeoff
between risk and expected returns; the information about risk is captured
in the distribution's higher moments.
• ME Example #2.1: Hall’s Martingale with Consumption
Let zt be a vector containing a set of macroeconomic variables (such
as the money supply or GDP), including aggregate consumption ct for
period t. Hall's (1978) martingale hypothesis is that consumption is a
martingale with respect to zt :
E(ct |zt−1 , zt−2 , · · · , z1 ) = ct−1
This formalizes the notion in consumption theory called "consumption
smoothing": the consumer, wishing to avoid fluctuations in the standard
of living, adjusts consumption in t − 1 to the level such that no
change in subsequent consumption is anticipated. See Hayashi, page
101.
• ME Example #2.2: Naive Inflation Forecasts
Let us define a simple naive forecast using the martingale model for
inflation. The naive forecast says that the best inflation forecast for
t + h, defined as πt+h|t , is the currently observed inflation rate πt :

E(πt+h |It ) = πt .
The naive forecast is a martingale if the information set is It = {pt , pt−1 , · · ·}.
The naive forecast is frequently used as the simplest forecasting benchmark
against ARIMA models or Phillips-curve models. Most studies find that these
models have difficulty beating the naive inflation forecast; see Atkeson
and Ohanian (2001).
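A minimal sketch of the benchmark idea on invented data (the AR coefficients, seed, and series are made up for the illustration): on a highly persistent simulated inflation series, the naive martingale forecast πt+1|t = πt beats a historical-mean forecast by a wide margin.

```python
import numpy as np

# Sketch (invented data): naive forecast pi_{t+1|t} = pi_t versus the
# expanding historical mean, on a near-unit-root AR(1) inflation series.
rng = np.random.default_rng(1)
T = 500
pi = np.empty(T)
pi[0] = 2.0
for t in range(1, T):                        # rho = 0.98: highly persistent
    pi[t] = 0.04 + 0.98 * pi[t - 1] + rng.normal(0.0, 0.2)

naive_err = pi[1:] - pi[:-1]                     # forecast: pi_t
mean_fc = pi[:-1].cumsum() / np.arange(1, T)     # forecast: expanding mean
mean_err = pi[1:] - mean_fc
rmse_naive = float(np.sqrt(np.mean(naive_err ** 2)))
rmse_mean = float(np.sqrt(np.mean(mean_err ** 2)))
print(f"RMSE naive ~ {rmse_naive:.3f}, RMSE historical mean ~ {rmse_mean:.3f}")
```

The more persistent the series, the harder the naive forecast is to beat, which is the intuition behind its use as a benchmark.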

Random Walk 1: IID Increments

• RW1 Specification

pt = µ + pt−1 + εt ,   εt ∼ IID(0, σ²).   (2)

The drift term µ is interpreted as the expected change in prices, and the
error term εt is independently and identically distributed (e.g., Normal,
Weibull, Exponential). Independence ⇒ cov(εt , εt−k ) = 0 for k ≠ 0; the
reverse, however, is not true: cov(εt , εt−k ) = 0 does not imply
independence. X1 and X2 are independent random variables if and only if,
for every pair of Borel functions h1 (·) and h2 (·) (a Borel function g(·)
guarantees that yt = g(xt ) is again a random variable),
E(h1 (X1 )h2 (X2 )) = E(h1 (X1 )) · E(h2 (X2 )). This is both a necessary
and a sufficient condition for independence. One particular case of
interest is h1 (X1 ) = X1 and h2 (X2 ) = X2 ; then independence implies
E(X1 X2 ) = E(X1 ) · E(X2 ). Since cov(X1 , X2 ) = E(X1 X2 ) − E(X1 ) · E(X2 ),
this special (linear) case is exactly uncorrelatedness: it implies
cov(X1 , X2 ) = 0.
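The gap between uncorrelatedness and independence can be seen with the classic X versus X² example (a sketch, not from the notes): for symmetric X, cov(X, X²) = E[X³] = 0, yet X² is a deterministic function of X, and choosing h1(X) = X², h2(X²) = X⁴ exposes the dependence.

```python
import numpy as np

# Sketch: uncorrelated does not imply independent. For X ~ N(0,1) and
# Y = X^2, cov(X, Y) = E[X^3] = 0, yet Y is a function of X.
rng = np.random.default_rng(2)
x = rng.normal(size=1_000_000)
y = x ** 2
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))  # ~ 0
lhs = np.mean(x ** 2 * y ** 2)                     # E[X^2 Y^2] = E[X^6] = 15
rhs = np.mean(x ** 2) * np.mean(y ** 2)            # E[X^2] E[X^4] = 1 * 3 = 3
print(f"cov ~ {cov_xy:.4f}, E[h1 h2] ~ {lhs:.2f}, E[h1]E[h2] ~ {rhs:.2f}")
```

The covariance is (approximately) zero, but E[h1(X)h2(Y)] clearly differs from E[h1(X)]·E[h2(Y)], so independence fails.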
• RW1 model is stronger than the Martingale Model

Random Walk 2: Independent but not identically distributed


• RW2 Specification: Independent Increments
Over time, the assumption of identically distributed increments is
implausible, and therefore RW2 relaxes this assumption. Clearly, RW2
contains RW1 as a special case.
pt = µ + pt−1 + εt ,   εt ∼ INID(0, σi²).
The unconditional variance need not be constant: e.g., a Markov/threshold
model that switches between two variance regimes, i = 1, 2, with σ1² ≠ σ2².
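A minimal simulation sketch of an RW2 process with two variance regimes (the regime lengths and variances below are invented for the illustration):

```python
import numpy as np

# Sketch of an RW2 process: increments are independent but not
# identically distributed -- two alternating variance regimes.
rng = np.random.default_rng(3)
T = 200_000
regime = (np.arange(T) // 1000) % 2          # switch regime every 1000 steps
sigma = np.where(regime == 0, 0.5, 2.0)      # sigma_1 = 0.5, sigma_2 = 2.0
eps = rng.normal(0.0, sigma)                 # independent, not identical
p = np.cumsum(eps)                           # driftless RW2 log-price path

v1 = eps[regime == 0].var()                  # ~ sigma_1^2 = 0.25
v2 = eps[regime == 1].var()                  # ~ sigma_2^2 = 4.0
print(f"regime variances ~ {v1:.2f}, {v2:.2f}")
```

The increments remain unforecastable, but the two regimes have clearly different variances, so the distribution is not identical over time.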

Random Walk 3: Not identically and not independently distributed


• RW3 Specification: Uncorrelated Increments
pt = µ + pt−1 + εt ,   εt ∼ NINID(0, σt²),

where cov(εt , εt−k ) = 0 and cov(εt², εt−k²) ≠ 0 for k ≠ 0. RW3 contains
RW2 and RW1 as special cases.
• An example of an RW3 process is an ARCH(1) model:
rt = pt − pt−1 = εt ,   µ = 0,
εt² = α0 + α1 εt−1² + νt .
We say this process has uncorrelated increments, but it is clearly not
independent, since its squared increments are correlated.
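A simulation sketch of this, using the standard recursive formulation εt = σt zt with σt² = α0 + α1 εt−1² (this formulation is consistent with, but not spelled out in, the notes; the parameter values are invented):

```python
import numpy as np

# Sketch: simulate ARCH(1) via eps_t = sigma_t * z_t,
# sigma_t^2 = a0 + a1 * eps_{t-1}^2, then compare the autocorrelation
# of the increments with that of their squares.
rng = np.random.default_rng(4)
T, a0, a1 = 200_000, 0.2, 0.5
z = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = np.sqrt(a0 + a1 * eps[t - 1] ** 2) * z[t]

def autocorr(x, k):
    """Lag-k sample autocorrelation."""
    d = x - x.mean()
    return np.mean(d[:-k] * d[k:]) / np.mean(d * d)

rho = autocorr(eps, 1)          # ~ 0: increments are uncorrelated
rho_sq = autocorr(eps ** 2, 1)  # clearly positive: squares are correlated
print(f"corr(eps) ~ {rho:.3f}, corr(eps^2) ~ {rho_sq:.3f}")
```

The increments pass as uncorrelated, yet the squared increments are strongly autocorrelated, which is exactly the RW3-but-not-RW2 behaviour described above.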

Random Walk, Time Dependency and Stationarity

pt = µ + pt−1 + εt ,   εt ∼ IID(0, σ²).
This implies E(pt |p0 ) = p0 + µt and var(pt |p0 ) = σ²t. To show
this, consider the following steps:

pt = µ + pt−1 + εt
pt−1 = µ + pt−2 + εt−1
...
p1 = µ + p0 + ε1
Through repeated substitution,
pt = µt + p0 + εt + εt−1 + · · · + ε1 ,
with E(εt ) = 0, one obtains
E(pt |p0 ) = µt + p0 .
Next, let us consider var(pt |p0 ):
var(pt |p0 ) = E[(pt − E(pt |p0 ))² | p0 ]
            = E[(εt + · · · + ε1 )²]
with E(εt εt−k ) = cov(εt , εt−k ) = 0 for k ≠ 0 and E(εt²) = σ². This
gives
var(pt |p0 ) = Σ_{i=1}^{t} σ² = σ²t.
The simple random walk model has a time-dependent mean and variance.
The time dependency of the moments makes it a non-stationary process. To
obtain stationarity, in other words to remove the time dependence, we can
first-difference the RW process:
(1 − L)pt = ∆pt = εt ,   with µ = 0.
This yields E(∆pt ) = 0 and var(∆pt ) = σ². Note, there are other
forms of non-stationarity that have nothing to do with random walk models,
e.g.,
pt = µ + αt + εt ,   εt ∼ IID(0, σ²).

In this simple model, we have a process that is non-stationary in the mean
but stationary in the variance. Note: random walk models are non-stationary,
but not all non-stationary processes are random walk models.
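The conditional-moment results above can be verified by Monte Carlo (a sketch; the number of paths, horizon, and σ are arbitrary choices):

```python
import numpy as np

# Sketch: for a driftless random walk p_t = p_0 + sum of eps_i,
# var(p_t | p_0) grows linearly in t, while the first difference has
# constant variance sigma^2.
rng = np.random.default_rng(5)
n_paths, T, sigma = 20_000, 100, 1.0
eps = rng.normal(0.0, sigma, size=(n_paths, T))
p = eps.cumsum(axis=1)                   # p_t - p_0 along each path

var_t50 = p[:, 49].var()                 # should be ~ sigma^2 * 50
var_t100 = p[:, 99].var()                # should be ~ sigma^2 * 100
var_diff = np.diff(p, axis=1).var()      # should be ~ sigma^2
print(f"var(p_50) ~ {var_t50:.1f}, var(p_100) ~ {var_t100:.1f}, "
      f"var(dp) ~ {var_diff:.3f}")
```

The cross-path variance roughly doubles from t = 50 to t = 100, while the differenced series has constant variance, matching var(pt|p0) = σ²t and var(∆pt) = σ².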

Tests of Random Walk 1 with εt ∼ IID, CLM 2.1

• Sequences and Reversals


Consider equation (2) without drift, i.e., µ = 0, and the indicator
variable It defined as follows:

It = 1 if rt = pt − pt−1 > 0
It = 0 if rt = pt − pt−1 ≤ 0
As in the Bernoulli "coin-toss" game, It indicates whether the
continuously compounded return rt is positive or negative. The Cowles-Jones
test is a comparison of the frequency of sequences and reversals
in asset returns. Sequences are defined as pairs of consecutive returns
with the same sign (i.e., It = 1,1 or It = 0,0), whereas reversals are
pairs of consecutive returns with opposite signs (i.e., It = 1,0 or
It = 0,1). Given a sample of n+1 returns r1 , r2 , · · ·, rn+1 , the number
of sequences Ns and the number of reversals Nr may be expressed as simple
functions of the It 's:

Ns ≡ Σ_{t=1}^{n} Yt ,   Yt ≡ It It+1 + (1 − It )(1 − It+1 )
Nr ≡ n − Ns

Note: we generate n transformations Yt from a sample of n+1 returns.

Example 1: Defining the Number of Sequences Ns

Return rt      Indicator It    Sequence term Yt
r1 = 0.1       I1 = 1          Y1 = 0
r2 = −0.5      I2 = 0          Y2 = 1
r3 = −0.7      I3 = 0          Y3 = 0
r4 = 0.3       I4 = 1          Y4 = 0
r5 = ?
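The table can be reproduced in a few lines (a sketch; note that with four observed returns only Y1 to Y3 are computable, since Y4 requires the unknown r5):

```python
# Sketch reproducing Example 1: with n + 1 = 4 observed returns, only
# Y_1..Y_3 can be formed; Y_4 would require the unknown r_5.
returns = [0.1, -0.5, -0.7, 0.3]
I = [1 if r > 0 else 0 for r in returns]              # indicator I_t
Y = [I[t] * I[t + 1] + (1 - I[t]) * (1 - I[t + 1])    # Y_t terms
     for t in range(len(I) - 1)]
Ns = sum(Y)            # number of sequences
Nr = len(Y) - Ns       # number of reversals
print(I, Y, Ns, Nr)    # -> [1, 0, 0, 1] [0, 1, 0] 1 2
```

Only the sign change from r1 to r2 and from r3 to r4 produce reversals; the pair (r2, r3) of consecutive negative returns is the single sequence.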

• Cowles Jones Statistic without drift


If log prices follow a driftless IID RW1 process and if the distribution
of εt is symmetric, then positive and negative rt should be equally
likely. This is similar to a fair coin toss with probability one-half for
either outcome. It implies that for any pair of consecutive returns, a
sequence and a reversal are equally probable; hence the Cowles-Jones
ratio CĴ ≡ Ns /Nr should be equal to one. More formally, this ratio may
be interpreted as a consistent estimator of the ratio CJ of the
probability πs of a sequence to the probability 1 − πs of a reversal,
since

CĴ ≡ Ns /Nr = (Ns /N )/(Nr /N ) = π̂s /(1 − π̂s ) →pr πs /(1 − πs ) = CJ = (1/2)/(1/2) = 1.   (3)
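A sketch of the CJ ratio on simulated driftless IID returns (sample size and seed are arbitrary):

```python
import numpy as np

# Sketch: for a driftless IID random walk with symmetric increments,
# the Cowles-Jones ratio Ns / Nr should be close to one.
rng = np.random.default_rng(6)
r = rng.normal(0.0, 1.0, size=100_001)       # n + 1 simulated returns
I = (r > 0).astype(int)                      # indicator variable I_t
Y = I[:-1] * I[1:] + (1 - I[:-1]) * (1 - I[1:])
Ns = Y.sum()                                 # number of sequences
Nr = len(Y) - Ns                             # number of reversals
CJ = Ns / Nr
print(f"CJ ~ {CJ:.3f}")
```

Under a drift, positive returns become more likely than negative ones, sequences become more frequent, and CJ rises above one; that is the case taken up next.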

Some Statistical Concepts


Before we can figure out the case of CJ with drift we need some statistics.
• More Distribution Theory
Bernoulli Distribution:
E(It ) = p = π,
var(It ) = pq = π(1 − π)

Binomial Distribution:
E(Ns ) = np = nπ
var(Ns ) = npq = nπ(1 − π)

where Ns = Σ_{t=1}^{n} Yt and Yt = It It+1 + (1 − It )(1 − It+1 ); thus a
Binomial random variable is a sum of Bernoulli random variables:
Binomial = Σ_{i=1}^{n} Bernoulli_i.
• Variance Transformations with T
This transformation is used frequently in econometrics. It says

(θ̂ − θ0 ) ∼ N (0, V )
(1/T )(θ̂ − θ0 ) ∼ N (0, V /T²)

• Cumulative Normal Distribution


Let X be a random variable. The point function F (·) (a function from
a point to a point), R → [0, 1], defined by F (x) = P r(X ≤ x) for all
x ∈ R, is called the distribution function of X and satisfies the
following properties:
(i) F (·) is non-decreasing
(ii) F (−∞) = 0 and F (∞) = 1
(iii) F (·) is continuous from the right.
Next, let F (·) be the distribution function of the random variable X.
The non-negative function f (x) defined by

F (x) = ∫_{−∞}^{x} f (u) du ,   ∀x ∈ R (continuous case)

or

F (x) = Σ_{u≤x} f (u) ,   ∀x ∈ R (discrete case)

is said to be the probability density function of X.


Example: coin tossing (two tosses, X = number of heads): f (0) = 1/4,
f (1) = 1/2, and f (2) = 1/4. Sketch F (·) and f (·).
Example: consider the case where X takes values in the interval
[a, b] and all values of X are attributed the same probability; we express
this by saying that X is uniformly distributed on the interval [a, b] and
we write X ∼ U (a, b).
The distribution function of X takes the form

F (x) = 0 for x < a
F (x) = (x − a)/(b − a) for a ≤ x ≤ b
F (x) = 1 for x > b

The corresponding density function of X is given by

f (x) = 1/(b − a) for a ≤ x ≤ b
f (x) = 0 otherwise

Sketch F (·) and f (·).


Next, define the normal distribution function and density function as

Φ(x) = ∫_{−∞}^{x} (1/(2π)^{0.5}) e^{−t²/2} dt = ∫_{−∞}^{x} φ(t) dt ,

where for a general N (µ, σ²) variable the argument is the standardized
value (x − µ)/σ, and the normal density function is
φ(x) = (1/(2π)^{0.5}) e^{−x²/2}.
