
ECMT3150: The Econometrics of Financial Markets

0. Preliminaries

Simon Kwok
University of Sydney

Semester 1, 2022
Outline

1. Probability Concepts
1.1 Probability Distribution
1.2 Expectation and Moment
1.3 Joint, Marginal and Conditional Distribution
1.4 Conditional Expectation and Conditional Moment
1.5 Law of Iterated Expectations
1.6 Independence and Zero Correlation
1.7 Filtered Probability Space, Random Variable, Natural Filtration
2. Time Series
3. Properties of Time Series
3.1 Stationarity
3.2 Error Dynamics

Expectation

- The probability distribution of a random variable $X$ can be characterized by its probability mass function (pmf) $p_X(x)$ if $X$ is discrete, or its probability density function (pdf) $f_X(x)$ if $X$ is continuous.
- For a random variable $X$ with support $S$,¹ the expectation $E(X)$ is its mean $\mu_X$:

$$E(X) = \begin{cases} \sum_{x \in S} x\, p_X(x) & \text{if } X \text{ is discrete with pmf } p_X(x); \\ \int_{x \in S} x\, f_X(x)\,dx & \text{if } X \text{ is continuous with pdf } f_X(x). \end{cases}$$

¹ The support $S$ covers all possible values of $X$. It is a finite/countable set if $X$ is discrete, or a continuous interval if $X$ is continuous.
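As a concrete illustration of both branches of this formula (an invented example, not from the slides), here is a minimal Python sketch that computes $E(X)$ for a discrete pmf by summation and for a continuous pdf by numerical integration:

```python
import numpy as np
from scipy import integrate, stats

# Discrete case: a fair six-sided die, S = {1, ..., 6}, p_X(x) = 1/6 (illustrative).
support = np.arange(1, 7)
pmf = np.full(6, 1 / 6)
mean_discrete = np.sum(support * pmf)  # E(X) = sum of x * p_X(x) = 3.5

# Continuous case: standard normal pdf, S = (-inf, inf).
mean_continuous, _ = integrate.quad(lambda x: x * stats.norm.pdf(x), -np.inf, np.inf)

print(mean_discrete)    # 3.5
print(mean_continuous)  # ~0.0
```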
Expectation

- The expectation of $g(X)$ (for some function $g$) is defined similarly:

$$E(g(X)) = \begin{cases} \sum_{x \in S} g(x)\, p_X(x) & \text{if } X \text{ is discrete}; \\ \int_{x \in S} g(x)\, f_X(x)\,dx & \text{if } X \text{ is continuous}. \end{cases}$$

- Examples:
  - Setting $g(X) = X^\ell$ yields the $\ell$th moment of $X$: $E(X^\ell)$.
  - Setting $g(X) = (X - \mu_X)^\ell$ yields the $\ell$th central moment of $X$: $E[(X - \mu_X)^\ell]$.
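Continuing the die example from the previous sketch (again purely illustrative), the same machinery with different choices of $g$ gives raw and central moments:

```python
import numpy as np

# Fair die again: support {1, ..., 6}, pmf 1/6 each (illustrative).
support = np.arange(1, 7)
pmf = np.full(6, 1 / 6)

def expectation(g):
    """E(g(X)) for a discrete X: sum of g(x) * p_X(x) over the support."""
    return np.sum(g(support) * pmf)

mu = expectation(lambda x: x)                          # first moment: 3.5
second_raw = expectation(lambda x: x**2)               # E(X^2) = 91/6 ~ 15.17
second_central = expectation(lambda x: (x - mu) ** 2)  # variance ~ 2.92
print(mu, second_raw, second_central)
```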
Variance, Skewness, Kurtosis, Covariance

- The variance of $X$ is defined as the second central moment of $X$:

$$\sigma_X^2 \equiv Var(X) = E[(X - E(X))^2].$$

- The skewness of $X$ is defined by:

$$Skew(X) = \frac{E[(X - \mu_X)^3]}{\sigma_X^3}.$$

- The kurtosis of $X$ is defined by:

$$Kur(X) = \frac{E[(X - \mu_X)^4]}{\sigma_X^4}.$$

- Given two random variables $X$ and $Y$ with finite means, their covariance is defined by:

$$Cov(X, Y) = E[(X - E(X))(Y - E(Y))].$$
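A hedged sketch of the sample analogues of these moments in Python, using an arbitrary heavy-tailed distribution; note that scipy's kurtosis reports excess kurtosis by default, so `fisher=False` is needed to match the definition above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=100_000)  # heavy-tailed draws (illustrative)

variance = np.var(x)                    # sample analogue of E[(X - mu)^2]
skewness = stats.skew(x)                # sample analogue of E[(X - mu)^3] / sigma^3
kurt = stats.kurtosis(x, fisher=False)  # sample analogue of E[(X - mu)^4] / sigma^4

# t(5) is symmetric, so Skew ~ 0; its kurtosis is 9, well above the
# Gaussian benchmark of 3, reflecting fat tails.
print(variance, skewness, kurt)  # ~1.67, ~0, ~9 (the kurtosis estimate is noisy)
```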
Joint, Marginal and Conditional Distribution

- The joint probability distribution of two random variables $X$ and $Y$ is characterized by:

$$\begin{cases} \text{joint pmf } p_{X,Y}(x, y) & \text{if discrete}; \\ \text{joint pdf } f_{X,Y}(x, y) & \text{if continuous}. \end{cases}$$

- Let $S_Y$ be the support of $Y$. The marginal distribution of $X$ is characterized by:

$$\begin{cases} p_X(x) = \sum_{y \in S_Y} p_{X,Y}(x, y) & \text{if discrete}; \\ f_X(x) = \int_{y \in S_Y} f_{X,Y}(x, y)\,dy & \text{if continuous}. \end{cases}$$

- The conditional distribution of $Y$ given $X$ is characterized by:

$$\begin{cases} \text{conditional pmf } p_{Y|X}(y|x) = \dfrac{p_{X,Y}(x, y)}{p_X(x)} & \text{if discrete}; \\[2ex] \text{conditional pdf } f_{Y|X}(y|x) = \dfrac{f_{X,Y}(x, y)}{f_X(x)} & \text{if continuous}. \end{cases}$$
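A small numerical illustration (an invented 2x2 joint pmf, not from the slides) of how the marginals and conditionals fall out of the joint:

```python
import numpy as np

# Invented joint pmf over X in {0, 1} (rows) and Y in {0, 1} (columns).
p_xy = np.array([[0.3, 0.2],
                 [0.1, 0.4]])

p_x = p_xy.sum(axis=1)             # marginal of X: sum the joint over y
p_y = p_xy.sum(axis=0)             # marginal of Y: sum the joint over x
p_y_given_x = p_xy / p_x[:, None]  # conditional pmf p(y|x): each row normalized

print(p_x)          # [0.5 0.5]
print(p_y)          # [0.4 0.6]
print(p_y_given_x)  # rows sum to 1: [[0.6 0.4], [0.2 0.8]]
```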
Conditional Expectation, Law of Iterated Expectations

- The conditional expectation of $Y$ given $X$ is

$$E(Y|X = x) = \begin{cases} \sum_{y \in S_Y} y\, p_{Y|X}(y|x) & \text{if discrete}; \\ \int_{y \in S_Y} y\, f_{Y|X}(y|x)\,dy & \text{if continuous}. \end{cases}$$

- The conditional expectation of $h(Y)$ (for some function $h$) given $X$ is defined similarly:

$$E(h(Y)|X = x) = \begin{cases} \sum_{y \in S_Y} h(y)\, p_{Y|X}(y|x) & \text{if discrete}; \\ \int_{y \in S_Y} h(y)\, f_{Y|X}(y|x)\,dy & \text{if continuous}. \end{cases}$$

- Provided that the mean of $Y$ is finite, the law of iterated expectations holds:

$$E(Y) = E[E(Y|X)].$$

  This relation still holds if $X$ and $Y$ are random elements (e.g., random vectors).
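To see the law of iterated expectations in numbers, a minimal sketch reusing the invented discrete joint pmf from the previous example, computing $E(Y)$ directly from the marginal and via $E[E(Y|X)]$:

```python
import numpy as np

# Invented joint pmf over X in {0, 1} (rows) and Y in {0, 1} (columns).
p_xy = np.array([[0.3, 0.2],
                 [0.1, 0.4]])
y_vals = np.array([0.0, 1.0])

p_x = p_xy.sum(axis=1)
p_y_given_x = p_xy / p_x[:, None]

e_y_direct = float(y_vals @ p_xy.sum(axis=0))  # E(Y) from the marginal of Y
e_y_given_x = p_y_given_x @ y_vals             # E(Y|X=x) for each x: [0.4, 0.8]
e_y_iterated = float(e_y_given_x @ p_x)        # E[E(Y|X)]: average over X

print(e_y_direct, e_y_iterated)  # both 0.6: the LIE holds
```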
Independence

- Two random variables $X$ and $Y$ are independent if $P(X \in A, Y \in B) = P(X \in A)P(Y \in B)$ for all subsets $A, B$ in $\mathbb{R}$.
- Equivalently, this can be characterized using the joint, conditional and marginal pmf/pdf: for all $x \in S_X$, $y \in S_Y$,

$$\begin{cases} p_{X,Y}(x, y) = p_X(x)p_Y(y) \ \text{ or } \ p_{Y|X}(y|x) = p_Y(y) & \text{if discrete}; \\ f_{X,Y}(x, y) = f_X(x)f_Y(y) \ \text{ or } \ f_{Y|X}(y|x) = f_Y(y) & \text{if continuous}. \end{cases}$$

- $X_1, X_2, \ldots, X_n$ are jointly independent if

$$P(X_1 \in A_1, \ldots, X_n \in A_n) = P(X_1 \in A_1) \cdots P(X_n \in A_n)$$

for all subsets $A_1, \ldots, A_n$ in $\mathbb{R}$.
Independence vs Zero Correlation

- Independence implies zero correlation, but not vice versa.
- Suppose $X$ and $Y$ are uncorrelated, i.e., $Cov(X, Y) = E[(X - E(X))(Y - E(Y))] = 0$. It follows that

$$E(XY) = E(X)E(Y).$$

- Independence of $X$ and $Y$, by contrast, requires that

$$E(g(X)h(Y)) = E(g(X))E(h(Y))$$

hold for all functions $g(\cdot)$ and $h(\cdot)$, not just the identity function. In general, this is a stronger requirement than zero correlation.
- A special case in which independence $\Leftrightarrow$ zero correlation:
  - If $(X, Y)$ follows a bivariate normal distribution with $Corr(X, Y) = 0$, then $X$ and $Y$ are independent.
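A standard counterexample consistent with this claim (illustrative, not from the slides): with $X \sim N(0, 1)$ and $Y = X^2$, we get $Cov(X, Y) = E(X^3) = 0$, yet $Y$ is a deterministic function of $X$. Taking $g(x) = x^2$ exposes the dependence:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)
y = x**2  # completely determined by x, hence not independent of x

print(np.corrcoef(x, y)[0, 1])     # ~0: zero correlation, since E(X^3) = 0
print(np.corrcoef(x**2, y)[0, 1])  # 1.0: with g(x) = x^2, h(y) = y, the factorization fails
```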
Filtered Probability Space

- A filtered probability space is characterized by $(\Omega, \{F_t\}, P)$, where
  - $\Omega$ is an abstract "sample space" containing all outcomes and events;
  - $\{F_t\}$ is the "filtration", an expanding sequence of information sets² indexed by time ($F_s \subseteq F_t$ for $s \le t$);
  - $P$ is the probability measure that assigns a probability number in $[0, 1]$ to all random events.
- A real-valued random variable $X$ is a mapping from the sample space $\Omega$ to the real line $\mathbb{R}$. For some outcome $\omega \in \Omega$, $X$ maps $\omega$ to some real number $X(\omega)$.³

² The information set $F_t$ contains all possible random events in $\Omega$ that can be resolved with information up to time $t$.
³ Similarly, a random vector is a mapping from the sample space $\Omega$ to the $k$-dimensional Euclidean space $\mathbb{R}^k$.
Natural Filtration

- The natural filtration $\{F_t\}$ of a time series $\{y_t\}$ is the filtration generated by $\{y_t\}$. For each $t$, $F_t$ contains information about the history of $\{y_t\}$ up to time $t$. Mathematically, $F_t$ is the "sigma algebra" generated by $y_t, y_{t-1}, \ldots$. We write

$$F_t := \sigma\{y_t, y_{t-1}, \ldots\}.$$

- For our purpose, it suffices to treat $F_t$ as containing the knowledge of the values of $y_t, y_{t-1}, \ldots$. In particular, $F_t$ may contain all possible functions of $y_t, y_{t-1}, \ldots$.
Time Series

- A time series is a sequence of random variables indexed by time.
- Examples: time series of IBM prices, IBM returns, GDP growth rates, etc.
- Notations:

$$\{y_t\}_{t=1}^{T} = \{y_1, y_2, \ldots, y_T\}$$
$$\{y_t\}_{t=1}^{\infty} = \{y_1, y_2, \ldots\}$$
$$\{y_t\}_{t=-\infty}^{\infty} = \{\ldots, y_{-2}, y_{-1}, y_0, y_1, y_2, \ldots\}$$

  Sometimes we simply write it as $\{y_t\}$.

- A multiple time series is a sequence of random vectors indexed by time.
Stationarity - Definitions

A time series process $\{y_t\}$ is:

- weakly (or covariance) stationary if $E(y_t)$, $Var(y_t)$ and $Cov(y_t, y_{t-j})$ exist and are constant over time $t$, for all lags $j$;
- strictly (or strongly) stationary if the finite dimensional distributions of $\{y_{t_1}, y_{t_2}, \ldots, y_{t_d}\}$ and $\{y_{t_1+s}, y_{t_2+s}, \ldots, y_{t_d+s}\}$ are the same for any given set of $d$ time points $t_1 < t_2 < \cdots < t_d$, any time shift $s$, and any positive integer $d$.
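To make the "constant over time" requirement concrete, a simulation sketch (an illustration not on the slides): an i.i.d. sequence has time-invariant variance, while a random walk $y_t = y_{t-1} + \varepsilon_t$ has $Var(y_t)$ growing with $t$, so it is not weakly stationary:

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, T = 20_000, 200
eps = rng.standard_normal((n_paths, T))

iid_series = eps                      # Var(y_t) = 1 at every t: weakly stationary
random_walk = np.cumsum(eps, axis=1)  # y_t = y_{t-1} + eps_t with y_0 = 0

# Cross-sectional variance across paths estimates Var(y_t) at each date t.
print(iid_series.var(axis=0)[[9, 99, 199]])   # ~[1, 1, 1]: constant in t
print(random_walk.var(axis=0)[[9, 99, 199]])  # ~[10, 100, 200]: grows with t
```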
Stationarity - Properties

1. Weak stationarity does not imply strict stationarity, since the third and/or some higher order moments of the distribution of $y_t$ may be time-varying.
2. If the mean and variance exist, then strict stationarity implies weak stationarity.
3. In general, strict stationarity does not imply weak stationarity, since strict stationarity does not require the existence of mean and variance. E.g., if $\{y_t\}$ is i.i.d. Cauchy, then $E(y_t)$ does not exist (the defining integral diverges).
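A simulation sketch of the Cauchy case (illustrative, not from the slides): an i.i.d. Cauchy series is strictly stationary, yet its running sample mean never settles down, because the population mean does not exist:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_cauchy(100_000)  # i.i.d. Cauchy: strictly stationary

running_mean = np.cumsum(y) / np.arange(1, y.size + 1)
# For i.i.d. Gaussian data this would converge to 0; for Cauchy it keeps jumping.
print(running_mean[[99, 9_999, 99_999]])  # erratic values at n = 100, 10^4, 10^5
```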
Error Dynamics - Definitions

- White noise: $\{\varepsilon_t\} \sim wn(0, \sigma_\varepsilon^2)$ if $E(\varepsilon_t) = 0$, $Var(\varepsilon_t) = \sigma_\varepsilon^2$, and $Cov(\varepsilon_t, \varepsilon_{t-j}) = 0$ for all $t$ and all $j \neq 0$.
- Martingale difference sequence: $\{\varepsilon_t\} \sim mds$ if $E(\varepsilon_t | F_s) = 0$ for all $s < t$, where $F_s$ is the history of $\{\varepsilon_t\}$ up to time $s$.
- Independent and identically distributed sequence: $\{\varepsilon_t\} \sim iid$ if all elements in $\{\varepsilon_t\}$ are jointly independent and have the same distribution.

Economic interpretations:

- wn is linearly unpredictable. It suffices to assume wn errors for linear models.
- mds is unpredictable in conditional mean.
- iid is unpredictable, both linearly and non-linearly.
Error Dynamics - Properties

1. wn implies $\{\varepsilon_t\}$ is homoskedastic (i.e., constant variance $\sigma_\varepsilon^2$), but mds only imposes a restriction on the conditional mean.
2. The moments of an mds or an iid sequence may not exist (except for the mean of an mds).
3. If the first two moments exist, then iid $\Rightarrow$ mds $\Rightarrow$ wn.
4. wn $\not\Rightarrow$ mds. Counterexample (simulated in the sketch below): the nonlinear moving average model

$$u_t = \varepsilon_t - \varepsilon_{t-1}\varepsilon_{t-2}, \qquad \{\varepsilon_t\} \sim iid(0, 1).$$

It is a wn but not an mds: $E(u_t | \varepsilon_{t-1}, \varepsilon_{t-2}, \ldots) = -\varepsilon_{t-1}\varepsilon_{t-2} \neq 0$.

5. mds $\not\Rightarrow$ iid. Counterexample (also simulated below): the model

$$u_t = \sigma_t \varepsilon_t, \qquad \{\varepsilon_t\} \sim iid(0, 1), \qquad \sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2.$$

It is an mds but not an iid sequence.

6. If $E(\varepsilon_t | F_s)$ is a linear projection (i.e., a linear function of $\{\varepsilon_s, \varepsilon_{s-1}, \ldots\}$) and $Var(\varepsilon_t)$ is constant, then mds $\Leftrightarrow$ wn.
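A simulation sketch of the two counterexamples above (illustrative Python with invented parameter values $\omega = \alpha = 0.5$; only the model forms in items 4 and 5 come from the slides). The nonlinear MA series shows no autocorrelation yet is predictable from past $\varepsilon$'s, so it is wn but not mds; the second series has zero conditional mean but autocorrelated squares, so it is mds but not iid:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
eps = rng.standard_normal(n)

# Item 4: u_t = eps_t - eps_{t-1} * eps_{t-2}  -- wn but not mds.
u = eps[2:] - eps[1:-1] * eps[:-2]
print(np.corrcoef(u[1:], u[:-1])[0, 1])             # ~0: no autocorrelation (wn)
print(np.corrcoef(u, -eps[1:-1] * eps[:-2])[0, 1])  # ~0.7: u_t is predictable, so not an mds

# Item 5: u_t = sigma_t * eps_t, sigma_t^2 = omega + alpha * eps_{t-1}^2  -- mds but not iid.
omega, alpha = 0.5, 0.5  # invented parameter values
sigma = np.sqrt(omega + alpha * eps[:-1] ** 2)
v = sigma * eps[1:]
print(np.corrcoef(v[1:], v[:-1])[0, 1])            # ~0: mds implies wn
print(np.corrcoef(v[1:] ** 2, v[:-1] ** 2)[0, 1])  # > 0: volatility clustering, so not iid
```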
