2.2 Vector Autoregression (VAR)

Let $y_t = (y_{1t}, \ldots, y_{mt})'$, $t = 1, \ldots, T$, denote an $m$-dimensional time series. A VAR($p$) model is

(12)
$$
\begin{pmatrix} y_{1t} \\ y_{2t} \\ \vdots \\ y_{mt} \end{pmatrix}
=
\begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_m \end{pmatrix}
+
\begin{pmatrix}
\phi_{11}^{(1)} & \phi_{12}^{(1)} & \cdots & \phi_{1m}^{(1)} \\
\phi_{21}^{(1)} & \phi_{22}^{(1)} & \cdots & \phi_{2m}^{(1)} \\
\vdots & \vdots & \ddots & \vdots \\
\phi_{m1}^{(1)} & \phi_{m2}^{(1)} & \cdots & \phi_{mm}^{(1)}
\end{pmatrix}
\begin{pmatrix} y_{1,t-1} \\ y_{2,t-1} \\ \vdots \\ y_{m,t-1} \end{pmatrix}
+ \cdots +
\begin{pmatrix}
\phi_{11}^{(p)} & \phi_{12}^{(p)} & \cdots & \phi_{1m}^{(p)} \\
\phi_{21}^{(p)} & \phi_{22}^{(p)} & \cdots & \phi_{2m}^{(p)} \\
\vdots & \vdots & \ddots & \vdots \\
\phi_{m1}^{(p)} & \phi_{m2}^{(p)} & \cdots & \phi_{mm}^{(p)}
\end{pmatrix}
\begin{pmatrix} y_{1,t-p} \\ y_{2,t-p} \\ \vdots \\ y_{m,t-p} \end{pmatrix}
+
\begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \\ \vdots \\ \epsilon_{mt} \end{pmatrix}
$$
In matrix notation,

(13)
$$
y_t = \mu + \Phi_1 y_{t-1} + \cdots + \Phi_p y_{t-p} + \epsilon_t,
$$

which can be further simplified by adopting the matrix form of a lag polynomial

(14)
$$
\Phi(L) = I - \Phi_1 L - \cdots - \Phi_p L^p,
$$

so that

$$
\Phi(L) y_t = \mu + \epsilon_t.
$$
Note that in the above model each $y_{it}$ depends not only on its own history but also on the history of the other series (cross dependencies). This gives us several additional tools for analyzing causal as well as feedback effects, as we shall see shortly.
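As a concrete illustration (not from the notes; all numbers are arbitrary), the minimal sketch below simulates a bivariate VAR(1) and recovers $\mu$ and $\Phi_1$ by equation-by-equation OLS, the standard way VAR coefficients are estimated. The off-diagonal entries of $\Phi_1$ are exactly the cross dependencies discussed above.

```python
import numpy as np

# Simulate y_t = mu + Phi_1 y_{t-1} + eps_t for a stable bivariate VAR(1)
rng = np.random.default_rng(0)
m, T = 2, 5000
mu = np.array([0.5, -0.2])
Phi1 = np.array([[0.5, 0.1],    # y1 depends on its own lag and on y2's lag
                 [0.3, 0.4]])   # cross dependencies sit off the diagonal

y = np.zeros((T, m))
for t in range(1, T):
    y[t] = mu + Phi1 @ y[t - 1] + rng.standard_normal(m)

# OLS: regress y_t on a constant and y_{t-1}; the regressors are the same
# in every equation, so one multivariate least-squares solve suffices.
X = np.column_stack([np.ones(T - 1), y[:-1]])   # (T-1) x (1+m)
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)   # (1+m) x m coefficient matrix
mu_hat, Phi1_hat = B[0], B[1:].T

print(np.round(Phi1_hat, 2))
```

With $T = 5000$ observations the estimates land close to the true $\mu$ and $\Phi_1$.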
The error terms are assumed to satisfy

$$
E(\epsilon_t) = 0, \qquad
E(\epsilon_t \epsilon_s') =
\begin{cases}
\Sigma & \text{if } t = s, \\
0 & \text{if } t \ne s.
\end{cases}
$$
VAR(2) Estimates:
Sample(adjusted): 1965:04 1995:12
Included observations: 369 after adjusting end points
Standard errors in ( ) & t-statistics in [ ]
===========================================================
                 DFTA        DDIV        DR20      DTBILL
===========================================================
DFTA(-1)      0.102018   -0.005389   -0.140021   -0.085696
             (0.05407)   (0.01280)   (0.02838)   (0.05338)
             [1.88670]  [-0.42107]  [-4.93432]  [-1.60541]

DFTA(-2)     -0.170209    0.012231    0.014714    0.057226
             (0.05564)   (0.01317)   (0.02920)   (0.05493)
            [-3.05895]   [0.92869]   [0.50389]   [1.04180]

DDIV(-1)     -0.113741    0.035924    0.197934    0.280619
             (0.22212)   (0.05257)   (0.11657)   (0.21927)
            [-0.51208]   [0.68333]   [1.69804]   [1.27978]

DDIV(-2)      0.065178    0.103395    0.057329    0.165089
             (0.22282)   (0.05274)   (0.11693)   (0.21996)
             [0.29252]   [1.96055]   [0.49026]   [0.75053]

DR20(-1)     -0.359070   -0.003130    0.282760    0.373164
             (0.11469)   (0.02714)   (0.06019)   (0.11322)
            [-3.13084]  [-0.11530]   [4.69797]   [3.29596]

DR20(-2)      0.051323   -0.012058   -0.131182   -0.071333
             (0.11295)   (0.02673)   (0.05928)   (0.11151)
             [0.45437]  [-0.45102]  [-2.21300]  [-0.63972]

DTBILL(-1)    0.068239    0.005752   -0.033665    0.232456
             (0.06014)   (0.01423)   (0.03156)   (0.05937)
             [1.13472]   [0.40412]  [-1.06672]   [3.91561]

DTBILL(-2)   -0.050220    0.023590    0.034734   -0.015863
             (0.05902)   (0.01397)   (0.03098)   (0.05827)
            [-0.85082]   [1.68858]   [1.12132]  [-0.27224]

C             0.892389    0.587148   -0.033749   -0.317976
             (0.38128)   (0.09024)   (0.20010)   (0.37640)
             [2.34049]   [6.50626]  [-0.16867]  [-0.84479]
===========================================================
Continues . . .
===========================================================
                    DFTA        DDIV        DR20      DTBILL
===========================================================
R-squared         0.057426    0.028885    0.156741    0.153126
Adj. R-squared    0.036480    0.007305    0.138002    0.134306
Sum sq. resids    13032.44    730.0689    3589.278    12700.62
S.E. equation     6.016746    1.424068    3.157565    5.939655
F-statistic       2.741619    1.338486    8.364390    8.136583
Log likelihood   -1181.220   -649.4805   -943.3092   -1176.462
Akaike AIC        6.451058    3.569000    5.161567    6.425267
Schwarz SC        6.546443    3.664385    5.256953    6.520652
Mean dependent    0.788687    0.688433    0.052983   -0.013968
S.D. dependent    6.129588    1.429298    3.400942    6.383798
===========================================================
Determinant Residual Covariance      18711.41
Log Likelihood (d.f. adjusted)      -3909.259
Akaike Information Criteria          21.38352
Schwarz Criteria                     21.76506
===========================================================
The underlying assumption is that the residuals follow a multivariate normal distribution, i.e.

(18)
$$
\epsilon_t \sim N_m(0, \Sigma).
$$

For lag-length selection, Akaike's information criterion can be used:

(20)
$$
\mathrm{AIC} = -2 \log L + 2s.
$$
The best-fitting model is the one that minimizes the criterion function. For example, in a VAR($j$) model with $m$ equations there are $s = m(1 + jm) + m(m+1)/2$ estimated parameters: $jm^2$ lag coefficients, $m$ intercepts, and the $m(m+1)/2$ distinct elements of $\Sigma$.
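The parameter count can be checked directly; the helper below (an illustration, not from the notes) applies the formula to the four-variable VAR(2) estimated above.

```python
# Parameter count s = m(1 + j*m) + m(m + 1)/2 for a VAR(j) with m equations:
# j*m^2 lag coefficients + m intercepts + m(m+1)/2 distinct elements of Sigma.
def var_param_count(m: int, j: int) -> int:
    return m * (1 + j * m) + m * (m + 1) // 2

print(var_param_count(4, 2))  # the 4-equation VAR(2) above -> 46 parameters
```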
Applied to the VAR($j$) model, BIC reduces to

(21)
$$
\mathrm{BIC}(j) = \log|\hat\Sigma_j| + \frac{jm^2 \log T}{T},
$$

and AIC to

(22)
$$
\mathrm{AIC}(j) = \log|\hat\Sigma_j| + \frac{2jm^2}{T},
$$

$j = 0, \ldots, p$, where

(23)
$$
\hat\Sigma_j = \frac{1}{T} \sum_{t=j+1}^{T} \hat\epsilon_{t,j}\, \hat\epsilon_{t,j}'
= \frac{1}{T} \hat E_j \hat E_j',
$$

with $\hat\epsilon_{t,j}$ the OLS residual vector of the VAR($j$) model (i.e. the VAR model estimated with $j$ lags), and

(24)
$$
\hat E_j = \bigl(\hat\epsilon_{j+1,j},\ \hat\epsilon_{j+2,j},\ \ldots,\ \hat\epsilon_{T,j}\bigr),
$$

an $m \times (T - j)$ matrix.
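A minimal sketch of this lag-selection recipe (illustrative; the data are simulated, not the series from the notes): fit a VAR($j$) with intercept by OLS for each $j$, form $\hat\Sigma_j$ as in (23), and evaluate (21)–(22).

```python
import numpy as np

def sigma_hat(y, j):
    """OLS residual covariance Sigma_hat_j of a VAR(j) with intercept, as in (23)."""
    T, _ = y.shape
    X = np.column_stack([np.ones(T - j)] +
                        [y[j - i: T - i] for i in range(1, j + 1)])
    B, *_ = np.linalg.lstsq(X, y[j:], rcond=None)
    E = y[j:] - X @ B                  # OLS residual vectors eps_hat_{t,j}
    return E.T @ E / T                 # (1/T) * E_hat_j E_hat_j'

def select_lag(y, p):
    """Return (argmin AIC(j), argmin BIC(j)) over j = 0..p, per (21)-(22)."""
    T, m = y.shape
    aic, bic = [], []
    for j in range(p + 1):
        ld = np.linalg.slogdet(sigma_hat(y, j))[1]   # log|Sigma_hat_j|
        aic.append(ld + 2 * j * m**2 / T)
        bic.append(ld + j * m**2 * np.log(T) / T)
    return int(np.argmin(aic)), int(np.argmin(bic))

# Toy check on simulated VAR(1) data
rng = np.random.default_rng(42)
y = np.zeros((800, 2))
for t in range(1, 800):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal(2)
print(select_lag(y, 4))   # lag orders picked by AIC and BIC
```

Since BIC's penalty grows with $\log T$, it tends to pick the true (here small) lag order, while AIC may occasionally overfit.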
Alternatively, a likelihood ratio statistic can be used to compare a VAR($k$) against a VAR($p$), $k < p$:

$$
\mathrm{LR} = T\bigl(\log|\hat\Sigma_k| - \log|\hat\Sigma_p|\bigr).
$$

If

$$
H_0: \Phi_{k+1} = \cdots = \Phi_p = 0
$$

is true, then asymptotically

(27)
$$
\mathrm{LR} \sim \chi^2_{df}.
$$
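A sketch of the test in code. The excerpt leaves $df$ unspecified; the standard choice, assumed here, is $df = m^2(p-k)$, the number of coefficients restricted to zero under $H_0$. The log-determinant inputs below are hypothetical numbers, not from the estimated model.

```python
from scipy.stats import chi2

def var_lr_test(logdet_k: float, logdet_p: float, T: int, m: int,
                k: int, p: int):
    """LR test of VAR(k) vs VAR(p): LR = T(log|Sigma_k| - log|Sigma_p|)."""
    lr = T * (logdet_k - logdet_p)
    df = m**2 * (p - k)            # restricted coefficients (standard result)
    return lr, df, chi2.sf(lr, df) # asymptotic chi^2 p-value

# Hypothetical log-determinants for a 4-variable VAR(1) vs VAR(2), T = 369
lr, df, pval = var_lr_test(-1.00, -1.10, T=369, m=4, k=1, p=2)
print(lr, df, pval)
```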
In order to investigate whether the VAR residuals are white noise, the hypothesis to be tested is

(28)
$$
H_0: \rho_1 = \cdots = \rho_h = 0.
$$

A portmanteau-type test statistic is

(29)
$$
Q_h = T \sum_{k=1}^{h} \mathrm{tr}\bigl(\hat R_k' \hat R_0^{-1} \hat R_k \hat R_0^{-1}\bigr),
$$

where $\hat R_k = (\hat r_{ij}(k))$ are the estimated (residual) autocorrelations, and $\hat R_0$ the contemporaneous correlations of the residuals. See e.g. Lütkepohl, Helmut (1993). Introduction to Multiple Time Series, 2nd Ed., Ch. 4.4.
A small-sample adjusted version of the statistic is

(30)
$$
\bar Q_h = T^2 \sum_{k=1}^{h} (T - k)^{-1}\, \mathrm{tr}\bigl(\hat R_k' \hat R_0^{-1} \hat R_k \hat R_0^{-1}\bigr).
$$
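Both statistics can be sketched in a few lines (illustrative code, simulated residuals). Lag-$k$ covariance matrices $\hat C_k$ are used in place of the correlation matrices $\hat R_k$; the traces in (29)–(30) are unchanged by that rescaling, since $\hat R_k = D^{-1}\hat C_k D^{-1}$ with the same diagonal $D$ throughout.

```python
import numpy as np

def portmanteau(E: np.ndarray, h: int):
    """Q_h of (29) and adjusted Qbar_h of (30) from a T x m residual matrix."""
    T, m = E.shape
    E = E - E.mean(axis=0)
    C0inv = np.linalg.inv(E.T @ E / T)          # contemporaneous (lag-0) matrix
    Q = Qadj = 0.0
    for k in range(1, h + 1):
        Ck = E[k:].T @ E[:-k] / T               # lag-k autocovariance matrix
        term = np.trace(Ck.T @ C0inv @ Ck @ C0inv)
        Q += term
        Qadj += term / (T - k)                  # small-sample weighting of (30)
    return T * Q, T**2 * Qadj

# Sanity check on genuine white noise: Q_h should be moderate, roughly
# chi^2 with m^2 * h degrees of freedom
rng = np.random.default_rng(3)
Q, Qadj = portmanteau(rng.standard_normal((500, 2)), h=5)
print(Q, Qadj)
```

Note that $\bar Q_h$ always exceeds $Q_h$, since $T^2/(T-k) > T$ for every $k \ge 1$.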
Extending the VAR with a moving average part yields the VARMA model

$$
y_t = \mu + \sum_{i=1}^{p} \Phi_i y_{t-i} + \epsilon_t - \sum_{j=1}^{q} \Theta_j \epsilon_{t-j},
$$

or

(32)
$$
\Phi(L) y_t = \mu + \Theta(L) \epsilon_t,
$$

with

$$
\Phi(L) = I - \Phi_1 L - \cdots - \Phi_p L^p, \qquad
\Theta(L) = I - \Theta_1 L - \cdots - \Theta_q L^q.
$$
The cross autocorrelations are

$$
\rho_{i,j}(k) = \frac{\gamma_{ij}(k)}{\sqrt{\gamma_i(0)\,\gamma_j(0)}},
$$

collected into the matrix

(36)
$$
\mathbf{R}_k =
\begin{pmatrix}
\rho_1(k)     & \rho_{1,2}(k) & \cdots & \rho_{1,m}(k) \\
\rho_{2,1}(k) & \rho_2(k)     & \cdots & \rho_{2,m}(k) \\
\vdots        & \vdots        & \ddots & \vdots        \\
\rho_{m,1}(k) & \rho_{m,2}(k) & \cdots & \rho_m(k)
\end{pmatrix}.
$$
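A sample version of (36) is straightforward to compute. The sketch below (illustrative, simulated data) assumes the convention $\gamma_{ij}(k) = \mathrm{cov}(y_{i,t},\, y_{j,t-k})$; other texts index the lag the other way round, which transposes the matrix.

```python
import numpy as np

def cross_corr_matrix(y: np.ndarray, k: int) -> np.ndarray:
    """Sample R_k: element (i, j) is gamma_ij(k) / sqrt(gamma_i(0) gamma_j(0))."""
    T, m = y.shape
    z = y - y.mean(axis=0)
    Ck = (z[k:].T @ z[:-k] / T) if k > 0 else (z.T @ z / T)
    s = np.sqrt(np.diag(z.T @ z / T))      # sqrt of the gamma_i(0)
    return Ck / np.outer(s, s)

# At lag 0 this is the ordinary correlation matrix (unit diagonal)
R0 = cross_corr_matrix(np.random.default_rng(5).standard_normal((1000, 3)), 0)
print(np.round(R0, 2))
```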
Remark 2.4: In general $\mathbf{R}_k \ne \mathbf{R}_k'$, i.e. the cross autocorrelation matrices are not symmetric.
Sample cross autocorrelations at lag one:

===========================================================
            Div(t-1)   Fta(t-1)   R20(t-1)   Tbl(t-1)
===========================================================
Div(t)       0.0483     0.0160     0.0056     0.0536
Fta(t)       0.0099     0.1225     0.1403     0.0266
R20(t)       0.0566     0.2968     0.2889     0.1056
Tbl(t)       0.0779     0.1620     0.3113     0.3275
===========================================================
French, K. and R. Roll (1986). Stock return variances: The arrival of information and the reaction of traders. Journal of Financial Economics, 17, 5-26.
One explanation could be that the cross autocorrelations are positive, which can be partially proved as follows. Let

(37)
$$
r_{pt} = \frac{1}{m} \sum_{i=1}^{m} r_{it} = \frac{1}{m} \mathbf{1}' \mathbf{r}_t
$$

denote the return of an equally weighted portfolio, where $\mathbf{1} = (1, \ldots, 1)'$ and $\mathbf{r}_t = (r_{1t}, \ldots, r_{mt})'$. Then

(38)
$$
\mathrm{cov}(r_{p,t-1},\, r_{pt})
= \mathrm{cov}\!\left(\frac{\mathbf{1}'\mathbf{r}_{t-1}}{m},\, \frac{\mathbf{1}'\mathbf{r}_t}{m}\right)
= \frac{\mathbf{1}' \Gamma_1 \mathbf{1}}{m^2},
$$

where $\Gamma_1 = \mathrm{cov}(\mathbf{r}_{t-1}, \mathbf{r}_t)$ is the lag-one autocovariance matrix: the portfolio's autocovariance averages all $m^2$ own- and cross-autocovariances, so positive cross terms raise it.
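The identity in (38) holds exactly for sample moments as well, which the short check below verifies (illustrative; the data are simulated i.i.d. returns, but the identity holds for any data set).

```python
import numpy as np

rng = np.random.default_rng(7)
m, T = 4, 10_000
r = rng.standard_normal((T, m))        # simulated returns, T x m

z = r - r.mean(axis=0)                 # demeaned returns
Gamma1 = z[:-1].T @ z[1:] / T          # Gamma1[i, j] ~ cov(r_{i,t-1}, r_{j,t})
rp = z.mean(axis=1)                    # demeaned equally weighted portfolio
lhs = (rp[:-1] * rp[1:]).sum() / T     # sample autocovariance of r_pt
rhs = Gamma1.sum() / m**2              # 1' Gamma_1 1 / m^2
print(np.isclose(lhs, rhs))           # the two computations agree exactly
```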
Therefore
(39)