EViews


LOAD DATA
• Click on EViews icon.
• Click on File, New, Workfile.
• In the dialog box specify annual data, and put 1871 2000 in the boxes for
beginning and end.
• The file is Shiller.xls in G:ems data; eviews courses. It is also available on Ron Smith's
home page.
• You will then get a box with two variables, C for the constant and Resid for
the residuals.
• Choose File, Import, Read Text Lotus Excel.

In the dialog box where it asks for names or numbers of series, type 5, and OK. Notice that it
will start reading the data at cell B2. This is correct: column A has the years, which it already
knows, and row 1 has the variable names, which it will read as names.

Five variables will appear in the box. These are US data for
ND nominal dividends for the year
NE nominal earnings for the year
NSP nominal Standard and Poor's stock price index, January value
PPI producer price index, January value
R average interest rate for the year
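
If you prefer the command window, the workfile can be created with a command like the sketch below. The import itself is easiest through the menus, though older versions also have a read command along these lines; treat the exact read syntax, and the bare file name used here, as version-dependent placeholders.

wfcreate a 1871 2000
read(b2) shiller.xls 5   ' read 5 series, data starting in cell B2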

Highlight ND and NE in the box. Click Quick, Graph, Line Graph, and OK the two
variables; you will get a graph of dividends and earnings. After looking at it, close
it and delete it. You could save it if you want.
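
The same graph can be drawn from the command window; a minimal sketch (g1 is just an arbitrary name for the graph object):

graph g1.line nd ne
show g1
delete g1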

ESTIMATE A LINEAR EQUATION


Click Quick, Estimate Equation, and type in
ND C NE
Notice that the default estimation method is LS (least squares), but there are other methods
you could choose. There is an Options box, which you can use to get Heteroskedasticity
and Autocorrelation Consistent (HAC) standard errors if you want.
Click OK and you will get the following output

Dependent Variable: ND
Method: Least Squares
Date: 11/05/04 Time: 15:21
Sample(adjusted): 1871 1999
Included observations: 129 after adjusting endpoints
Variable Coefficient Std. Error t-Statistic Prob.
C 0.336833 0.089865 3.748188 0.0003
NE 0.421626 0.008437 49.97162 0.0000
R-squared 0.951604 Mean dependent var 2.611163
Adjusted R-squared 0.951223 S.D. dependent var 3.984932
S.E. of regression 0.880097 Akaike info criterion 2.597813
Sum squared resid 98.37048 Schwarz criterion 2.642151
Log likelihood -165.5590 F-statistic 2497.163
Durbin-Watson stat 0.533942 Prob(F-statistic) 0.000000
On the equation window click View. Then click Actual, Fitted, Residual and then
Actual, Fitted, Residual Graph. These do not look like random residuals, and the Durbin-
Watson statistic shows severe serial correlation. Close the equation and delete it.
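
The same regression can also be run from the command window; a minimal sketch (eq1 is an arbitrary equation name, and the cov=hac option for HAC/Newey-West standard errors is assumed to be available in your EViews version):

equation eq1.ls nd c ne
' re-estimate with HAC (Newey-West) standard errors
equation eq1.ls(cov=hac) nd c ne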

GENERATE NEW VARIABLES


Click Quick, Generate Series and type into the box
LND=LOG(ND)
and OK. You will get a new series, LND, in the box. Similarly generate
LNE=LOG(NE)
LNSP=LOG(NSP)
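
Equivalently, these can be created in the command window:

series lnd = log(nd)
series lne = log(ne)
series lnsp = log(nsp)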

ESTIMATE A LOGARITHMIC DYNAMIC EQUATION AND DO DIAGNOSTIC TESTS

Click Quick, Estimate Equation and type in
LND C LNE LNE(-1) LND(-1) @TREND
@TREND is a variable that goes 1, 2, 3, etc. You will get

Dependent Variable: LND


Method: Least Squares
Date: 11/05/04 Time: 15:49
Sample(adjusted): 1872 1999
Included observations: 128 after adjusting endpoints
Variable Coefficient Std. Error t-Statistic Prob.
C -0.188548 0.044679 -4.220087 0.0000
LNE 0.266954 0.031936 8.359155 0.0000
LND(-1) 0.619659 0.040208 15.41149 0.0000
LNE(-1) 0.062857 0.043210 1.454707 0.1483
@TREND 0.000735 0.000655 1.123130 0.2636
R-squared 0.996177 Mean dependent var 0.037723
Adjusted R-squared 0.996052 S.D. dependent var 1.322725
S.E. of regression 0.083107 Akaike info criterion -2.099085
Sum squared resid 0.849543 Schwarz criterion -1.987677
Log likelihood 139.3414 F-statistic 8011.965
Durbin-Watson stat 1.765940 Prob(F-statistic) 0.000000

This estimates
$d_t = \alpha_0 + \beta_0 e_t + \beta_1 e_{t-1} + \alpha_1 d_{t-1} + \gamma t + u_t$
where d_t is log dividends (LND) and e_t is log earnings (LNE).

Click View, Actual, Fitted, Residual Graph. This looks better than before, but
there are still some outliers.
Click View on the equation box, choose Residual Tests, Serial Correlation LM
Test, and accept the default number of lags to include, 2. You will get the LM serial
correlation test. Note that neither lagged residual is individually significant (t-values
less than 2, p-values > 0.05), nor are they jointly significant (the F-statistic p-value is
0.19). So we do not have a serial correlation problem with this equation. For these
diagnostic tests, the null hypothesis is that the equation is well specified; p-values
below 0.05 indicate that there is a problem.
Click View, Residual Tests, Histogram - Normality Test. You will get the
histogram and, in the bottom right, the Jarque-Bera test statistic of 56.59 with a p-value
of 0.0000. There is clearly a failure of normality, caused by the outliers.
Click View, Residual Tests, White Heteroskedasticity (no cross terms); the p-value is
0.24977, so there is no indication of heteroskedasticity.
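
The estimation and these residual diagnostics can also be run from the command window; a rough sketch (eq2 is an arbitrary equation name; auto, hist and white are assumed to be the equation views for the LM, normality and White tests in your version):

equation eq2.ls lnd c lne lne(-1) lnd(-1) @trend
eq2.auto(2)   ' Breusch-Godfrey serial correlation LM test with 2 lags
eq2.hist      ' histogram and Jarque-Bera normality test
eq2.white     ' White heteroskedasticity test (no cross terms)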
Click View, Coefficient Tests, Redundant Variables, and enter
LNE(-1) @TREND
OK, and you will get

Redundant Variables: LNE(-1) @TREND

F-statistic 1.499934 Probability 0.227198


Log likelihood ratio 3.084352 Probability 0.213915

Test Equation:
Dependent Variable: LND
Method: Least Squares
Date: 11/12/04 Time: 14:52
Sample: 1872 1999
Included observations: 128

Variable Coefficient Std. Error t-Statistic Prob.

C -0.128696 0.014681 -8.766476 0.0000


LNE 0.301918 0.023779 12.69656 0.0000
LND(-1) 0.670366 0.027273 24.57990 0.0000

R-squared 0.996083 Mean dependent var 0.037723


Adjusted R-squared 0.996021 S.D. dependent var 1.322725
S.E. of regression 0.083439 Akaike info criterion -2.106238
Sum squared resid 0.870263 Schwarz criterion -2.039394
Log likelihood 137.7992 F-statistic 15895.29
Durbin-Watson stat 1.847807 Prob(F-statistic) 0.000000

The F-statistic has a p-value of 0.227, so the two variables are jointly insignificant, as
well as being individually insignificant in the previous regression, so we can drop
them. This restricted equation estimates a partial adjustment model. The desired (long-run)
level of log dividends is
$d_t^{*} = \theta_0 + \theta_1 e_t$
dividends adjust part of the way towards it each year,
$\Delta d_t = \lambda (d_t^{*} - d_{t-1}) + \varepsilon_t$
and substituting the first equation into the second gives the estimated form
$d_t = \lambda \theta_0 + \lambda \theta_1 e_t + (1 - \lambda) d_{t-1} + \varepsilon_t$
So the implied estimate of the speed of adjustment is λ = 1 − 0.670366 = 0.329634 and of
the long-run elasticity θ1 = 0.301918/0.329634 = 0.915919. Rather than working them out
by hand, we can use EViews' non-linear estimation option to obtain them directly.
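
If you do want to reproduce these numbers with commands, a sketch (assuming the dynamic equation above was saved as eq2, using eq3 as a name for the restricted equation, and noting that @coefs(i) returns the i-th coefficient in the order the regressors were listed):

eq2.testdrop lne(-1) @trend            ' redundant variables test
equation eq3.ls lnd c lne lnd(-1)      ' restricted equation
scalar lambda = 1 - eq3.@coefs(3)      ' implied speed of adjustment
scalar theta1 = eq3.@coefs(2)/lambda   ' implied long-run elasticity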

ESTIMATE A NON-LINEAR VERSION


Close the equation, click Quick, Estimate Equation again, and type in

D(lnd)=c(1)*(c(2)+c(3)*lne-lnd(-1))

The D(...) first differences the data. This estimates the partial adjustment model,
giving estimates of the long-run parameters and the speed of adjustment directly:
$\Delta d_t = \lambda (\theta_0 + \theta_1 e_t - d_{t-1}) + u_t$
You might get results almost identical to those above

Dependent Variable: D(LND)


Method: Least Squares
Date: 11/05/04 Time: 16:13
Sample(adjusted): 1872 1999
Included observations: 128 after adjusting endpoints
Convergence achieved after 9 iterations
D(LND)=C(1)*(C(2)+C(3)*LNE-LND(-1))
Coefficient Std. Error t-Statistic Prob.
C(1) 0.329634 0.027273 12.08651 0.0000
C(2) -0.390422 0.024387 -16.00940 0.0000
C(3) 0.915917 0.015889 57.64370 0.0000
R-squared 0.567105 Mean dependent var 0.032515
Adjusted R-squared 0.560178 S.D. dependent var 0.125815
S.E. of regression 0.083439 Akaike info criterion -2.106238
Sum squared resid 0.870263 Schwarz criterion -2.039394
Log likelihood 137.7992 Durbin-Watson stat 1.847807

Notice that the R-squared is lower because here we are explaining the change in log
dividends (the growth rate of dividends), not the level of log dividends. The long-run
elasticity of dividends with respect to earnings is 0.91, significantly different from unity,
and the speed of adjustment is 33% a year. Notice that this is the same equation as we had
above, with exactly the same standard error of regression.
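
The non-linear estimation and a test of whether the long-run elasticity is unity can also be run as commands; a minimal sketch (eq4 is an arbitrary equation name):

equation eq4.ls d(lnd)=c(1)*(c(2)+c(3)*lne-lnd(-1))
eq4.wald c(3)=1    ' Wald test that the long-run elasticity equals one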

MULTIPLE MAXIMA
However, you might get

Dependent Variable: D(LND)


Method: Least Squares
Date: 11/12/04 Time: 14:42
Sample (adjusted): 1872 1999
Included observations: 128 after adjustments
Convergence achieved after 2 iterations
D(LND)=C(1)*(C(2)+C(3)*LNE-LND(-1))

Coefficient Std. Error t-Statistic Prob.

C(1) -4.91E-05 0.040166 -0.001221 0.9990


C(2) -433.8815 354883.5 -0.001223 0.9990
C(3) -422.9887 347035.4 -0.001219 0.9990

R-squared 0.061042 Mean dependent var 0.032515


Adjusted R-squared 0.046019 S.D. dependent var 0.125815
S.E. of regression 0.122886 Akaike info criterion -1.331964
Sum squared resid 1.887615 Schwarz criterion -1.265119
Log likelihood 88.24569 Durbin-Watson stat 1.547143

This is clearly rubbish. What has happened is that the likelihood function has multiple
maxima, and when the starting values are set at c(1)=0, c(2)=0, c(3)=0 it converges to this
local maximum. To get the global maximum we need to set other starting values. The
problematic local maximum is quite close to zero, so even starting the coefficients a little
bit positive will solve the problem. To do this type
param c(1) 0.05 c(2) 0.0 c(3) 0.05
in the command window at the top, under the toolbar. With these starting values it will
get to the global maximum. When doing non-linear estimation, try to start with
sensible starting values, using the economic interpretation of the parameters or
preliminary OLS regressions to suggest them.
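
For example, the starting values can be set and the equation re-estimated in one go (eq5 is an arbitrary name for the new equation object):

param c(1) 0.05 c(2) 0.0 c(3) 0.05
equation eq5.ls d(lnd)=c(1)*(c(2)+c(3)*lne-lnd(-1))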

ESTIMATE AN ARIMA(1,1,1)
Estimate an ARIMA(1,1,1) model for log stock prices.
Click Quick, Estimate Equation, set the sample to 1871 1990 and type in
D(LNSP) C AR(1) MA(1)
Notice that both the AR (t=-3.06) and MA (t=5.50) terms are significant. Click
Forecast on the equation box, set the forecast period to 1990 2000 and look at the graph.
It will save the forecast as LNSPF. Close the equation and graph LNSP and LNSPF.
This is clearly a terrible forecast: the actual and predicted values steadily diverge over
the 1990s.
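
A command-window version of these steps might look like the sketch below (eqar is an arbitrary equation name; the forecast proc is assumed to produce a dynamic forecast of the level LNSP, as the dialog does by default):

smpl 1871 1990
equation eqar.ls d(lnsp) c ar(1) ma(1)
smpl 1990 2000
eqar.forecast lnspf
smpl 1871 2000
line lnsp lnspf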

ESTIMATE A VECM FOR REAL INTEREST RATES


Use Quick, Generate Series and type in
INF=100*(PPI(+1)-PPI)/PPI
to get a series for inflation. We define it with the lead of PPI because PPI is the January figure,
so the change from this January to next January measures inflation during the year. Graph the
data and note that PPI inflation is more volatile than CPI inflation.
Use Quick, Estimate VAR and enter in the box for endogenous variables
R INF
Set the sample to 1950 1999.
Click the Vector Error Correction button and accept the default lags. Click the Cointegration tab
at the top and leave the treatment of intercepts and trends as option 3; if there might have been a
trend in the cointegrating vector you would choose option 4. There is also a box that
allows you to impose restrictions on the adjustment coefficients A or the cointegrating vector
B.
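
A command-line sketch of the same estimation (vec1 is an arbitrary name; the first option of the ec proc is assumed to select the deterministic-trend case, with c corresponding to dialog option 3, and the second option is the cointegrating rank):

smpl 1950 1999
var vec1.ec(c,1) 1 2 r inf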

Click OK and you will get the output below. The coefficient on inflation in the cointegrating
vector is almost exactly -1, as it should be (I chose a sample that gave a nice result), and we
cannot reject the hypothesis that the coefficient is 1: t = (1.04-1)/0.32 = 0.125. From the View,
Representations option you can find out that the long-run real interest rate is 2.76. Below the
cointegrating equation are the equations for the change in the interest rate (first column) and
the change in the inflation rate (second column). The adjustment coefficient on CointEq1 in the
interest rate equation is negative, as it should be, and quite rapid at 21% a year; the adjustment
coefficient in the inflation equation is positive, as it should be, but quite small at 5% a year and
insignificant. All the adjustment is being done by interest rates. Lagged inflation is insignificant
in the interest rate equation, but lagged interest rates are significant in the inflation equation.

Vector Error Correction Estimates


Date: 11/26/04 Time: 11:24
Sample (adjusted): 1950 1999
Included observations: 50 after adjustments
Standard errors in ( ) & t-statistics in [ ]

Cointegrating Eq: CointEq1


R(-1) 1.000000

INF(-1) -1.040971
(0.32255)
[-3.22729]

C -2.765076

Error Correction: D(R) D(INF)

CointEq1 -0.208518 0.048463


(0.07251) (0.20009)
[-2.87589] [ 0.24221]

D(R(-1)) -0.046797 0.044142


(0.13040) (0.35985)
[-0.35888] [ 0.12267]

D(R(-2)) -0.298532 -0.954033


(0.12601) (0.34773)
[-2.36916] [-2.74358]

D(INF(-1)) -0.032239 -0.452509


(0.07111) (0.19623)
[-0.45339] [-2.30601]

D(INF(-2)) -0.002217 -0.415934


(0.05242) (0.14467)
[-0.04229] [-2.87504]

C 0.109011 0.136277
(0.24247) (0.66912)
[ 0.44959] [ 0.20367]

R-squared 0.339613 0.393428


Adj. R-squared 0.264569 0.324500
Sum sq. resids 127.9183 974.1651
S.E. equation 1.705061 4.705328
F-statistic 4.525527 5.707769
Log likelihood -94.43115 -145.1859
Akaike AIC 4.017246 6.047435
Schwarz SC 4.246689 6.276878
Mean dependent 0.077400 0.190440
S.D. dependent 1.988242 5.725023

Determinant resid covariance (dof adj.) 59.84696


Determinant resid covariance 46.34549
Log likelihood -237.7970
Akaike information criterion 10.07188
Schwarz criterion 10.60724
Click View, Cointegration Test and you will get the output below. There are two tests: the
trace statistic, which is more reliable, and the maximum eigenvalue statistic. Both indicate no
cointegration at the 5% level. The output then gives the Johansen estimates of the cointegrating
vector based on the eigenvector identification, and the corresponding adjustment coefficients;
then the normalized estimates treating the first variable as the dependent variable, which are
the same as the ones given above. On the cointegration test box you can change the size of
the test; at the 10% level the trace test shows 2 cointegrating vectors. You can also get the tests
summarized for all the options about trend and intercept. The results are often sensitive to
these assumptions, but in this case there is no cointegration at the 5% level under all the
assumptions.

Date: 11/26/04 Time: 11:44


Sample (adjusted): 1950 1999
Included observations: 50 after adjustments
Trend assumption: Linear deterministic trend
Series: R INF
Lags interval (in first differences): 1 to 2

Unrestricted Cointegration Rank Test (Trace)

Hypothesized Trace 0.05


No. of CE(s) Eigenvalue Statistic Critical Value Prob.**

None 0.175343 14.35576 15.49471 0.0737


At most 1 * 0.090016 4.716393 3.841466 0.0299

Trace test indicates no cointegration at the 0.05 level


* denotes rejection of the hypothesis at the 0.05 level
**MacKinnon-Haug-Michelis (1999) p-values

Unrestricted Cointegration Rank Test (Maximum Eigenvalue)

Hypothesized Max-Eigen 0.05


No. of CE(s) Eigenvalue Statistic Critical Value Prob.**

None 0.175343 9.639367 14.26460 0.2367


At most 1 * 0.090016 4.716393 3.841466 0.0299

Max-eigenvalue test indicates no cointegration at the 0.05 level


* denotes rejection of the hypothesis at the 0.05 level
**MacKinnon-Haug-Michelis (1999) p-values

Unrestricted Cointegrating Coefficients (normalized by b'*S11*b=I):

R INF
-0.300688 0.313007
0.221839 0.137724

Unrestricted Adjustment Coefficients (alpha):

D(R) 0.693471 -0.163405


D(INF) -0.161174 -1.320153

1 Cointegrating Equation(s): Log likelihood -237.7970

Normalized cointegrating coefficients (standard error in parentheses)


R INF
1.000000 -1.040971
(0.32255)

Adjustment coefficients (standard error in parentheses)


D(R) -0.208518
(0.07251)
D(INF) 0.048463
(0.20009)

Click View, Cointegration Graph and it will plot the cointegrating relation, or error: the deviation
of the real interest rate from its long-run value. There are long periods away from equilibrium,
which is why we get little evidence of cointegration.

Click View, Lag Structure, Granger Causality and you will get the output below, which shows
that inflation is not Granger causal for interest rates, but interest rates are Granger causal for
inflation. The null hypothesis is that the coefficients are zero, i.e. no Granger causality.

VEC Granger Causality/Block Exogeneity Wald Tests


Date: 11/26/04 Time: 11:56
Sample: 1950 2000
Included observations: 50

Dependent variable: D(R)

Excluded Chi-sq Df Prob.

D(INF) 0.273535 2 0.8722

All 0.273535 2 0.8722

Dependent variable: D(INF)

Excluded Chi-sq Df Prob.

D(R) 7.527378 2 0.0232

All 7.527378 2 0.0232


Under the Lag Structure option you can also get a graph of the inverse roots of the lag
polynomial. The VAR has three lags and there are two variables, so there are six roots, one of
which lies on the unit circle, corresponding to the stochastic trend. You can also test for
excluding lags under this option. You can do the usual diagnostic tests under the Residual
Tests option of View. There does not seem to be any serial correlation, but the interest rate
equation shows non-normality. Under View, Residuals you can get the covariance or
correlation matrix of the residuals; the correlation between the errors of the two equations is
0.26. You can also get impulse response functions, which show the effect on the system of
shocks to each variable.
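
Some of these views can also be requested as commands on the VAR object; a sketch (assuming the estimated VEC was saved as vec1 and that these view names are available in your EViews version):

vec1.arroots(graph)    ' inverse roots of the AR characteristic polynomial
vec1.residcor          ' residual correlation matrix
vec1.impulse(10)       ' impulse response functions over 10 periods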
