
KØBENHAVNS UNIVERSITET

Assignment 1 - The German Recovery Following The Financial Crisis

September 3, 2023
Contents
1 Introduction

2 Econometric theory

3 Description of the data

4 Empirical analysis

5 Discussion

6 Conclusion

1 Introduction
In this report we estimate a model for the yearly growth rate of German GDP, followed by a forecast of the recovery over the period 2009(2)-2019(4). Our analysis uses basic univariate time series models to investigate whether economic theory is consistent with the data we are given, and to assess the accuracy of the forecasts made from the models. Throughout the assignment, the analysis rests on plausible assumptions and hypothesis testing to select the best model.

After selecting the best specification for both the AR model and the ARMA model, we find that the AR model provides the most promising forecast. Furthermore, the forecast suggests that the growth rate of German GDP recovers to its rate from before the 2009 financial crisis.

2 Econometric theory
For the purpose of forecasting the German recovery from the given time series data, the natural choice of approach is a univariate time series model; the candidates are the AR model and the ARMA model. Our point of departure is to estimate an AR(p) model, where the number of lags, p, is chosen by inference and graphical analysis.

For this section of the assignment, the fourth difference ∆4 log(GDP_t) is our dependent variable, chosen from the economic observation that GDP is seasonally affected (this will be elaborated in the following sections). For the estimation of the time series models, two main assumptions are assumed to hold:
• Stationarity

• Weak dependence
The stationarity assumption implies that the mean, variance and autocovariances of the yearly growth rate are constant over time. For our data set, the following should therefore hold:

E{∆4 log(GDP_t)} = µ < ∞
V{∆4 log(GDP_t)} = γ_0 < ∞    (1)
cov{∆4 log(GDP_t), ∆4 log(GDP_{t−k})} = γ_k ,   k = 1, 2, . . .

For this assignment, we assess whether the stationarity assumption holds by a graphical analysis of the data over time.

The weak dependence assumption implies that the dependence between ∆4 log(GDP_t) and ∆4 log(GDP_{t−h}) dies out as h grows, so the two become approximately independent as h → ∞. In other words, the effect of a given lag becomes less important the further back in time it lies.
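As an illustration (our own sketch, not part of the assignment), the geometric decay of autocorrelations in a simulated AR(1) process shows what weak dependence looks like in practice:

```python
# Illustration (not from the assignment): for y_t = theta*y_{t-1} + e_t with
# |theta| < 1, corr(y_t, y_{t-h}) = theta**h, which dies out as h grows --
# weak dependence in its simplest form.
import random

random.seed(0)
theta, T = 0.8, 50_000
y = [0.0]
for _ in range(T):
    y.append(theta * y[-1] + random.gauss(0.0, 1.0))
y = y[1:]  # drop the start-up value

def sample_autocorr(series, h):
    """Sample autocorrelation at lag h."""
    n = len(series)
    m = sum(series) / n
    num = sum((series[t] - m) * (series[t - h] - m) for t in range(h, n))
    den = sum((x - m) ** 2 for x in series)
    return num / den

for h in (1, 5, 10, 20):
    print(h, round(sample_autocorr(y, h), 3))  # shrinks roughly like 0.8**h
```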

We can define the AR(p) model, with dependent variable ∆4 log(GDP_t), as:

∆4 log(GDP_t) = c + θ_1 ∆4 log(GDP_{t−1}) + . . . + θ_p ∆4 log(GDP_{t−p}) + ϵ_t

For our model we assume that the white noise process ϵ_t is i.i.d.¹ and Gaussian², with mean zero and constant variance: ϵ_t ∼ N(0, σ²). For the derivation of our model we assume stationarity and weak dependence.

¹ Independent and identically distributed.
² An assumption supported by macro-econometric theory.
Given the autoregressive polynomial:

θ(z) = 1 − θ_1 z − θ_2 z² − . . . − θ_p z^p ,   z ∈ C

the assumptions of stationarity and weak dependence imply that the autoregressive polynomial is invertible, and that the roots of the characteristic equation lie outside the unit circle (equivalently, the inverse roots are smaller than one in absolute value).
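The root condition can be checked numerically. The sketch below (a hypothetical helper of our own, not part of the assignment's OxMetrics workflow) computes the roots of the autoregressive polynomial and tests whether they all lie outside the unit circle:

```python
# Hypothetical helper (not the OxMetrics routine): compute the roots of
# theta(z) = 1 - theta_1 z - ... - theta_p z^p and check that all of them
# lie outside the unit circle.
import numpy as np

def ar_roots(thetas):
    """Roots of the AR polynomial; thetas = [theta_1, ..., theta_p]."""
    # np.roots expects coefficients from the highest power of z down to z^0.
    coeffs = [-t for t in reversed(thetas)] + [1.0]
    return np.roots(coeffs)

def is_stationary(thetas):
    return all(abs(z) > 1.0 for z in ar_roots(thetas))

print(is_stationary([0.5]))        # root z = 2             -> True
print(is_stationary([1.2]))        # root z = 1/1.2 < 1     -> False
print(is_stationary([0.5, -0.3]))  # complex roots, |z| > 1 -> True
```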

For the ARMA model, the assumptions of stationarity and weak dependence imply the same characteristics as for the AR model, since the ARMA model is composed of autoregressive and moving average terms, and its stationarity properties are governed by the autoregressive part.

We can define our ARMA(p, q) model as:

∆4 log(GDP_t) = c + θ_1 ∆4 log(GDP_{t−1}) + . . . + θ_p ∆4 log(GDP_{t−p}) + ϵ_t + α_1 ϵ_{t−1} + . . . + α_q ϵ_{t−q}    (2)

The number of terms to include in the model is determined by inference, specifically LR tests, comparing different values of p and q. It is important to account for cancelling roots, which can arise when too many AR and MA terms are included.

The approach used to select the number of lags, p, in the AR(p) model consists of t-tests, whose purpose is to remove insignificant terms and make the model more parsimonious.

For the t-test we define the null-hypothesis and alternative-hypothesis:

H0 : θ = 0

HA : θ ̸= 0

Under normality, the t-statistic is asymptotically standard normal, so the two-sided critical value at significance level α = 0.05 is 1.96.
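The decision rule can be sketched as follows, using the first two AR(5) estimates from Table 1 as illustration (the helper name is ours):

```python
# Sketch of the t-test decision rule (helper name ours), illustrated with the
# first two AR(5) estimates and standard errors from Table 1.
def keep_term(theta_hat, se, crit=1.96):
    """Reject H0: theta = 0 (i.e. keep the lag) when |theta_hat / se| > crit."""
    return abs(theta_hat / se) > crit

print(keep_term(0.865, 0.131))  # |t| ~ 6.6 -> True, keep the first lag
print(keep_term(0.091, 0.169))  # |t| ~ 0.5 -> False, drop the second lag
```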

To select the correct number of lags in the ARMA(p, q) model, our point of departure is to start with a large model and reduce it. Using this approach, we can apply the likelihood ratio (LR) test to compare nested models. The LR statistic is defined as:

LR = −2 log ( L(ARMA_Reduced) / L(ARMA_Full) )

where L(ARMA_Full) and L(ARMA_Reduced) are the two likelihoods, and under the null the LR statistic follows a χ²(j) distribution, with j degrees of freedom equal to the number of restrictions. Depending on the number of lags, the general H0 and HA are:

H0 : ARMA_Full = ARMA_Reduced

HA : ARMA_Full ≠ ARMA_Reduced

The null hypothesis states that there is no difference between the two models, whereas the alternative indicates that ARMA_Full is the better choice. As the critical value depends on the degrees of freedom, we only fix the significance level, α = 0.05.
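A minimal sketch of the LR test, with hypothetical log-likelihood values (not the assignment's estimates) and a closed-form chi-squared tail probability, which is valid for even degrees of freedom only:

```python
# Sketch of the LR test with hypothetical log-likelihoods.  chi2_sf uses the
# closed-form tail probability, which holds for even degrees of freedom only.
import math

def chi2_sf(x, df):
    """P(X > x) for X ~ chi-squared(df), df even."""
    assert df % 2 == 0, "closed form requires even df"
    term, total = 1.0, 1.0
    for i in range(1, df // 2):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

def lr_test(loglik_full, loglik_reduced, restrictions):
    """LR = 2(l_full - l_reduced), compared with chi-squared(restrictions)."""
    lr = 2.0 * (loglik_full - loglik_reduced)
    return lr, chi2_sf(lr, restrictions)

lr, p = lr_test(240.0, 236.5, 2)  # hypothetical log-likelihoods
print(lr, round(p, 3))            # reject the reduction when p < 0.05
```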

To test for normality, we plot the residuals and perform a graphical analysis to check for a Gaussian distribution in our model. In the event of outliers, we can include dummy variables to capture their effect.

To test for autocorrelation in our AR model, we use a no-autocorrelation test based on the auxiliary regression:

ϵ̂_t = x′_t δ + γ_1 ϵ̂_{t−1} + . . . + γ_h ϵ̂_{t−h} + u_t

where the term x′_t contains the explanatory variables and is included because they may be correlated with the lagged residuals. We define the null and alternative hypotheses as follows:

H0 : γ_1 = . . . = γ_h = 0

HA : some γ_j ≠ 0

The null hypothesis states that the error terms are uncorrelated with mean zero. An alternative way to test for autocorrelation is to inspect the ACF and PACF graphically in OxMetrics.
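A Breusch-Godfrey-style version of this test can be sketched as follows; the LM statistic T·R² from the auxiliary regression is approximately χ²(h) under H0. The helper below regresses the residuals on their own lags only and is our own illustration, not the exact OxMetrics routine:

```python
# Sketch (our Breusch-Godfrey-style illustration, not the OxMetrics routine):
# regress the residuals on h of their own lags and form LM = T * R^2,
# approximately chi-squared(h) under H0 of no autocorrelation.
import numpy as np

def lm_autocorr_stat(resid, h):
    resid = np.asarray(resid, dtype=float)
    T = len(resid) - h
    y = resid[h:]
    # Regressors: a constant plus lags 1..h of the residual series.
    X = np.column_stack([np.ones(T)] + [resid[h - j:-j] for j in range(1, h + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    r2 = 1.0 - (u @ u) / ((y - y.mean()) @ (y - y.mean()))
    return T * r2  # compare with the chi-squared(h) critical value

rng = np.random.default_rng(0)
print(lm_autocorr_stat(rng.standard_normal(500), 4))  # small under H0
```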

Autocorrelation testing for the ARMA model uses a Portmanteau (Ljung-Box) test, with test statistic:

ξ_LB = T (T + 2) Σ_{j=1}^{h} ρ̂_j² / (T − j)

where the sample autocorrelation ρ̂_j is defined as:

ρ̂_j = Σ_{t=j+1}^{T} (ϵ̂_t − ϵ̄)(ϵ̂_{t−j} − ϵ̄) / Σ_{t=1}^{T} (ϵ̂_t − ϵ̄)² ,   with ϵ̄ = T^{−1} Σ_{t=1}^{T} ϵ̂_t

We can then define the null and alternative hypotheses as:

H0 : no residual autocorrelation, so ξ_LB →d χ²(h − (p + q))

HA : the residual autocorrelations are non-zero, so ξ_LB does not follow a χ²(h − (p + q)) distribution

With the Portmanteau test we check whether the residual autocorrelations in the time series are different from zero; under H0 the test statistic is approximately chi-squared distributed.
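The Ljung-Box statistic above can be computed directly; a minimal sketch (the function name is ours):

```python
# Minimal sketch of the Ljung-Box statistic defined above (function name ours):
# xi_LB = T(T+2) * sum_{j=1}^{h} rho_j^2 / (T - j).
import random

def ljung_box(resid, h):
    T = len(resid)
    mean = sum(resid) / T
    denom = sum((e - mean) ** 2 for e in resid)
    stat = 0.0
    for j in range(1, h + 1):
        rho_j = sum((resid[t] - mean) * (resid[t - j] - mean)
                    for t in range(j, T)) / denom
        stat += rho_j ** 2 / (T - j)
    return T * (T + 2) * stat

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(300)]
print(round(ljung_box(noise, 8), 2))  # compare with the chi2(h - (p+q)) critical value
```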

To evaluate the forecasts, our approach consists of a graphical analysis, in which we check whether the empirical data stay within the error bars, and a comparison with the Simple forecast given in our data set. Another option for evaluating a forecast is to calculate the average forecast error and see whether it is systematically biased.
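The average forecast error can be sketched as follows, comparing hypothetical growth rates (not the actual data) with the flat Simple forecast of 0.015:

```python
# Sketch: average forecast error; a mean far from zero indicates a
# systematically biased forecast.  Growth rates below are hypothetical.
def mean_forecast_error(actual, forecast):
    errors = [a - f for a, f in zip(actual, forecast)]
    return sum(errors) / len(errors)

actual = [0.021, 0.037, 0.018, 0.004, 0.012]  # hypothetical yearly growth rates
simple = [0.015] * len(actual)                # the flat Simple forecast
print(round(mean_forecast_error(actual, simple), 4))  # -> 0.0034
```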

3 Description of the data


The dataset used is from the FRED database maintained by the Federal Reserve Bank of St. Louis and contains quarterly data on real gross domestic product in Germany for the period 1991(1)-2022(2). The dataset provides two transformations, log(GDP) and, more interestingly, D4log(GDP), the yearly growth rate, defined as:

D4 log(GDP) = ∆4 log(GDP_t) = log(GDP_t) − log(GDP_{t−4})

D4log(GDP) is preferable to Dlog(GDP), the quarterly difference, since studies, e.g. from the European Central Bank, show that seasonal variation significantly affects quarterly GDP³. The dataset also includes the Simple forecast, which states that the yearly growth rate from 2009(2) onwards is approximately 0.015.
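The transformation can be sketched directly from a series of GDP levels (the numbers below are hypothetical, not the FRED data):

```python
# Sketch of the transformation with hypothetical GDP levels (not the FRED data):
# D4 log(GDP)_t = log(GDP_t) - log(GDP_{t-4}) for a quarterly series.
import math

def d4_log(series):
    """Fourth (yearly) log-difference of a quarterly series of levels."""
    logs = [math.log(x) for x in series]
    return [logs[t] - logs[t - 4] for t in range(4, len(logs))]

gdp = [100.0, 101.0, 102.5, 103.0, 101.5, 102.8, 104.0, 104.9]  # 8 quarters
growth = d4_log(gdp)
print([round(g, 4) for g in growth])  # 4 yearly growth observations
```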

Figure 1: log(GDP) Figure 2: D4log(GDP)

In figure 1 we observe that GDP is trending upwards, and from figure 2 we see that D4log(GDP) appears to be a stationary variable. In both figures we observe an outlier in 2009, corresponding to the fact that Germany was hit by the financial crisis; this is something we will have to control for.

Figure 3: Autocorrelation function (ACF) and partial autocorrelation function (PACF)

An analysis of the PACF shows that the first and fourth lags appear significant; the fifth and eighth lags could also be significant. When working with ARMA models it is advisable to start with a moderately sized model, so this analysis suggests starting with 4-5 lags.

4 Empirical analysis
For the derivation of the different estimates we have used OxMetrics. As mentioned previously, we use univariate time series models to find the model that best predicts the German recovery.

For the estimation of the AR(p) model we let p = 5; as explained previously, this captures the effect of a whole year, so we can see the quarterly effects on the yearly growth rate. In addition, it is important to emphasize that, given the outlier in 2009(1), we are at risk of rejecting normality, and we therefore add a dummy variable for 2009(1) to our model. As our approach is to derive a model combining fit and parsimony, we remove any insignificant terms. The estimation with p = 5 is made on the sample 1993(1)-2009(1).

³ https://www.ecb.europa.eu/pub/pdf/other/mb200608_focus04.en.pdf

Table 1: Estimation results AR.

Variable\Method AR(5) AR
Constant 0.004 0.005
(0.009) (0.002)
D4log(GDP )1 0.865 0.882
(0.131) (0.077)
D4log(GDP )2 0.091** -
(0.169) -
D4log(GDP )3 -0.054** -
(0.166) -
D4log(GDP )4 -0.314* -0.212
(0.169) (0.078)
D4log(GDP )5 0.122** -
(0.131) -
dummy20091 -0.055 -0.056
(0.009) (0.009)

AR 1-4 0.74 0.47
Normality Test 6.43 4.78
(0.04) (0.09)
R2 0.80 0.80
N 64 64
Notes: The values in parentheses are the non-robust standard errors.

From the table it can be seen that D4log(GDP)_2, D4log(GDP)_3, D4log(GDP)_4 and D4log(GDP)_5 are insignificant at the 5% level, though it is important to note that at the 10% level D4log(GDP)_4 is significant, which could be due to seasonality. On macro-theoretical grounds we therefore retain D4log(GDP)_4, so that our AR(5) model reduces to an "AR(2)" model with two lag terms, at lags 1 and 4:

D4log(GDP)_t = c + θ_1 D4log(GDP)_{t−1} + θ_2 D4log(GDP)_{t−4} + ϵ_t

The AR(2) estimation shows that, with 2 degrees of freedom, we cannot reject no-autocorrelation, as the p-value is 74%, which is relatively high. The estimated model looks well specified, as we cannot reject the normality test either.

Figure 4: Residuals

From the graph it can be seen that the residual distribution approximately follows a Gaussian distribution; there may be a few outliers, but the results from OxMetrics do not reject normality, which is a good indication.

For the derivation of the ARMA model, our choice of p and q is determined from the analysis of the PACF, which shows that the fourth lag is significant; thus we set p = 4 and q = 4, and our point of departure is:

∆4 log(GDP_t) = c + θ_1 ∆4 log(GDP_{t−1}) + θ_2 ∆4 log(GDP_{t−2}) + θ_3 ∆4 log(GDP_{t−3}) + θ_4 ∆4 log(GDP_{t−4}) + ϵ_t + α_1 ϵ_{t−1} + α_2 ϵ_{t−2} + α_3 ϵ_{t−3} + α_4 ϵ_{t−4}    (3)
As stated previously our approach will consist of a LR-test, where we compare the ARMA models:
ARMA(4,4), ARMA(4,2), ARMA(2,2) and ARMA(2,1)

Table 2: Information criteria.

              Log-likelihood    SC       HQ       AIC
ARMA(4,4)     238.07           -6.23    -6.44    -6.58<
ARMA(4,2)     232.70           -6.19    -6.37    -6.48
ARMA(2,2)     232.95           -6.32    -6.32    -6.55
ARMA(2,1)     232.94           -6.38<   -6.50<   -6.58
Note: < marks the best (smallest) value of each criterion.

From the LR tests done in OxMetrics, it can be seen from the table that the SC and HQ information criteria point to ARMA(2,1) as our best choice. We then specify our ARMA(2,1) model as:

∆4 log(GDP_t) = c + θ_1 ∆4 log(GDP_{t−1}) + θ_2 ∆4 log(GDP_{t−2}) + ϵ_t + α_1 ϵ_{t−1}    (4)

Table 3: Estimation results ARMA

Variable\Method ARMA(2,1)
Constant 0.014
(0.008)
AR1 1.776
(0.063)
AR2 -0.851
(0.064)
M A1 -1.000
(0.044)
dummy20091 -0.055
(0.008)

Normality Test 1.889
(0.390)
Portmanteau 6.13
(0.898)
N 69

The OxMetrics output shows that there is no autocorrelation, as we cannot reject H0, and likewise we cannot reject normality. We can therefore conclude that our ARMA model is quite well specified.

We create a forecast for the chosen ARMA model and for the chosen AR model:

Figure 5: Forecast of 43 periods using the AR-model

Figure 6: Forecast of 43 periods using ARMA-model

Here we can clearly see that both models converge towards approximately 0.015, which is the value of the Simple forecast. We can also see that the actual values more or less stay within each model's confidence bands, which suggests a good fit for both models.

5 Discussion
When graphically analysing our forecasts in figures 5 and 6, made from the AR and ARMA models respectively, we find that the forecast from the AR model performs best, since it is considerably better at following the actual movements in German GDP. We also observe that the Simple forecast is in fact a good forecast in the long run, as the average growth rate still appears to be approximately 0.015.

In our approach we found that including the dummy variable mattered, since the model without it failed both the normality test and the test for homoscedasticity.

As is well known, the AR model is weak at long-horizon forecasting, and we can observe this in figure 5, where the forecast follows the actual observations much better in the first 5 years than in the later years. One can argue that the forecast performs well in this scenario, since the German GDP growth rate did return to approximately 0.015, but we do see the weakness of using an AR model for long-horizon forecasting.

6 Conclusion
We have arrived at two adequate models, one AR model and one ARMA model. To find the best AR model we used a combination of normality tests and AR 1-4 autocorrelation tests to select the most suitable specification. For the ARMA model, a mix of normality tests and the Portmanteau test was used. Of these, we found that the AR model produced the most promising forecast. Furthermore, this model predicts that German GDP returns to its former average growth rate from 1992(1)-2007(4).
