
ECMM703 Time Series Modelling

Time Series Modelling in R — worked examples

These examples require the following R packages, which can be installed from CRAN: fArma, fGarch, fBasics,
fImport, fSeries, robustbase, fCalendar, fEcofin, fUtilities, RUnit, zoo.

1. ARMA Modelling: Simulating Processes


ARMA time series can be simulated in R using the function armaSim() from the fArma package. Its first argument
model is a list of parameters specifying an ARMA model and the argument n is the number of time series points to
be generated. The components of the model list are ar (a vector of the autoregressive coefficients), ma (a vector of
moving average coefficients), and (optionally) order (a vector specifying the order of the ARMA(p,q) process).
Note that there are several additional optional arguments. The value returned by armaSim() is a simulated
univariate time series of the requested length.
Simulate 1000 values from an AR(1) process with φ = −0.9, plot the results, and overlay the theoretical ACF on the sample ACF:
library(fArma)
lagmax <- 30; maxlen <- 1000; phi <- -0.9
par(mfrow=c(1,2))
ts.sim <- armaSim(n = maxlen, model = list(order = c(1,0), ar = phi))
plot(ts.sim, main = paste("AR(1), phi=", phi), ylab = "")
acf(ts.sim, lag.max = lagmax, main = paste("AR(1), phi=", phi))
xx <- seq(0, lagmax)
points(xx, phi^xx)

[Figure: the simulated AR(1) series with φ = −0.9 (left) and its sample ACF to lag 30, with the theoretical autocorrelations φ^k overlaid as points (right).]
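The points overlaid on the sample ACF come from the fact that the theoretical ACF of a stationary AR(1) process is ρ(k) = φ^k. A short Python sketch (offered as a cross-check, not part of the R session) shows that with φ = −0.9 the autocorrelations alternate in sign and decay slowly:

```python
# Theoretical ACF of an AR(1) process: rho(k) = phi^k.
# With phi = -0.9 the values alternate in sign and shrink slowly in magnitude.
phi = -0.9
rho = [phi ** k for k in range(31)]

print([round(r, 3) for r in rho[:6]])
```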

Simulate 1000 values from an ARMA(1,1) process with φ = 0.5 and θ = 0.8:
library(fArma)
x <- armaSim(n=1000, model=list(order=c(1, 1), ar=0.5, ma=0.8))
ts.plot(x, main="ARMA(1,1): +0.5, +0.8")

[Figure: the simulated ARMA(1,1) series (left) and the true ACF of the process to lag 12 (right).]

The function armaTrueacf() computes the theoretical autocorrelation function of an ARMA(p,q) model up to a specified
number of lags. So we can calculate and plot the theoretical ACF for the series generated above by:
armaTrueacf(model=list(order=c(1, 1), ar=0.5, ma=0.8), lag.max=12, doplot=T)
lag acf
0 0 1.0000000000
1 1 0.7459016393
2 2 0.3729508197
3 3 0.1864754098
4 4 0.0932377049
5 5 0.0466188525
6 6 0.0233094262
7 7 0.0116547131
8 8 0.0058273566
9 9 0.0029136783
10 10 0.0014568391
11 11 0.0007284196
12 12 0.0003642098
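For an ARMA(1,1) process these values have a closed form: ρ(1) = (1 + φθ)(φ + θ)/(1 + 2φθ + θ²) and ρ(k) = φ ρ(k−1) for k ≥ 2. The following Python sketch (a cross-check of the table above, not part of the R session) reproduces the numbers:

```python
# Closed-form ACF of an ARMA(1,1) process with phi = 0.5, theta = 0.8:
#   rho(1) = (1 + phi*theta) * (phi + theta) / (1 + 2*phi*theta + theta^2)
#   rho(k) = phi * rho(k-1) for k >= 2
phi, theta = 0.5, 0.8

rho = [1.0, (1 + phi * theta) * (phi + theta) / (1 + 2 * phi * theta + theta ** 2)]
for k in range(2, 13):
    rho.append(phi * rho[k - 1])

for k, r in enumerate(rho[:4]):
    print(k, round(r, 10))
```

Note that beyond lag 1 the ACF simply halves at each step, exactly as in the armaTrueacf() output.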

2. ARMA Modelling: Stationarity and Invertibility


The function armaRoots() from the fArma package can be used to compute and display the roots of an AR or an
MA characteristic polynomial. It takes as its argument a numeric vector of the coefficients of the characteristic
polynomial and returns a three-column data frame containing the real part, the imaginary part and the modulus of
each root. The number of rows corresponds to the number of coefficients.
So we may use this function to find the roots of the polynomials (1 − 1.6B − 0.4B²), (1 − 0.5B + 0.1B²), and
(1 − 0.5B + 0.9B² − 0.1B³ − 0.5B⁴).
par(mfrow=c(3,2))
armaRoots(c(1.6,0.4))

re im dist
1 0.5495 0 0.5495
2 -4.5495 0 4.5495

armaRoots(c(0.5,-0.1))

re im dist
1 2.5 1.9365 3.1623
2 2.5 -1.9365 3.1623

armaRoots(c(0.5, -0.9, 0.1, 0.5))
par(mfrow=c(1,1))

re im dist
1 0.1275 0.8875 0.8966
2 0.1275 -0.8875 0.8966
3 -1.8211 0.0000 1.8211
4 1.3662 0.0000 1.3662

The polynomial in the first example has two real roots, one inside and the other outside the unit circle. Thus if
this were an MA (AR) polynomial the model would be noninvertible (nonstationary). The polynomial in the second
example has two complex roots, both outside the unit circle, so if this were an MA (AR) polynomial the model
would be invertible (stationary). In the third example two of the four roots lie inside the unit circle, implying
noninvertibility in the case of an MA model and nonstationarity in the case of an AR model.
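The reported moduli can be double-checked with the quadratic formula, taking armaRoots(c(c1, c2)) to solve 1 − c1·B − c2·B² = 0. A Python cross-check (not part of the R session) for the first two polynomials:

```python
import cmath

# Roots of the characteristic polynomial 1 - c1*B - c2*B^2 = 0,
# i.e. a*B^2 + b*B + c = 0 with a = -c2, b = -c1, c = 1.
def char_roots(c1, c2):
    a, b, c = -c2, -c1, 1.0
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

mod1 = sorted(abs(r) for r in char_roots(1.6, 0.4))    # approx [0.5495, 4.5495]
mod2 = sorted(abs(r) for r in char_roots(0.5, -0.1))   # both approx 3.1623

print(mod1, mod2)
```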

[Figure: for each of the three polynomials, the polynomial plotted as a function of B (left) and its roots relative to the unit circle (right).]

3. ARMA Modelling: Estimating ACF and PACF


The function acf() estimates and displays the autocorrelation and (optionally) partial autocorrelation functions of an
observed series. The key argument is an observed time series vector; if type is set to "partial" a PACF is
produced, and lag.max controls the maximum lag. The returned value is an object containing the autocorrelation
function estimates.
We can use acf() to plot the sample autocorrelation function and the sample partial autocorrelation function for
various AR, MA and ARMA models as follows:
par(mfrow=c(4,3))
x = armaSim(n=1000, model=list(order=c(1, 0), ar=0.5))
plot(1:length(x), x, type="l", main="AR(1): +0.5")
acf(x, lag.max=12)
acf(x, lag.max=12, type="partial")
x = armaSim(n=1000, model=list(order=c(1, 0), ar=-0.5))
plot(1:length(x), x, type="l", main="AR(1): -0.5")
acf(x, lag.max=12)
acf(x, lag.max=12, type="partial")
x = armaSim(n=1000, model=list(order=c(2, 0), ar=c(0.5,0.3)))
plot(1:length(x), x, type="l", main="AR(2): +0.5, +0.3")
acf(x, lag.max=12)
acf(x, lag.max=12, type="partial")
x = armaSim(n=1000, model=list(order=c(2, 0), ar=c(-0.5,0.3)))
plot(1:length(x), x, type="l", main="AR(2): -0.5, +0.3")
acf(x, lag.max=12)
acf(x, lag.max=12, type="partial")

[Figure: simulated series, sample ACF and sample PACF (left to right) for the four models above: AR(1) with φ = +0.5, AR(1) with φ = −0.5, AR(2) with (0.5, 0.3) and AR(2) with (−0.5, 0.3).]

and for some more:


x = armaSim(n=1000, model=list(order=c(0, 1), ar=0, ma=0.8))
plot(1:length(x), x, type="l", main="MA(1): +0.8")
acf(x, lag.max=12)
acf(x, lag.max=12, type="partial")
x = armaSim(n=1000, model=list(order=c(0, 1), ar=0, ma=-0.8))
plot(1:length(x), x, type="l", main="MA(1): -0.8")
acf(x, lag.max=12)
acf(x, lag.max=12, type="partial")
x = armaSim(n=1000, model=list(order=c(1, 1), ar=0.5, ma=-0.8))
plot(1:length(x), x, type="l", main="ARMA(1,1): 0.5, -0.8")
acf(x, lag.max=12)
acf(x, lag.max=12, type="partial")
x = armaSim(n=1000, model=list(order=c(1, 1), ar=-0.5, ma=-0.8))
plot(1:length(x), x, type="l", main="ARMA(1,1): -0.5, -0.8")
acf(x, lag.max=12)
acf(x, lag.max=12, type="partial")

[Figure: simulated series, sample ACF and sample PACF for MA(1) with θ = +0.8, MA(1) with θ = −0.8, ARMA(1,1) with (0.5, −0.8) and ARMA(1,1) with (−0.5, −0.8).]

It is also interesting to compare the sample ACF/PACF with the true ACF/PACF of an ARMA process. Suppose
we take an AR(2) process with φ1 = 0.5 and φ2 = 0.3. Then the relevant code is:
par(mfcol=c(5,2))
model=list(ar=c(+0.5, 0.3))
x = armaSim(n=1000, model=model)
plot(1:length(x), x, type="l", main="AR(2): +0.5, 0.3")
acf(x, lag.max=12)
a=armaTrueacf(model, lag.max=12,doplot=T)
acf(x, lag.max=12, type="partial")
a=armaTrueacf(model, lag.max=12, type="partial",doplot=T)
model=list(ar=c(-0.5, 0.3))
x = armaSim(n=1000, model=model)
plot(1:length(x), x, type="l", main="AR(2): -0.5, 0.3")
acf(x, lag.max=12)
a=armaTrueacf(model, lag.max=12,doplot=T)
acf(x, lag.max=12, type="partial")
a=armaTrueacf(model, lag.max=12, type="partial",doplot=T)

The first model shows an exponentially decaying ACF, whereas that of the second model oscillates in sign. As
expected for an AR(2) process, the PACF shows only the first two values to be significant.
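The two ACF shapes follow directly from the Yule-Walker recursion ρ(1) = φ1/(1 − φ2), ρ(k) = φ1 ρ(k−1) + φ2 ρ(k−2). A Python sketch (an illustration, not part of the R session) makes the decay-versus-oscillation contrast explicit:

```python
# Theoretical ACF of an AR(2) process via the Yule-Walker recursion:
#   rho(1) = phi1 / (1 - phi2),  rho(k) = phi1*rho(k-1) + phi2*rho(k-2).
def ar2_acf(phi1, phi2, lags=12):
    rho = [1.0, phi1 / (1 - phi2)]
    for k in range(2, lags + 1):
        rho.append(phi1 * rho[k - 1] + phi2 * rho[k - 2])
    return rho

decaying = ar2_acf(0.5, 0.3)       # all positive, smooth exponential-like decay
oscillating = ar2_acf(-0.5, 0.3)   # alternates in sign

print([round(r, 3) for r in decaying[:5]])
print([round(r, 3) for r in oscillating[:5]])
```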

[Figure: simulated series, sample ACF, true ACF, sample PACF and true PACF (top to bottom) for AR(2) with (0.5, 0.3) (left column) and (−0.5, 0.3) (right column).]

4. ARMA Modelling: Model Identification


To illustrate the idea of model identification we simulate an AR(5) model with parameters φ1 = −0.4, φ2 = 0.1,
φ3 = φ4 = 0 and φ5 = 0.1, and then attempt to identify the model by inspecting the PACF.
First, simulate and plot the time series, then use the PACF to try to identify the generating model:
par(mfrow=c(2,1))
x = armaSim(n=1000, model=list(order=c(5,0), ar=c(-0.40,0.1,0,0,0.1)))
plot(1:length(x), x, type="l", main="AR(5), -0.4,0.1,0,0,0.1")
pacf=pacf(x, lag.max=12)
pacf

Partial autocorrelations of series x, by lag

1 2 3 4 5 6 7 8 9 10 11 12
-0.429 0.096 0.006 0.022 0.145 0.019 -0.035 -0.060 -0.048 -0.019 0.007 -0.004
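A rough significance bound for sample partial autocorrelations is ±2/√n, here ±2/√1000 ≈ ±0.063. Comparing the printed values against this bound singles out lags 1, 2 and 5, consistent with the generating AR(5) model. A Python sketch of the comparison (the numeric values are copied from the output above):

```python
# Identify significant PACF lags using the approximate bound +/- 2/sqrt(n).
# The values below are the sample PACF printed above (n = 1000).
pacf = {1: -0.429, 2: 0.096, 3: 0.006, 4: 0.022, 5: 0.145, 6: 0.019,
        7: -0.035, 8: -0.060, 9: -0.048, 10: -0.019, 11: 0.007, 12: -0.004}

bound = 2 / 1000 ** 0.5                      # about 0.063
significant = [lag for lag, v in pacf.items() if abs(v) > bound]
print("bound:", round(bound, 4), "significant lags:", significant)
```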

[Figure: the simulated AR(5) series (top) and its sample PACF to lag 12 (bottom).]

5. ARMA Modelling: Parameter Estimation, Model Selection and Diagnostics


The function armaFit() estimates the parameters of ARMA models; its arguments are described on the help page.
We consider the time series generated in the previous example from an AR(5) model with parameters φ1 = −0.4,
φ2 = 0.1, φ3 = φ4 = 0 and φ5 = 0.1. Suppose we are given this series without knowing how it was generated.
Examination of the PACF (see above) reveals significant correlation at lag 5, after which the correlation is negligible.
This suggests an AR(p) model with p = 5. We first apply the function armaFit() to estimate the parameters
of an AR(5) model.
First simulate data from the process:
par(mfrow=c(2,2))
x = armaSim(n=1000, model=list(order=c(5,0), ar=c(-0.40,0.1,0,0,0.1)))

Next fit an AR(5) model:


fit<-armaFit(x~ar(5),x,method="mle")
summary(fit)

Model:
ARIMA(5,0,0) with method: CSS-ML

Coefficient(s):
ar1 ar2 ar3 ar4 ar5 intercept
-0.419200 0.108544 0.006913 -0.004710 0.146163 -0.054552

Residuals:

Min 1Q Median 3Q Max
-3.36283 -0.65182 0.02615 0.65574 3.19371

Moments:
Skewness Kurtosis
-0.1242 0.1234

Coefficient(s):
Estimate Std. Error t value Pr(>|t|)
ar1 -0.419200 0.031291 -13.397 < 2e-16 ***
ar2 0.108544 0.033978 3.195 0.0014 **
ar3 0.006913 0.034145 0.202 0.8396
ar4 -0.004710 0.034024 -0.138 0.8899
ar5 0.146163 0.031329 4.665 3.08e-06 ***
intercept -0.054552 0.027412 -1.990 0.0466 *
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1

sigma^2 estimated as: 1.016


log likelihood: -1427.07
AIC Criterion: 2868.15

Note that summary() also provides the estimate of the variance σ² of the white noise process. The AR coefficients
of order 3 and 4 are small and their standard errors are large: as a consequence, these coefficients have large
p-values (last column) and are not statistically significant at the 5% level. It is therefore a good idea to fit
an AR(5) process in which these coefficients (as well as the intercept) are fixed to zero.
This can be specified with the argument fixed:
fit<-armaFit(x~ar(5),x,fixed=c(NA,NA,0,0,NA,0),method="mle")
par(mfrow=c(2,2))
summary(fit)

Model:
ARIMA(5,0,0) with method: CSS-ML

Coefficient(s):
ar1 ar2 ar3 ar4 ar5 intercept
-0.3564 0.1135 0.0000 0.0000 0.1231 0.0000

Residuals:
Min 1Q Median 3Q Max
-3.13847 -0.66654 -0.01819 0.68648 3.36718

Moments:
Skewness Kurtosis
0.07226 -0.02576

Coefficient(s):
Estimate Std. Error t value Pr(>|t|)
ar1 -0.35642 0.03115 -11.441 < 2e-16 ***
ar2 0.11350 0.03120 3.637 0.000275 ***
ar3 0.00000 0.02861 0.000 1.000000
ar4 0.00000 0.03115 0.000 1.000000
ar5 0.12309 0.03120 3.945 7.98e-05 ***
intercept 0.00000 0.02861 0.000 1.000000
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1

sigma^2 estimated as: 1.095


log likelihood: -1464.51
AIC Criterion: 2937.02

8
Standardized Residuals ACF of Residuals

1.0
3

0.8
2

0.6
1
Residuals

ACF
0

0.4
−1

0.2
−2

0.0
−3

0 200 400 600 800 1000 0 5 10 15 20 25 30

Index Lag

QQ−Plot of Residuals Ljung−Box p−values


1.0
3

0.8
2
Residual Quantiles

0.6
p value
0

0.4
−1

0.2
−2

0.0
−3

−3 −2 −1 0 1 2 3 2 4 6 8 10

Normal Quantiles lag

The summary() method automatically plots the standardized residuals, the autocorrelation function of the residuals,
a QQ-plot of the residuals, and the Ljung-Box p-values (a test of independence).
To investigate the model fit further we could estimate the parameters of the various ARMA(p,q) models with p ≤ 5
and q ≤ 2 for the same simulated time series and compare the relative fits through the AIC values (we leave this
to you).
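As a sketch of what that exercise involves, the Python code below compares models of increasing order on a freshly simulated series. It is an illustration only, restricted to pure AR fits (MA terms need nonlinear estimation) and using conditional least squares with a crude AIC approximation n·log(σ̂²) + 2(p+1), rather than the exact ML fit that armaFit() performs:

```python
import numpy as np

# Simulate the same AR(5) process as above, discarding a burn-in period.
rng = np.random.default_rng(1)
true_ar = np.array([-0.40, 0.10, 0.0, 0.0, 0.10])
n, burn = 1000, 100
x = np.zeros(n + burn)
e = rng.standard_normal(n + burn)
for t in range(5, n + burn):
    x[t] = true_ar @ x[t - 5:t][::-1] + e[t]
x = x[burn:]

def aic_ar(x, p):
    """Conditional least-squares AR(p) fit and its (approximate) AIC."""
    y = x[p:]
    X = np.column_stack([x[p - j:len(x) - j] for j in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ coef) ** 2)
    return len(y) * np.log(sigma2) + 2 * (p + 1)

aics = {p: aic_ar(x, p) for p in range(1, 9)}
best = min(aics, key=aics.get)
print({p: round(a, 1) for p, a in aics.items()}, "-> best order:", best)
```

The same loop-and-compare logic applies to the full ARMA(p,q) grid in R, collecting fit@fit$aic for each candidate model.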
6. Forecasting from ARMA models
The function predict() forecasts a univariate time series using an ARMA (or ARIMA) model. Predicted values and
their standard errors are computed for future time points; the function returns a time series of predictions together
with the estimated standard errors.
We attempt to forecast 10 steps ahead for the AR(5) model of the previous examples. First simulate the time series
and fit the parameters:
x = armaSim(n=1000, model=list(order=c(5,0), ar=c(-0.40,0.1,0,0,0.1)))
fit<-armaFit(x~ar(5),x,fixed=c(NA,NA,0,0,NA,0),method="mle")
Now predict 10 steps ahead:
predict(fit,10)

$se
Time Series:
Start = 1001
End = 1010
Frequency = 1
[1] 1.046968 1.132120 1.177395 1.193136 1.199626 1.199695 1.199703 1.199870 1.200052
[10] 1.200205

$out
Time Series:
Start = 1001
End = 1010
Frequency = 1
Low 95 Low 80 Forecast High 80 High 95
1001 -2.0636 -1.3533 -0.0116 1.3301 2.0404
1002 -2.1160 -1.3480 0.1029 1.5537 2.3218
1003 -2.2693 -1.4705 0.0384 1.5473 2.3460
1004 -2.3565 -1.5470 -0.0180 1.5111 2.3205
1005 -2.3572 -1.5434 -0.0060 1.5314 2.3452
1006 -2.3521 -1.5382 -0.0008 1.5367 2.3506
1007 -2.3455 -1.5316 0.0059 1.5434 2.3573
1008 -2.3518 -1.5378 -0.0001 1.5376 2.3516
1009 -2.3523 -1.5382 -0.0002 1.5377 2.3518
1010 -2.3527 -1.5384 -0.0003 1.5378 2.3521

[Figure: the end of the series together with the 10-step-ahead forecasts and prediction bands from the ARIMA(5,0,0) fit.]

7. An example with real data


The file gmt.dat containing the global mean temperature time series can be downloaded from the course website.
The autocorrelations of the time series decay very slowly. This can be an indication of non-stationarity. Indeed, a
plot of the time series clearly shows the presence of an upward trend. Differencing the series might be sufficient to
remove this.
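Why does differencing help here? If x_t = a + bt + e_t has a deterministic linear trend, the first difference x_t − x_{t−1} = b + e_t − e_{t−1} is trend-free with constant mean b. A quick Python sketch (illustrative only, with made-up intercept, slope and noise level):

```python
import random

# Differencing removes a linear trend: if x_t = a + b*t + e_t, then
# x_t - x_{t-1} = b + (e_t - e_{t-1}), a stationary series with mean b.
random.seed(0)
a, b = 14.0, 0.01
x = [a + b * t + random.gauss(0.0, 0.1) for t in range(200)]
diffs = [x[t] - x[t - 1] for t in range(1, len(x))]

mean_diff = sum(diffs) / len(diffs)
print("mean of differences:", round(mean_diff, 3))   # close to the slope b
```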
temp.ts <- read.table("gmt.dat")
temp <- temp.ts[,2]
par(mfrow=c(2,3))
plot(temp.ts,type="b")
acf(temp,main="ACF of temp")
acf(temp,type="partial",main="PACF of temp")
nn <- length(temp)
tempdiff <- temp[2:nn]-temp[1:(nn-1)]
plot(tempdiff,pch=20)
acf(tempdiff,main="ACF of temp (differenced)")
acf(tempdiff,type="partial",main="PACF of temp (differenced)")

We see that the differenced series looks reasonably stationary. The autocorrelations become negligible after lag 2
and the partial autocorrelations after lag 3. This suggests an ARMA(p,q) model with at most p = 3 and q = 2.
We try different models, searching for an adequate fit.
fit10 <- armaFit(tempdiff~arma(1,0),tempdiff)
fit20 <- armaFit(tempdiff~arma(2,0),tempdiff)
fit21 <- armaFit(tempdiff~arma(2,1),tempdiff)
fit22 <- armaFit(tempdiff~arma(2,2),tempdiff)
par(mfrow=c(2,2))
summary(fit10); summary(fit21)

The ARMA(1,0) fit is poor: there is significant residual autocorrelation at lag 2, the tails in the QQ plot deviate
from a normal and the Ljung-Box p-values are very small. Similar considerations hold for the ARMA(2,0) fit. The
ARMA(2,1) and ARMA(2,2) fits look reasonable: which of these two models will we select? Let us look at AIC:
AICs <- c(fit10@fit$aic,fit20@fit$aic,fit21@fit$aic,fit22@fit$aic)
minAIC <- min(AICs)
AICs-minAIC
36.780872 20.746307 0.000000 1.999948

The minimum AIC is achieved by the ARMA(2,1) model. In this case the AIC criterion agrees with the principle of
parsimony (choose the model with fewer parameters). Lastly, let us try to forecast the time series. We repeat the fit

[Figure: the temperature series, its ACF and its PACF (top row); the differenced series, its ACF and its PACF (bottom row).]

with the function arima(). We first transform temp into an object of class ts.
temp <- ts(temp)
nobs <- length(temp)
fit <- arima(temp, order=c(2,1,1), xreg=1:nobs)
fore <- predict(fit,15,newxreg=(nobs+1):(nobs+15))

The argument xreg=1:nobs adds a linear trend (drift) term, which arima() does not otherwise include when differencing.
The variable fore now contains forecasts for the next 15 values of the time series. We plot the time series (in black)
and the forecasts (red), adding approximate 95% prediction intervals (blue, dashed) whose upper and lower bounds are
U and L, respectively.
ts.plot(temp,fore$pred,col=1:2,main="RIGHT")
U <- fore$pred + 2*fore$se
L <- fore$pred - 2*fore$se
minx <- min(temp,L)
maxx <- max(temp,U)
ts.plot(temp,fore$pred,col=1:2, ylim=c(minx,maxx))
lines(U, col="blue", lty="dashed")
lines(L, col="blue", lty="dashed")

[Figure: diagnostic plots (standardized residuals, ACF of residuals, QQ-plot, Ljung-Box p-values) for the ARMA(1,0) fit (left) and the ARMA(2,1) fit (right), followed by the temperature series with its 15-step-ahead forecasts and prediction intervals.]

8. Simulating and fitting of GARCH models
The function garchSim() in the fGarch package can be used to simulate models from the GARCH family. For example,
Engle's ARCH(2) model and Bollerslev's GARCH(1,1) model, both with normally distributed innovations, can be simulated
as follows:
library(fGarch)
par(mfrow=c(2,1))
model=list(omega=1e-6, alpha=c(0.1, 0.3))
x=garchSim(model)
plot(x, type="l", main="ARCH(2) Model")
model=list(omega=1e-6, alpha=0.1, beta=0.8)
x=garchSim(model)
plot(x, type="l", main="GARCH(1,1) Model")

[Figure: the simulated ARCH(2) series (top) and GARCH(1,1) series (bottom).]
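To make the model equations concrete, the GARCH(1,1) recursion σ²_t = ω + α x²_{t−1} + β σ²_{t−1}, x_t = σ_t z_t with z_t ~ N(0,1) can be hand-rolled in a few lines. The Python sketch below is an illustration of the equations, not the fGarch implementation; it uses the same ω, α, β as the model above:

```python
import random

# Hand-rolled GARCH(1,1) recursion with the parameters used above:
#   sigma2_t = omega + alpha * x_{t-1}^2 + beta * sigma2_{t-1},  x_t = sigma_t * z_t
# with standard normal innovations z_t.
random.seed(42)
omega, alpha, beta = 1e-6, 0.1, 0.8

n = 100
sigma2 = omega / (1 - alpha - beta)   # start from the unconditional variance
x = []
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    xt = sigma2 ** 0.5 * z            # observation for this period
    x.append(xt)
    sigma2 = omega + alpha * xt * xt + beta * sigma2   # update conditional variance

print(len(x), min(x), max(x))
```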

The function garchFit() estimates the parameters of GARCH models. For example, we can use it to estimate the
parameters of Bollerslev's GARCH(1,1) model with normally distributed innovations:
model=list(omega=1e-6, alpha=0.1, beta=0.8)
x=garchSim(model)
fit<-garchFit(~garch(1,1),x,include.mean=F)

Now perform a summary and diagnostic analysis:


summary(fit)
par(mfrow=c(2,2))
plot(fit,which=c(9,10,13))
plot(fit@residuals,type="h",xlab="Time",ylab="Residuals",main="Residuals")

Title:
GARCH Modelling

Mean and Variance Equation:


~arma(0, 0) + ~garch(1, 1)

Conditional Distribution:
dnorm

Coefficient(s):
omega alpha1 beta1
3.66853e-06 1.16462e-01 5.02154e-01

Error Analysis:
Estimate Std. Error t value Pr(>|t|)
omega 3.669e-06 3.478e-06 1.055 0.291
alpha1 1.165e-01 1.222e-01 0.953 0.340
beta1 5.022e-01 4.065e-01 1.235 0.217

Log Likelihood:
-436.9633 normalized: -4.369633

Information Criterion Statistics:


AIC BIC SIC HQIC
8.799266 8.877421 8.797535 8.830897

[Figure: diagnostic plots for the GARCH(1,1) fit: standardized residuals, ACF of standardized residuals, QQ-plot of the residuals, and the residual series.]