
Solution Manual to

Financial Analytics with R


Chapters 1, 2, 3, 6
Mark Bennett Dirk Hugen
December 14, 2016

Analytic Thinking

1.1

(1.14 − 1) / 1 = 0.14 ∼ 14%
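A one-line check of the arithmetic in R (not part of the original solution):

> (1.14 - 1) / 1
[1] 0.14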

The R Language for Statistical Computing


2.1
> x = c(1.3,1.2,1.3,NA,1.4,1.5)
> plot(x,ylab="EUR prices")

[Figure: plot of x with y-axis "EUR prices" (1.20-1.50) versus Index (1-6)]

> homeuser="<basedir>"
> library(ggplot2)

> x[x>1.3]

[1] NA 1.4 1.5

> y <- diff(log(x))


> round(y,3)

[1] -0.080 0.080 NA NA 0.069

> g <- function(x,y=5) { return(x^y) }


> g(4)

[1] 1024

> g(4,6)

[1] 4096

> g(4,y=7)

[1] 16384

> g(y=8,x=4)
[1] 65536
> g
function(x,y=5) { return(x^y) }
> x <- 1
> assign("x",2)
> x=3
> f <- function(x)
+ {
+ x=4
+ x
+ }
> f(x)
[1] 4
> x
[1] 3
> x = 3
> x
[1] 3
> f <- function(x)
+ {
+ x <<- 4
+ }
> f(x)
> x
[1] 4
> typeof(f)
[1] "closure"
> typeof(x)
[1] "double"
> call_type = 2
> if(call_type == 1){
+ str = "f(2)"
+ } else {
+ str = "g(2)"
+ }
> eval(parse(text=str))

[1] 32

> call_type = 2
> ifelse(call_type == 1,
+ eval(parse(text="f(2)")),
+ eval(parse(text="g(2)")))

[1] 32

> set.seed(1)
> vec = c(1:3)
> sapply(vec,rnorm)

[[1]]
[1] -0.6264538

[[2]]
[1] 0.1836433 -0.8356286

[[3]]
[1] 1.5952808 0.3295078 -0.8204684

> A = cbind(rep(x,length(y)),y)
> A

y
[1,] 4 -0.08004271
[2,] 4 0.08004271
[3,] 4 NA
[4,] 4 NA
[5,] 4 0.06899287

> B = rbind(rep(x,length(y)),y)
> B

         [,1]       [,2] [,3] [,4]       [,5]
   4.00000000 4.00000000    4    4 4.00000000
y -0.08004271 0.08004271   NA   NA 0.06899287

> t(A) == B

  [,1] [,2] [,3] [,4] [,5]
  TRUE TRUE TRUE TRUE TRUE
y TRUE TRUE   NA   NA TRUE

> sum(t(A) == B)

[1] NA

> B[,4]

y
4 NA

> B[,-4]

         [,1]       [,2] [,3]       [,4]
   4.00000000 4.00000000    4 4.00000000
y -0.08004271 0.08004271   NA 0.06899287

> t(A)[,-4] == B[,-4]

  [,1] [,2] [,3] [,4]
  TRUE TRUE TRUE TRUE
y TRUE TRUE   NA TRUE

> sum(t(A)[-2,-4] == B[-2,-4])

[1] 4
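An alternative to dropping the NA entries by hand is to skip them in the sum (a side note, not part of the original solution):

> sum(t(A) == B, na.rm = TRUE)
[1] 8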

> n <- 12
> z <- 1:n
> z

[1] 1 2 3 4 5 6 7 8 9 10 11 12

> z <-c(1:n)
> z <- vector(length = n)
> for(i in 1:n)
+ z[i] <- i
> z

[1] 1 2 3 4 5 6 7 8 9 10 11 12

> mat2by4 <- matrix(1:8, nrow=2, ncol=4)


> mat2by4

     [,1] [,2] [,3] [,4]
[1,]    1    3    5    7
[2,]    2    4    6    8

> arr2by4by3 <- array(1:24, dim=c(2,4,3))


> arr2by4by3

, , 1

     [,1] [,2] [,3] [,4]
[1,]    1    3    5    7
[2,]    2    4    6    8

, , 2

     [,1] [,2] [,3] [,4]
[1,]    9   11   13   15
[2,]   10   12   14   16

, , 3

     [,1] [,2] [,3] [,4]
[1,]   17   19   21   23
[2,]   18   20   22   24
> arr2by4by3[1,,]
[,1] [,2] [,3]
[1,] 1 9 17
[2,] 3 11 19
[3,] 5 13 21
[4,] 7 15 23
> arr2by4by3[1,-4,]

     [,1] [,2] [,3]
[1,]    1    9   17
[2,]    3   11   19
[3,]    5   13   21
> arr2by4by3[1,c(-3,-4),]

     [,1] [,2] [,3]
[1,]    1    9   17
[2,]    3   11   19
> length(c(-3,-4))

[1] 2
> dim(arr2by4by3[1,c(-3,-4),])
[1] 2 3
> A <- arr2by4by3[1,c(-3,-4),]
> t(A)
[,1] [,2]
[1,] 1 3
[2,] 9 11
[3,] 17 19

> A <- arr2by4by3[1,c(-3,-4),]
> A

     [,1] [,2] [,3]
[1,]    1    9   17
[2,]    3   11   19

> A%*%t(A)

[,1] [,2]
[1,] 371 425
[2,] 425 491

> 1+9*9+17*17

[1] 371

> fh <- 0
> tryCatch({
+ #main block
+ fh <<- file("file1.txt",open="r")
+ }, warning = function(w){
+ #warning handler code
+ print(w)
+ fh <<- NA
+ }, error = function(e){
+ #error handling code
+ print(e)
+ fh <<- NA
+ }, finally = {
+ #cleanup code
+ })

<simpleWarning in file("file1.txt", open = "r"): cannot open file 'file1.txt': No such file or directory>

> if(!is.na(fh)) readLines(fh)

> fh

[1] NA

> options(digits=10)
> pi = 3.1415926535897932384626
> pi

[1] 3.141592654

> plot(density(rbinom(50,50,1/2)))

[Figure: density plot titled "density.default(x = rbinom(50, 50, 1/2))"; Density versus value, N = 50, Bandwidth = 0.9214]

> options(digits=6)
> set.seed(99)
> sample(10,replace = TRUE)

[1] 6 2 7 10 6 10 7 3 4 2

> print(paste("PCLN","UNP","IBM","MCD","PFE", sep=","))

[1] "PCLN,UNP,IBM,MCD,PFE"

> date <- as.Date("2014-02-01")


> substr(date,9,11)

[1] "01"

> tickers <- c("PCLN","UNP","IBM","MCD","PFE")


> match('MCD',tickers)

[1] 4

> L3 <- LETTERS[1:3]


> fac <- sample(L3, 10, replace = TRUE)
> d <- data.frame(x = 1, y = 1:10, fac = fac)
> d[1:4,]

x y fac
1 1 1 B
2 1 2 B
3 1 3 A
4 1 4 B

> d$fac

[1] B B A B C B B A A A
Levels: A B C

> write.csv(d, file="d.txt", row.names=FALSE)


> e <- read.csv("d.txt", header=TRUE)
> e[1:4,]

x y fac
1 1 1 B
2 1 2 B
3 1 3 A
4 1 4 B

> names(e)

[1] "x" "y" "fac"

> names(e) <- c(names(e)[1:2],"factor")


> e[-c(2:dim(e)[1]),]

x y factor
1 1 1 B

> typeof(e)

[1] "list"

> #setwd(paste(homeuser,"/FinAnalytics/ChapXI",sep=""))

> c(1,c(1,2),3,"A",c(4,5))

[1] "1" "1" "2" "3" "A" "4" "5"

> list(1,c(1,2),3,"A",list(4,5))

[[1]]
[1] 1

[[2]]
[1] 1 2

[[3]]
[1] 3

[[4]]
[1] "A"

[[5]]
[[5]][[1]]
[1] 4

[[5]][[2]]
[1] 5
> l <- list(1,c(1,2),3,"A",list(4,5))
> l[2]
[[1]]
[1] 1 2

> l[[2]]
[1] 1 2
> e[[1]]

[1] 1 1 1 1 1 1 1 1 1 1
> e[[2]]
[1] 1 2 3 4 5 6 7 8 9 10
> e[[3]]

[1] B B A B C B B A A A
Levels: A B C
> obtainPrices <- function() {
+ A <- matrix(c("VRSN","UNP","HPQ","NSC", nrow=1))
+ B <- matrix(c(37.61, 125.62, 50.48, 50.44), nrow=1)
+ list(A,B)
+ }
> res <- obtainPrices()
> res[[1]]
[,1]
[1,] "VRSN"
[2,] "UNP"
[3,] "HPQ"
[4,] "NSC"
[5,] "1"

> res[[2]]

      [,1]   [,2]  [,3]  [,4]
[1,] 37.61 125.62 50.48 50.44
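Note that in obtainPrices the nrow=1 argument falls inside c(), so A is built as a 5 x 1 character matrix whose last entry is the string "1", as the printed result above shows. The presumably intended call (an assumption on our part, not from the original text) is:

> A <- matrix(c("VRSN","UNP","HPQ","NSC"), nrow=1)
> A
     [,1]   [,2]  [,3]  [,4]
[1,] "VRSN" "UNP" "HPQ" "NSC"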

2.2
> x<-seq(-2,2,0.1)
> f<-function(x) if(x>=0 & x<=1) 2*x else 0.0
> fx<-lapply(x,f)
> fx<-unlist(fx)
Now plot.
> plot(x,fx)
[Figure: plot of fx versus x, with x from -2 to 2 and fx from 0 to 2]
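As a side note (not part of the original solution), the same density can be written in a directly vectorized form, avoiding the lapply/unlist step:

> f2 <- function(x) ifelse(x >= 0 & x <= 1, 2*x, 0)
> fx2 <- f2(x)
> all.equal(fx, fx2)
[1] TRUE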

2.3
> x<-c(1:25)
> y<-x^2
Now plot.
> plot(x,y)

[Figure: plot of y = x^2 versus x, with x from 1 to 25 and y from 0 to about 600]

Financial Statistics

3.1 The expected value is

E(X) = (0)(0.6) + (1)(0.3) + (2)(0.1) = 0.5

The variance is

Var(X) = (0 − 0.5)²(0.6) + (1 − 0.5)²(0.3) + (2 − 0.5)²(0.1) = 0.45

The standard deviation is

SD(X) = √Var(X) = √0.45 ≈ 0.67
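A quick numerical check in R (a sketch using the pmf above, not part of the original solution):

> xs <- c(0, 1, 2)
> ps <- c(0.6, 0.3, 0.1)
> EX <- sum(xs * ps)             # 0.5
> VarX <- sum((xs - EX)^2 * ps)  # 0.45
> sqrt(VarX)                     # about 0.67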

3.2 a) The expected value is

E(X) = ∫ x f(x) dx = ∫_0^1 x · 2x dx = (2/3)(1³ − 0) = 2/3

The variance is

Var(X) = E(X²) − E²(X) = ∫_0^1 x² · 2x dx − (2/3)² = 1/2 − 4/9 = 1/18

b) The probability that the server is operating less than 6 hours is

P(X < 6/24) = P(X < 0.25) = ∫_0^0.25 2x dx = (0.25)² − 0²

c) The expected value of the profit is

E(Y) = E(5X − 2) = 5E(X) − 2 = 4/3

The variance of the profit is

Var(Y) = Var(5X − 2) = 5² Var(X) = 25/18
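These results can be checked numerically with R's integrate() (a sketch, not part of the original solution):

> fX  <- function(x) 2*x                                  # density of X on [0, 1]
> EX  <- integrate(function(x) x   * fX(x), 0, 1)$value   # 2/3
> EX2 <- integrate(function(x) x^2 * fX(x), 0, 1)$value   # 1/2
> EX2 - EX^2                                              # Var(X) = 1/18
> integrate(fX, 0, 0.25)$value                            # P(X < 0.25) = 0.0625
> c(5*EX - 2, 25*(EX2 - EX^2))                            # E(Y) = 4/3, Var(Y) = 25/18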

3.3 The expected value E(X) is

E(X) = ∫ x f(x) dx = ∫_0^∞ x · 2e^(−2x) dx = 1/2

The expected value E(X²) is

E(X²) = ∫ x² f(x) dx = ∫_0^∞ x² · 2e^(−2x) dx = 1/2

The variance is

Var(X) = E(X²) − E²(X) = 1/2 − 1/4 = 1/4
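The same kind of numerical check for 3.3 (a sketch, not part of the original solution):

> fX  <- function(x) 2*exp(-2*x)                            # density of X on [0, Inf)
> EX  <- integrate(function(x) x   * fX(x), 0, Inf)$value   # 1/2
> EX2 <- integrate(function(x) x^2 * fX(x), 0, Inf)$value   # 1/2
> EX2 - EX^2                                                # Var(X) = 1/4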

3.4 Let the portfolio return be given by P = aX + bY + cZ. The variance of the portfolio is

Var(P) = Var(aX + bY + cZ)
       = a² Var(X) + b² Var(Y) + c² Var(Z)
         + 2ab Cov(X, Y) + 2bc Cov(Y, Z) + 2ac Cov(X, Z)
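A small numerical illustration (with made-up weights and covariances, purely an assumption for illustration): the quadratic form w'Σw gives the same number as the expanded sum above.

> w <- c(0.5, 0.3, 0.2)                       # a, b, c
> Sigma <- matrix(c(0.04, 0.01, 0.00,
+                   0.01, 0.09, 0.02,
+                   0.00, 0.02, 0.16), nrow = 3, byrow = TRUE)
> drop(t(w) %*% Sigma %*% w)                  # 0.0299
> 0.5^2*0.04 + 0.3^2*0.09 + 0.2^2*0.16 +
+   2*0.5*0.3*0.01 + 2*0.3*0.2*0.02 + 2*0.5*0.2*0.00   # 0.0299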

Time Series Analysis


If you have not already, install the TSA package with the command install.packages("TSA").
Load the TSA package with the command
> library(TSA)

6.1 Public Transport Boardings In this exercise we use our time series
analysis techniques to model and forecast the number of people who boarded
light rail trains and city buses in Denver, Colorado.
a) Load the boardings data with
> data(boardings)
and examine its structure with
> str(boardings)

Time-Series [1:68, 1:2] from 2001 to 2006: 12.5 12.6 12.5 12.5 12.4 ...
- attr(*, "dimnames")=List of 2
..$ : NULL
..$ : chr [1:2] "log.boardings" "log.price"

b) Extract the component we will analyze with


> boardings <- boardings[,1]
c) Plot the boardings data with
> plot(boardings,col='blue')
[Figure: time series plot of boardings, 2001-2006]

and overlay the first letter of every month with

> plot(boardings,col='blue')
> points(boardings,x=time(boardings),pch=as.vector(season(boardings)))

[Figure: boardings time series with the first letter of each month overlaid on the points, 2001-2006]

d) Calculate and view the ACF with

> acf(as.vector(boardings))

[Figure: ACF of as.vector(boardings)]

Is the yearly periodicity apparent? We observe the significant yearly lag at 12. What about MA lags? We observe significant lags at 1 and then again at 5, 6, 7.

e) Calculate and view the PACF with


> pacf(as.vector(boardings))

[Figure: PACF of as.vector(boardings)]

Is the yearly periodicity apparent? We see the yearly lag significant at lag
12. What about AR lags? We see lags at 1 and 4.
f) Model the boardings data as ARIMA(4, 0, 3) × (1, 0, 0)₁₂. Note that the
Integrated part of the model is zero because we did not do any differencing for
trend removal. Which estimates are the most precise? For a precise estimate
we want to see a standard error that is small compared to the magnitude of
the coefficient. With this in mind, comparing the coefficient estimates and
standard errors we see the most precise are ar2, ar3, ar4, ma1, ma3, and
sar1. Which are the least precise? Comparing the coefficient estimates and
standard errors we see the least precise are ar1 and ma2.
> m.boardings <- arima(boardings,
+ order=c(4,0,3),
+ seasonal=list(order=c(1,0,0),period=12))
> m.boardings

Call:
arima(x = boardings, order = c(4, 0, 3), seasonal = list(order = c(1, 0, 0),
period = 12))

Coefficients:
        ar1    ar2    ar3     ar4    ma1     ma2     ma3   sar1  intercept
      0.108  0.595  0.690  -0.534  0.578  -0.059  -0.690  0.899     12.548
s.e.  0.388  0.253  0.140   0.286  0.399   0.443   0.312  0.040      0.079

sigma^2 estimated as 0.000544: log likelihood = 148.17, aic = -278.35
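One way to make the precision comparison concrete (not part of the original solution) is to look at the ratio of each coefficient estimate to its standard error:

> round(abs(coef(m.boardings)) / sqrt(diag(vcov(m.boardings))), 2)

The smallest ratios belong to ar1 and ma2, matching the discussion above.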

g) Examine the model residuals with a plot

> plot(residuals(m.boardings))
[Figure: time series plot of residuals(m.boardings), 2001-2006]

and a histogram

> hist(residuals(m.boardings), breaks = 50)

[Figure: histogram of residuals(m.boardings), breaks = 50]

Are the residuals well-behaved? Are they reasonably normal? Are they reasonably independent? We conclude from the histogram that the residuals are reasonably well-behaved and normal. From the plot, the residuals 'look' independent, but we want some more evidence. We examine the autocorrelation function of the residuals

> acf(residuals(m.boardings))

[Figure: ACF of residuals(m.boardings)]

and observe no significant lags. This supports our claim of independence.
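For a more formal check of the same point (an optional step, not in the original solution), a Ljung-Box test can be applied to the residuals:

> Box.test(residuals(m.boardings), lag = 12, type = "Ljung-Box", fitdf = 8)

A large p-value here would be consistent with independent residuals.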


h) Test the residuals for normality with
> shapiro.test(residuals(m.boardings))

Shapiro-Wilk normality test

data: residuals(m.boardings)
W = 0.9722, p-value = 0.132

Does the Shapiro-Wilk test accept or reject normality? Keeping in mind that the Shapiro-Wilk test has normality as the null hypothesis, we fail to reject normality when large p-values are observed. With this in mind, we conclude from the Shapiro-Wilk p-value of 0.132 that the residuals are consistent with normality.
i) Predict 36 months ahead and plot the boardings data with predictions and
95% confidence interval with:
> plot(m.boardings,n1=c(2004,1),n.ahead=36,col='blue')

[Figure: boardings with 36-month-ahead forecasts and 95% prediction limits, 2004-2009]

6.2 CO2 Levels In this exercise we use our time series analysis techniques to
model and forecast the levels of CO2 in the atmosphere.
a) Load the data set with
> data(co2)
and plot it with

> plot(co2,col="blue")

[Figure: time series plot of co2, 1994-2004]

b) Plot a partial window of the time series starting in 2001 with


> plot(window(co2,start=c(2001,1)),col="blue")

[Figure: time series plot of window(co2, start = c(2001, 1)), 2001-2005]

c) Define
> months = c('J','F','M','A','M','J','J','A','S','O','N','D')

and lay the points over the line with


> plot(window(co2,start=c(2001,1)),col="blue")
> points(window(co2, start = c(2001, 1)), pch = months)

23
AS
A AJ O
J J A
J S
380

M
J MA
window(co2, start = c(2001, 1))

AJ F
M
AJ M F
M F
JF A
375

J J N
J O
D F
J
S
370

D
D DJ
N
A N
365

N
O
O
S

2001 2002 2003 2004 2005

Time

d) Plot the ACF with


> acf(as.vector(co2),lag.max=48)

[Figure: ACF of as.vector(co2), lag.max = 48]

and observe what appears to be a 12-month cycle.


e) Plot the first difference with
> plot(diff(co2))

[Figure: time series plot of diff(co2), 1994-2004]

and again observe the yearly cycle.


f) Calculate the first and seasonal difference with
> series <- diff(diff(co2),lag=12)

and plot it with


> plot(series, ylab='First and Seasonal Difference')
g) Plot the ACF and PACF of the first and seasonal differenced series with

> acf(as.vector(series))

[Figure: ACF of as.vector(series)]

and
> pacf(as.vector(series))

[Figure: PACF of as.vector(series)]

What order of ARIMA model do you recommend? Having differenced at lag 1 and then at lag 12, we examine the ACF and PACF of the resulting series and observe 2 AR lags in the PACF and 1 MA lag in the ACF. This yields an ARIMA(2, 1, 1) × (0, 1, 1)₁₂ model.

h) We difference at lag 1 then at lag 12, and observe significant lags up to lag 2 on the PACF and lag 1 on the ACF. So we fit and examine an ARIMA(2, 1, 1) × (0, 1, 1)₁₂ model with
> m.co2 <- arima(co2,
+ order=c(2,1,1),
+ seasonal=list(order=c(0,1,1),period=12))
> m.co2

Call:
arima(x = co2, order = c(2, 1, 1), seasonal = list(order = c(0, 1, 1), period = 12))

Coefficients:
ar1 ar2 ma1 sma1
0.388 0.270 -1.000 -0.804
s.e. 0.089 0.093 0.061 0.116

sigma^2 estimated as 0.516: log likelihood = -138.35, aic = 284.71

i) Examine model diagnostics with
> tsdiag(m.co2)
[Figure: tsdiag(m.co2) panels: standardized residuals, ACF of residuals, and Ljung-Box p-values by number of lags]

and test for normality with


> shapiro.test(residuals(m.co2))

Shapiro-Wilk normality test

data: residuals(m.co2)
W = 0.9827, p-value = 0.0927

Describe the residuals. Are they well-behaved? Are they normal? The residuals are reasonably well behaved, and with a p-value of 0.0927 the Shapiro-Wilk test does not reject normality at the 0.05 level (though it would at the 0.10 level).
j) Plot predictions 48 months ahead starting in 2004 with 95% confidence intervals with the command
> plot(m.co2,n1=c(2004,1),n.ahead=48,col='blue')

[Figure: co2 with 48-month-ahead forecasts and 95% prediction limits, 2004-2009]

6.3 Exchange Rates In this exercise we will analyze the volatility of the US dollar to Hong Kong dollar exchange rate.
a) Load the US dollar / Hong Kong dollar data frame.
> data(usd.hkd)
b) Examine the structure of the data frame.
> str(usd.hkd)

'data.frame': 431 obs. of 6 variables:
$ r : num -0.0305 -0.0115 0.0247 0.0428 0.1078 ...
$ v : num 0.000681 0.000751 0.000565 0.000622 0.001 ...
$ hkrate : num -0.03087 -0.00643 0.02573 0.03858 0.10152 ...
$ outlier1: int 0 0 0 0 0 0 0 0 0 0 ...
$ outlier2: int 0 0 0 0 0 0 0 0 0 0 ...
$ day : int 1 2 3 4 5 6 7 8 9 10 ...

where r is the daily return of the USD/HKD exchange rate, v is the estimated conditional variance based on an AR(1)+GARCH(3,1) model, hkrate is the daily USD/HKD exchange rate, outlier1 is a dummy variable for day 203, corresponding to July 22, 2005, outlier2 is a dummy variable for day 290, another possible outlier, and day is the calendar day.

c) Extract the hkrate component and make it a time series.
> us.hk<-ts(usd.hkd$hkrate)
d) Plot the time series.
> plot(us.hk)
[Figure: time series plot of us.hk daily returns]

e) Fit a GARCH(1,1) to the time series:


> m2 <- garch(x = us.hk - mean(us.hk), order = c(1,1), reltol = 1e-6)

***** ESTIMATION WITH ANALYTICAL GRADIENT *****

I INITIAL X(I) D(I)

1 6.760625e-04 1.000e+00
2 5.000000e-02 1.000e+00
3 5.000000e-02 1.000e+00

IT NF F RELDF PRELDF RELDX STPPAR D*STEP NPRELDF


0 1 -1.352e+03
1 6 -1.354e+03 1.42e-03 2.83e-03 1.0e-03 3.8e+08 1.0e-04 5.42e+05

2 7 -1.354e+03 2.67e-05 3.18e-05 9.9e-04 2.0e+00 1.0e-04 1.60e+01
3 13 -1.363e+03 6.82e-03 1.10e-02 4.2e-01 2.0e+00 7.2e-02 1.60e+01
4 14 -1.365e+03 1.63e-03 1.95e-03 2.9e-01 2.0e+00 7.2e-02 3.34e-01
5 16 -1.370e+03 3.23e-03 4.34e-03 4.4e-01 2.0e+00 2.0e-01 2.74e-01
6 18 -1.377e+03 5.28e-03 4.52e-03 2.3e-01 9.2e-01 2.0e-01 2.65e-02
7 20 -1.378e+03 1.16e-03 1.14e-03 3.9e-02 2.0e+00 4.3e-02 1.61e+01
8 22 -1.382e+03 2.23e-03 2.33e-03 7.0e-02 2.0e+00 8.5e-02 7.87e+01
9 24 -1.392e+03 7.33e-03 6.04e-03 1.2e-01 2.0e+00 1.7e-01 1.25e+00
10 31 -1.392e+03 1.61e-04 5.86e-04 2.6e-06 7.8e+00 4.3e-06 1.56e-02
11 32 -1.392e+03 3.75e-05 3.01e-05 2.5e-06 2.0e+00 4.3e-06 2.54e-02
12 33 -1.392e+03 1.39e-06 1.07e-06 2.6e-06 2.0e+00 4.3e-06 3.28e-02
13 34 -1.392e+03 5.56e-08 7.96e-08 2.6e-06 2.0e+00 4.3e-06 3.22e-02
14 40 -1.392e+03 1.04e-04 1.52e-04 5.3e-03 2.0e+00 8.9e-03 3.21e-02
15 41 -1.392e+03 8.28e-06 7.43e-06 1.0e-03 0.0e+00 2.2e-03 7.43e-06
16 42 -1.392e+03 2.00e-06 1.32e-06 8.7e-04 0.0e+00 2.0e-03 1.32e-06
17 44 -1.392e+03 9.74e-06 1.05e-05 5.0e-03 7.2e-01 1.1e-02 1.56e-05
18 45 -1.392e+03 1.28e-05 1.34e-05 1.0e-02 4.1e-01 2.1e-02 1.50e-05
19 46 -1.392e+03 4.89e-06 7.79e-06 1.0e-02 3.9e-01 2.1e-02 8.62e-06
20 47 -1.392e+03 -1.96e-07 7.63e-07 3.6e-03 0.0e+00 7.2e-03 7.63e-07

***** RELATIVE FUNCTION CONVERGENCE *****

FUNCTION -1.392241e+03 RELDX 3.630e-03


FUNC. EVALS 47 GRAD. EVALS 20
PRELDF 7.629e-07 NPRELDF 7.629e-07

I FINAL X(I) D(I) G(I)

1 1.754761e-05 1.000e+00 -7.313e+03


2 2.465986e-01 1.000e+00 -1.399e+00
3 7.965567e-01 1.000e+00 -2.670e+00

f) Examine the model.


> summary(m2)

Call:
garch(x = us.hk - mean(us.hk), order = c(1, 1), reltol = 1e-06)

Model:
GARCH(1,1)

Residuals:
Min 1Q Median 3Q Max
-7.7802 -0.3959 0.0251 0.4435 4.4744

Coefficient(s):

Estimate Std. Error t value Pr(>|t|)
a0 1.75e-05 5.28e-06 3.32 0.00089 ***
a1 2.47e-01 2.46e-02 10.01 < 2e-16 ***
b1 7.97e-01 1.51e-02 52.82 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Diagnostic Tests:
Jarque Bera Test

data: Residuals
X-squared = 2530, df = 2, p-value <2e-16

Box-Ljung test

data: Squared.Residuals
X-squared = 1.169, df = 1, p-value = 0.28

g) Plot the fitted model conditional variance:

> plot((fitted(m2)[,1])^2,type='l',ylab='conditional variance',xlab='time',col='blue')


[Figure: fitted GARCH(1,1) conditional variance versus time]
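For comparison (not part of the original solution), the fitted conditional variance can be overlaid on the precomputed AR(1)+GARCH(3,1) conditional variances supplied in the v column of usd.hkd:

> plot(usd.hkd$v, type = 'l', col = 'red',
+      ylab = 'conditional variance', xlab = 'time')
> lines((fitted(m2)[,1])^2, col = 'blue')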

h) Plot the model residuals with the command
> plot(residuals(m2),col="blue",main="Residuals")

[Figure: time series plot of residuals(m2), titled "Residuals"]

Do they look normal? Why or why not? The residuals do not appear normal,
with several of the negative values being quite extreme. We expect the
Shapiro test to reject normality.

i) Generate the histogram of the model residuals with the command


> hist(residuals(m2), breaks = 50)

[Figure: histogram of residuals(m2), breaks = 50]

Is it similar to a normal distribution? Why or why not? The left tail of the
histogram is too thick, indicating negative values that are too extreme to be
normally distributed.
j) Test the model residuals for normality with the command
> shapiro.test(residuals(m2))

Shapiro-Wilk normality test

data: residuals(m2)
W = 0.855, p-value <2e-16

Is normality accepted or rejected? Does this agree with your thoughts on the
residual plot and histogram above? We see that with a Shapiro-Wilk p-value of
nearly zero, the null hypothesis of normality is strongly rejected. This agrees
with our thoughts on the plot and histogram of the residuals.

Black-Scholes Model and Option-Implied Volatility
> setwd("/Volumes/cruzer/Projects/financial analytics/solutions/")
> taro<-read.csv("TARO.csv")

> head(taro)

X.1 X UnderlyingSymbol UnderlyingPrice Exchange OptionRoot OptionExt
1 1 1 TARO 32.7 * QTT020420C00025000 NA
2 2 2 TARO 32.7 * QTT020420P00025000 NA
3 3 3 TARO 32.7 * QTT020420C00030000 NA
4 4 4 TARO 32.7 * QTT020420P00030000 NA
5 5 5 TARO 32.7 * QTT020420C00032500 NA
6 6 6 TARO 32.7 * QTT020420P00032500 NA
Type Expiration DataDate Strike Last Bid Ask Volume OpenInterest
1 call 2002-04-20 2002-03-22 25.0 7.00 7.20 8.40 0 43
2 put 2002-04-20 2002-03-22 25.0 0.95 0.00 0.50 0 90
3 call 2002-04-20 2002-03-22 30.0 3.30 2.90 3.90 1 181
4 put 2002-04-20 2002-03-22 30.0 1.20 0.45 0.95 0 334
5 call 2002-04-20 2002-03-22 32.5 1.70 1.50 2.15 4 946
6 put 2002-04-20 2002-03-22 32.5 2.40 1.25 1.90 0 34
T1OpenInterest
1 43
2 90
3 181
4 334
5 948
6 34

> taro<-taro[c(3,4,6,8,9,10,11,12,13,14)]
> taro$Expiration<-as.Date(taro$Expiration)
> taro$DataDate<-as.Date(taro$DataDate)
> taro$Price<-(taro$Bid+taro$Ask)/2
> taro$Maturity<-as.double(taro$Expiration-taro$DataDate)/365
> head(taro)

UnderlyingSymbol UnderlyingPrice OptionRoot Type Expiration
1 TARO 32.7 QTT020420C00025000 call 2002-04-20
2 TARO 32.7 QTT020420P00025000 put 2002-04-20
3 TARO 32.7 QTT020420C00030000 call 2002-04-20
4 TARO 32.7 QTT020420P00030000 put 2002-04-20
5 TARO 32.7 QTT020420C00032500 call 2002-04-20
6 TARO 32.7 QTT020420P00032500 put 2002-04-20
DataDate Strike Last Bid Ask Price Maturity
1 2002-03-22 25.0 7.00 7.20 8.40 7.800 0.0794521
2 2002-03-22 25.0 0.95 0.00 0.50 0.250 0.0794521
3 2002-03-22 30.0 3.30 2.90 3.90 3.400 0.0794521
4 2002-03-22 30.0 1.20 0.45 0.95 0.700 0.0794521
5 2002-03-22 32.5 1.70 1.50 2.15 1.825 0.0794521
6 2002-03-22 32.5 2.40 1.25 1.90 1.575 0.0794521

> subset(taro,Type=='call' & Expiration=='2002-07-20' & DataDate=='2002-03-25' & Strike==35.0)

UnderlyingSymbol UnderlyingPrice OptionRoot Type Expiration
107 TARO 32.88 QTT020720C00035000 call 2002-07-20
DataDate Strike Last Bid Ask Price Maturity
107 2002-03-25 35 2.5 2.4 3.2 2.8 0.320548

> optsub<-subset(taro,Type=='call'
+ & Expiration=='2002-07-20'
+ & DataDate=='2002-03-25'
+ & Strike==35.0)
> optsub$Price

[1] 2.8

> bs<-function(type,S,K,sigma,t,r){
+ d1 <- (log(S/K) + (r+(sigma^2)/2)*t) / (sigma*sqrt(t))
+ d2 <- (log(S/K) + (r-(sigma^2)/2)*t) / (sigma*sqrt(t))
+ if (type=='call') val <- pnorm(d1)*S - pnorm(d2)*K*exp(-r*t)
+ else if (type=='put') val <- pnorm(-d2)*K*exp(-r*t) - pnorm(-d1)*S
+ val
+ }

> secantIV<-function(type,V,S,K,sigma0,sigma1,t,r){
+ newSigma <- sigma0 - (bs(type,S,K,sigma0,t,r)-V)*(sigma0-sigma1)/
+ (bs(type,S,K,sigma0,t,r) - bs(type,S,K,sigma1,t,r))
+ if( abs(newSigma)==Inf ) return(0.0)
+ if( abs(newSigma - sigma0) < .0001 ) return(newSigma)
+ else return(secantIV(type,V,S,K,newSigma,sigma0,t,r))
+ }

> secantIV(optsub$Type,
+ optsub$Price,
+ optsub$UnderlyingPrice,
+ optsub$Strike,0.5,1,
+ optsub$Maturity,0.05)

[1] 0.465278
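As a check (not part of the original solution), plugging the implied volatility back into bs() should recover the observed mid price:

> bs(optsub$Type, optsub$UnderlyingPrice, optsub$Strike,
+    0.465278, optsub$Maturity, 0.05)        # approximately 2.8, the quoted mid price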

