
ECON 706 HW 4

GARTH BAUGHMAN

Exercise (1). Is the process stationary? Invertible?

Response to 1. Yes, the process is stationary because $|\phi| < 1$. Yes, the system is invertible. Write $(y_t^i - \varepsilon_t^i)/\lambda_i = f_t$, so that $(y_t^i - \varepsilon_t^i)/\lambda_i = \phi\,(y_{t-1}^i - \varepsilon_{t-1}^i)/\lambda_i + \eta_t$, or $y_t^i = \phi\, y_{t-1}^i + \varepsilon_t^i - \phi\,\varepsilon_{t-1}^i + \lambda_i \eta_t$, so each observable is an ARMA(1,1) with parameters less than one in modulus and is thus invertible.

Exercise (2). Simulate a sample of length 1000 using iid $N(0,1)$ shocks, parameters $(1, .7, 1, 1)$, and $\phi = .9$. Plot and discuss.

Response to 2. We plot the values of $y^1$, $y^2$, $y^3$, and $f$ in figures 1 through 4. This is an obvious AR system; the $y$'s move closely with each other.
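For concreteness, a minimal sketch of this simulation follows. It assumes the single-factor reading of the model above, $y_t^i = \lambda_i f_t + \varepsilon_t^i$ with $f_t = \phi f_{t-1} + \eta_t$, loadings $(1, .7, 1)$, and unit shock variances; the mapping of the assignment's parameter vector onto these roles is my reading, not spelled out in the text, and all variable names are illustrative.

% Sketch of the Exercise 2 simulation under the assumptions above.
T    = 1000;
phi  = 0.9;
lam  = [1, 0.7, 1];             % factor loadings (lambda_1 normalized to 1)
eta  = randn(T,1);              % iid N(0,1) state shocks
u    = randn(T,3);              % iid N(0,1) measurement shocks
f    = zeros(T,1);
f(1) = eta(1);
for t = 2:T
    f(t) = phi*f(t-1) + eta(t); % AR(1) latent factor
end
y = f*lam + u;                  % T x 3 matrix of observables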

Figure 1. Y-1

Exercise (3). Estimate the model by Gaussian ML using the Kalman filter. For the optimization, use and compare (a) a quasi-Newton algorithm like DFP and (b) the EM algorithm.


Figure 2. Y-2

Compare standard errors based on the Hessian and sandwich estimators. Conditional upon the estimated parameters, calculate and plot the smoothed (not filtered) extraction of the latent state sequence, together with appropriate confidence bands, and compare it to the true state sequence.

Response to 3. With regard to part (a), code can be found in q2a.m. When trying to maximize with respect to a fully unconstrained model, i.e. one where the covariance matrices were unconstrained, maximization failed. With some starting values I would obtain convergence but with poor parameter values; with others I would not obtain convergence in any reasonable number of iterations. So we restricted to the case of uncorrelated errors and maximized with respect to $\lambda_2$, $\lambda_3$, and $\phi$. For these we obtained estimates of .61, .99, and .81. These are all low, and it is not clear why this occurs. The values are sensitive to starting values: if one starts at the true parameters, values much closer to the truth are obtained, but that is a somewhat dishonest test, as in reality one would not know the true values. The negative inverse of the Hessian gives very small variances, which is reasonable with such a large sample:
\[
10^{-3} \times \begin{pmatrix} .48 & .14 & .19 \\ .14 & .50 & .005 \\ .19 & .005 & .44 \end{pmatrix}
\]
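As a sketch of the objective fed to the quasi-Newton step (the actual code is in q2a.m), the Gaussian log likelihood via the Kalman filter might look as follows, where theta stacks $(\lambda_2, \lambda_3, \phi)$ and the error variances are fixed at one to match the restricted specification above; all names are illustrative.

% Negative Gaussian log likelihood via the Kalman filter (sketch).
function nll = kf_nll(theta, y)
    [T, n] = size(y);
    Z   = [1; theta(1); theta(2)];  % loadings, lambda_1 normalized to 1
    phi = theta(3);
    H   = eye(n);                   % measurement error covariance (restricted)
    Q   = 1;                        % state shock variance
    a   = 0;                        % state prediction
    P   = Q/(1 - phi^2);            % unconditional state variance (needs |phi|<1)
    nll = 0;
    for t = 1:T
        v   = y(t,:)' - Z*a;        % prediction error
        F   = Z*P*Z' + H;           % prediction error covariance
        K   = phi*P*Z'/F;           % Kalman gain
        a   = phi*a + K*v;          % next-period state prediction
        P   = phi^2*P - K*Z*P*phi + Q;
        nll = nll + 0.5*(log(det(F)) + v'/F*v);  % constant term omitted
    end
end
% e.g. thetahat = fminunc(@(th) kf_nll(th, y), [.5; .5; .5]);
% (fminunc's default quasi-Newton update is BFGS rather than DFP.)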


Figure 3. Y-3

From the sandwich estimator, which is $H^{-1}(G'G)H^{-1}$ where $H$ is the Hessian and $G$ stacks the per-observation gradients, we get
\[
10^{-13} \times \begin{pmatrix} .32 & .02 & .45 \\ .02 & .0006 & .02 \\ .45 & .02 & .63 \end{pmatrix},
\]
which is many orders of magnitude less than the Hessian estimate and is probably calculated erroneously.

As for the EM estimation, code can be found in q2b.m. This procedure was more stable, in the sense that it was able to feasibly estimate a fully unconstrained error covariance matrix for $y$. The algorithm estimates $\lambda_2 = .71$, $\lambda_3 = 1.11$, $\phi = .81$, the state error variance at 8.6, and the measurement covariance matrix at
\[
\begin{pmatrix} .67 & .27 & .39 \\ .27 & .86 & .33 \\ .39 & .33 & .54 \end{pmatrix}.
\]
Obviously, the algorithm did a poor job of calculating the covariances, and it also did poorly on $\phi$, but it was better at the loadings than the quasi-Newton algorithm. Calculating the Hessian, however, is more difficult, as we now have a system with 12 parameters, so the Hessian has 144 elements. Indeed, the estimated Hessian is nearly singular, with rcond on the order of $10^{-27}$; thus, calculating the full covariance of the estimated parameters is not feasible.
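Returning to the sandwich formula above, a minimal sketch of its computation follows. It assumes a hypothetical kf_nll_t, a variant of the likelihood routine returning the $T \times 1$ vector of per-period contributions, and a numerical Hessian Hmat of the summed objective at thetahat; neither is in the original code.

% Sandwich variance H^{-1}(G'G)H^{-1} from numerical per-period scores.
k  = numel(thetahat);
h  = 1e-5;                                       % finite-difference step
lt = kf_nll_t(thetahat, y);                      % baseline contributions
G  = zeros(numel(lt), k);
for j = 1:k
    dj     = zeros(k,1); dj(j) = h;
    G(:,j) = (kf_nll_t(thetahat + dj, y) - lt)/h;  % numerical scores
end
Vsand = Hmat\(G'*G)/Hmat;                        % H^{-1}(G'G)H^{-1}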


Figure 4. f

Turning to the smoothed values of the latent variable $f$, we present them below in figure 5, though the detail is somewhat difficult to see in the figure.

Exercise (4). Perform a Bayesian analysis of the model, treating the state vector as an additional parameter to be estimated, using MCMC methods (specifically, Gibbs sampling) to explore the posterior. Provide a thorough characterization of the non-state parameter posterior distributions. Plot the estimated (posterior mean) state sequence together with appropriate posterior coverage intervals, and compare to your results in 3. Provide detailed yet concise discussion.

Response to 4. The estimation proceeds as follows. First, we filter. Then, using the filtered values and the recursive equations on pages 16 and 17 of the Bayesian notes, we draw the state path $f_t^{(j)}$. Next, given $f^{(j)}$, we use OLS of $f_t$ on $f_{t-1}$ to get $s^2$ and use that to draw $Q$, which follows $(T-1)s^2/\chi^2(T-1)$. Then, given $Q$, we draw $\phi$ from a normal with mean given by the OLS estimate and variance proportional to $Q$. We draw $H$ and $\lambda$ similarly, from distributions whose parameters are given by OLS of $y$ on $f$. We iterate (500 times), then accumulate (200 points), and use this population to estimate the various quantities of interest.
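One sweep of the sampler just described might be sketched as follows (the actual code is in q4.m). Here ffbs() stands in for the forward-filter/backward-sampler of the Bayesian notes and is hypothetical; priors are taken as flat, the measurement covariance is drawn equation by equation (diagonal H), and lambda, phi, Q, and Hd carry over from the previous sweep or the initialization.

% One Gibbs sweep, under the assumptions stated above.
f    = ffbs(y, lambda, phi, Q, Hd);     % 1) draw the latent state path (T x 1)
T    = numel(f);
fl   = f(1:end-1);  fc = f(2:end);
b    = (fl'*fl)\(fl'*fc);               % OLS of f_t on f_{t-1}
s2   = sum((fc - fl*b).^2)/(T-1);
Q    = (T-1)*s2/chi2rnd(T-1);           % 2) Q | f  ~  (T-1)s^2 / chi2(T-1)
phi  = b + sqrt(Q/(fl'*fl))*randn;      % 3) phi | Q, f  ~  N(OLS, Q (X'X)^{-1})
for i = 1:size(y,2)                     % 4) lambda_i, H_ii from OLS of y^i on f
    li        = (f'*f)\(f'*y(:,i));
    s2i       = sum((y(:,i) - f*li).^2)/(T-1);
    Hd(i)     = (T-1)*s2i/chi2rnd(T-1);
    lambda(i) = li + sqrt(Hd(i)/(f'*f))*randn;
end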


Figure 5. Smoothed f

While in principle the system seems sound, my implementation utterly failed to adequately reproduce the data. For example, the mean of the posterior sample for $\lambda_2$ was .02; for $\lambda_3$ it was .03; and for $\phi$ it was .01. What is worse, the maximum sampled values for these parameters were .16, .30, and .09. Obviously there was a serious limitation in my implementation; the code can be found in q4.m. I therefore do not discuss the distributions of the obviously atrocious parameter estimates. The distribution for the latent variable, however, was excellent: the true value was within the estimated 95% confidence bands 95.5% of the time. A plot can be seen in figure 6, with red being the lower bound, turquoise the upper, blue the true value, and green the estimate. They do not track exactly, but that would not be expected; it is the error bounds which are impressive.
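The coverage figure reported above can be sketched as follows, assuming fdraws (T x ndraw) collects the accumulated state-path draws and ftrue (T x 1) is the simulated true path; both names are illustrative.

% Pointwise 95% bands from the Gibbs draws and the coverage check.
lo       = quantile(fdraws, 0.025, 2);          % lower band, per period
hi       = quantile(fdraws, 0.975, 2);          % upper band, per period
fbar     = mean(fdraws, 2);                     % posterior mean state path
coverage = mean(ftrue >= lo & ftrue <= hi);     % reported as 95.5%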


Figure 6. Gibbs estimation of the latent variable
