Assignment 3

The document discusses estimating autocovariance functions from simulated time series data using ARIMA models. It compares empirical and theoretical autocovariance values at different lags, examines the distribution and variance of estimators, and investigates the autocorrelation and partial autocorrelation functions of AR and MA processes.


N=1000;
R=200;
c1=0.6;
Mdl = arima('Constant',0,'MA',{c1},'MALags',1,'Variance',1);
Y = simulate(Mdl,N,'NumPaths',R);
sigma_vv_0=0; % Sum of ACVF(lag==0) over all realisations
sigma_vv_1=0; % Sum of ACVF(lag==1) over all realisations
for j=1:R
s1=0; % accumulator for lag 1 (renamed from "sum" to avoid shadowing the built-in)
s0=0; % accumulator for lag 0
%autocorr can be used instead of for loops
for i=1:N-1
s1=s1+(Y(i,j)*Y(i+1,j));
end
for i=1:N
s0=s0+(Y(i,j)*Y(i,j));
end
%Adding the ACVF(lag==1) for the j th realisation
sigma_vv_1=sigma_vv_1+s1/(N-1);
%Adding the ACVF(lag==0) for the j th realisation
sigma_vv_0=sigma_vv_0+s0/N;
end
display(["Empirical value of autocovariance for lag 1 is "+sigma_vv_1/R]);

"Empirical value of autocovariance for lag 1 is 0.59996"

display(["Theoretical value of autocovariance for lag 1 is "+c1]);

"Theoretical value of autocovariance for lag 1 is 0.6"

display(["Empirical value of autocovariance for lag 0 is "+sigma_vv_0/(R)]);

"Empirical value of autocovariance for lag 0 is 1.3639"

display(["Theoretical value of autocovariance for lag 0 is "+(1+c1*c1)]);

"Theoretical value of autocovariance for lag 0 is 1.36"

% Sample ACF of the first realisation
autocorr(Y(:,1));
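As the comment inside the loop notes, the same estimates can be obtained without explicit loops; a minimal vectorized sketch, reusing the Y, N and R defined above:

gamma0_hat = mean(mean(Y.^2,1)); % average ACVF at lag 0 over all realisations
gamma1_hat = mean(sum(Y(1:end-1,:).*Y(2:end,:),1)/(N-1)); % average ACVF at lag 1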

% v[k] = e[k] + c1*e[k − 1] + c2*e[k − 2] + c3*e[k − 3]
N=100000;
R=2;
c1=0.6;c2=0.1;c3=0;
c=[c1,c2,c3];
Mdl = arima (Constant=0,MA=c,Variance=1);
Y = simulate (Mdl ,N, 'NumPaths' ,R);
% ACF of the simulated process (c3 = 0, so the model is effectively MA(2))
autocorr(Y(:,1));

c1=0.6;c2=0.1;c3=0.3;
c=[c1,c2,c3];
Mdl = arima (Constant=0,MA=c,Variance=1);
Y = simulate (Mdl ,N, 'NumPaths' ,R);

% ACF for the MA(3) process
autocorr(Y(:,1));

Conclusion: From the above observations it is clear that for an MA(M) process, the ACF is zero for lags greater than M.
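As a cross-check, the theoretical ACVF of an MA(q) process can be computed directly from its coefficients; a minimal sketch, assuming unit innovation variance and reusing the coefficient vector c from above:

b = [1, c]; % MA impulse response, including the lag-0 coefficient
q = numel(b)-1;
gamma = zeros(q+1,1);
for h = 0:q
% gamma(h) = sum_j b(j)*b(j+h); identically zero for h > q
gamma(h+1) = b(1:end-h) * b(1+h:end)';
end
rho = gamma/gamma(1) % theoretical ACF; zero beyond lag q, matching the plots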

N=1000;
R=200;
c1=0.6;
Mdl = arima('Constant',0,'MA',{c1},'MALags',1,'Variance',1);
Y = simulate(Mdl,N,'NumPaths',R);
sigma_vv_1=zeros(N-1,1);
for i=1:N-1
%Calculating auto-covariance for lag 1
%for each i th values across all realisations
cov_=cov(Y(i,:),Y(i+1,:));
sigma_vv_1(i)=cov_(1,2);
end
%Distribution for ACVF (lag==1) represented in form of histogram
histogram(sigma_vv_1);
xline(mean(sigma_vv_1),'-r',{'best estimation',mean(sigma_vv_1)});
xl=xline(c1,'-g',{'truth',c1});
xl.LabelHorizontalAlignment = 'left';

display(["Variance of our ACVF estimator for lag 1 is : "+var(sigma_vv_1)]);

"Variance of our ACVF estimator for lag 1 is : 0.011546"

sigma_vv_2=zeros(N-2,1); % one entry per lag-2 pair
for i=1:N-2
%Calculating auto-covariance for lag 2
%for each i th values across all realisations
cov_=cov(Y(i,:),Y(i+2,:));
sigma_vv_2(i)=cov_(1,2);
end
%Distribution for ACVF (lag==2) represented in form of histogram
histogram(sigma_vv_2);
xline(mean(sigma_vv_2),'-r',{'best estimation',mean(sigma_vv_2)});
xl=xline(0,'-g',{'truth','0'});
xl.LabelHorizontalAlignment = 'left';

display(["Variance of our ACVF estimator for lag 2 is : "+var(sigma_vv_2)]);

"Variance of our ACVF estimator for lag 2 is : 0.010767"

N=10000;
z=ones(21,1);
err=0; % renamed from "error" to avoid shadowing the built-in function
d1=-0.6;
Mdl = arima('Constant',0,'AR',{-d1},'Variance',1);
Y = simulate(Mdl,N,'NumPaths',1);
[acf,lags]=autocorr(Y);
for i=1:21
%Theoretical ACF for lag (i-1)
z(i)=(-d1)^(i-1);
err=err+(z(i)-acf(i))^2;
end
display(["Mean square error in theoretical and empirical ACF is : "+err/21]);

"Mean square error in theoretical and empirical ACF is : 0.00017094"

scatter(lags,acf,'blue');
hold on
scatter(lags,z,'red');
hold off
xlabel("Lag");
ylabel("Sample Autocorrelation");
legend("Empirical","Theoretical");

From the above graph it is clear that for an AR(1) process, the PACF is non-zero only for lags less than or equal to 1.

Hence, for an AR(M) process, it can be said that the PACF takes non-zero values only for lags less than or equal to M.

[pacf,lags]=parcorr(Y);
display(["Theoretical Partial ACF for lag 2 is "+(acf(3)-acf(2)^2)/(1-acf(2)^2)+"
Empirical Partial ACF for lag 2 is "+pacf(3)]);

"Theoretical Partial ACF for lag 2 is 0.0098241 Empirical Partial ACF for lag 2 is 0.0097593"

display(["Error in Theoretical Partial ACF and Empirical Partial ACF for lag 2 is
"+((acf(3)-acf(2)^2)/(1-acf(2)^2)-pacf(3))]);

"Error in Theoretical Partial ACF and Empirical Partial ACF for lag 2 is 6.4794e-05"

N=200;
k=1:N;
f=.24;

snr=[20,10,1,.1];
y_hat=cos(2*pi*f.*k);
fr=(k-1)./N; % frequency axis in cycles/sample: FFT bin k corresponds to (k-1)/N
for i=1:4
%Adding noise
y=awgn(y_hat,snr(i));
%Operating fft on noisy signal
freq_y=(abs(fft(y)).^2)./N;
[t,l]=autocorr(y,NumLags=N-1);
%Operating fft on the ACF values obtained for lags 0 to N-1
freq_acf=(abs(fft(t)).^2)./N;
%Plotting results
subplot(4,2,2*i-1); plot(fr,freq_y);
title("Power spectrum Periodogram (SNR = " + num2str(snr(i)) + ")");
ax = gca;
ax.TitleFontSizeMultiplier = .9;
xlabel("Frequency (cycles/sample)"); ylabel("Power");
ylim([0 9^(i-2)]);
subplot(4,2,2*i); plot(fr,freq_acf);
title("ACF Periodogram (SNR = " + num2str(snr(i)) + ")");
xlabel("Frequency (cycles/sample)"); ylabel("Power");
ylim([0 9^(i-2)]);
end

Detecting periodicities using the ACF method works better than using the raw power spectrum of the signal.

From the above graphs it is clear that the ACF method suppresses the power contributed by the noise, which makes the periodicity of the signal much easier to detect.

The reason is that the ACF of a noisy sinusoid consists only of a sinusoidal term (plus a noise spike at lag 0), whereas the raw signal contains the error term at every sample. The sketch below illustrates this fact.
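A minimal sketch of that reasoning, assuming y[k] = cos(2*pi*f*k) + e[k] with white noise of variance sigma2 (a hypothetical value below), so the time-averaged ACF is roughly 0.5*cos(2*pi*f*h) plus a noise contribution confined to lag 0:

sigma2 = 0.1; % assumed noise variance, for illustration only
h = 0:N-1;
r_theory = 0.5*cos(2*pi*f*h); % the sinusoidal term survives at every lag
r_theory(1) = r_theory(1) + sigma2; % white noise is uncorrelated: lag 0 only
plot(h,r_theory); xlabel("Lag"); ylabel("Theoretical ACF");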
