
Definitions

The document provides an overview of time series analysis, including definitions and techniques such as stationarity, differencing, and various statistical tests like the Durbin-Watson and Dickey-Fuller tests. It also explains the Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) for identifying model components, as well as Moving Average (MA) and Auto-Regression (AR) models for forecasting. These concepts are essential for analyzing and predicting time-dependent data.


1. Time Series Data
A time series is a sequence of observations recorded at specific time intervals
(e.g., daily, monthly, quarterly). Common examples include stock prices,
temperature readings, or sales figures over time.
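A minimal sketch of such data using only the Python standard library — the dates and prices below are hypothetical values for illustration:

```python
from datetime import date, timedelta

# A tiny daily time series: observations indexed by date
# (the values are hypothetical closing prices).
start = date(2024, 1, 1)
series = {start + timedelta(days=i): price
          for i, price in enumerate([101.2, 102.5, 101.9, 103.4])}

for d, v in series.items():
    print(d, v)
```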

2. Stationarity
A time series is stationary if its statistical properties, such as mean, variance, and
autocorrelation, remain constant over time. Stationary series are more predictable and
are suitable for many forecasting models such as ARIMA.

3. Differencing
Differencing is a technique used to transform a non-stationary time series into a
stationary one. It involves subtracting the previous observation from the current
observation to remove trends or seasonality.
Formula:
∇X_t = X_t − X_{t−1}

Here, ∇X_t is the differenced value at time t, and X_t is the original value at time t.
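The formula above amounts to one subtraction per time step; a minimal NumPy sketch (the input values are illustrative) shows how first differencing removes a linear trend:

```python
import numpy as np

# A short series with a linear trend (hypothetical values).
x = np.array([10.0, 12.0, 15.0, 19.0, 24.0])

# First difference: each element is X_t - X_{t-1}.
dx = np.diff(x)
print(dx)  # [2. 3. 4. 5.]
```

Note that the differenced series is one observation shorter than the original, since the first value has no predecessor.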

4. Durbin-Watson Test
The Durbin-Watson test detects the presence of autocorrelation in the residuals
of a regression model. The statistic ranges from 0 to 4:
 A value ≈ 2 indicates no autocorrelation
 A value < 2 indicates positive autocorrelation
 A value > 2 indicates negative autocorrelation
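The statistic itself is just the sum of squared successive residual differences divided by the sum of squared residuals. A minimal NumPy sketch (the residuals are simulated for illustration):

```python
import numpy as np

def durbin_watson(residuals):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2)."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Independent residuals give a value near 2; perfectly alternating
# residuals (strong negative autocorrelation) give a value near 4.
rng = np.random.default_rng(0)
dw_white = durbin_watson(rng.standard_normal(500))
dw_alt = durbin_watson(np.array([1.0, -1.0] * 50))
print(round(dw_white, 2), round(dw_alt, 2))
```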

5. Dickey-Fuller Test
The Dickey-Fuller test checks whether a unit root is present in an
autoregressive model.
 Null hypothesis (H₀): The series has a unit root (non-stationary)
 Alternative hypothesis (H₁): The series is stationary
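The basic test regresses the differenced series on its own lagged level and examines the t-statistic on the lag coefficient; strongly negative values are evidence against a unit root. A rough NumPy sketch of that regression (the simulated series are illustrative, and the statistic must be compared against Dickey-Fuller critical values, not the usual t-distribution):

```python
import numpy as np

def df_tstat(y):
    """t-statistic on gamma in: diff(y)_t = alpha + gamma * y_{t-1} + e_t."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    ylag = y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])   # intercept + lagged level
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / (n - k)                  # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)             # coefficient covariance
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(500))   # random walk: has a unit root
noise = rng.standard_normal(500)             # white noise: stationary

t_walk = df_tstat(walk)
t_noise = df_tstat(noise)
print(round(t_walk, 2), round(t_noise, 2))
```

The stationary series produces a much more negative statistic than the random walk, matching the hypotheses above.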

6. Augmented Dickey-Fuller (ADF) Test
An extension of the Dickey-Fuller test that includes lagged differences of the
series to account for higher-order autocorrelation.
 ADF statistic and p-value help determine whether to reject the null
hypothesis of non-stationarity.
7. ACF (Autocorrelation Function)
ACF measures the correlation between a time series and its lagged versions over
different lags. It helps identify MA (Moving Average) terms in a time series model.
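The sample ACF at lag k is the normalized covariance between the series and its k-step lagged copy. A minimal NumPy sketch (the AR(1) series is simulated for illustration) showing the slow geometric decay typical of an autoregressive process:

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation for lags 0..nlags."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[k:] * x[:len(x) - k]) / denom
                     for k in range(nlags + 1)])

# An AR(1) series with coefficient 0.8 has autocorrelations
# that decay roughly as 0.8, 0.64, 0.51, ...
rng = np.random.default_rng(3)
y = np.zeros(1000)
for t in range(1, 1000):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

r = acf(y, 3)
print(np.round(r, 2))
```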

8. PACF (Partial Autocorrelation Function)
PACF measures the correlation between a time series and its lagged versions,
after removing the effect of shorter lags. It helps identify AR (Auto-Regressive)
terms in a model.
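One way to view the PACF at lag k is as the coefficient on the k-th lag when the series is regressed on all of its first k lags. A rough NumPy sketch of that view (the simulated AR(1) series is illustrative) — for an AR(1) process the PACF should be large at lag 1 and near zero afterwards:

```python
import numpy as np

def pacf_at_lag(x, k):
    """Coefficient on x_{t-k} when x_t is regressed on x_{t-1}, ..., x_{t-k}."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    Y = x[k:]
    X = np.column_stack([x[k - j: len(x) - j] for j in range(1, k + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[-1]

rng = np.random.default_rng(4)
y = np.zeros(1000)
for t in range(1, 1000):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

p1 = pacf_at_lag(y, 1)   # should be close to 0.8
p2 = pacf_at_lag(y, 2)   # should be close to 0
print(round(p1, 2), round(p2, 2))
```

This "cut-off after lag p" pattern is what makes the PACF useful for choosing the AR order.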

9. Moving Average (MA)
A model that predicts future values based on past forecast errors. The MA(q)
model uses the previous q forecast errors to model the current value.
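A minimal NumPy sketch of what an MA(1) series looks like (theta = 0.7 is a hypothetical coefficient): each value is the current shock plus a weighted copy of the previous shock, so the series is autocorrelated only at lag 1:

```python
import numpy as np

rng = np.random.default_rng(5)
theta = 0.7                    # hypothetical MA(1) coefficient
e = rng.standard_normal(2001)  # white-noise forecast errors
x = e[1:] + theta * e[:-1]     # X_t = e_t + theta * e_{t-1}

# Theoretical lag-1 autocorrelation of an MA(1): theta / (1 + theta^2)
xc = x - x.mean()
r1 = np.sum(xc[1:] * xc[:-1]) / np.sum(xc * xc)
print(round(theta / (1 + theta ** 2), 2), round(r1, 2))  # both ~0.47
```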

10. Auto-Regression (AR)
A model that predicts future values based on past observations. The AR(p) model
uses the previous p values of the series to forecast the current value.
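For AR(1), the coefficient can be estimated by simple least squares on the lagged series, and the fitted coefficient times the last observation gives a one-step forecast. A minimal NumPy sketch (phi = 0.6 is a hypothetical coefficient):

```python
import numpy as np

rng = np.random.default_rng(6)
phi = 0.6                       # hypothetical AR(1) coefficient
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Least-squares estimate of phi: regress y_t on y_{t-1} (no intercept).
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

# One-step-ahead forecast from the last observed value.
forecast = phi_hat * y[-1]
print(round(phi_hat, 2))
```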
