Hypothesis Testing
Course: Econometrics I
• Content Covered:
Introduction
What is Econometrics?
Methodology of Econometrics.
Different types of data and pros and cons of using those data.
Regression and correlation.
Chapter 2: SLR
Simple linear regression.
Understanding of PRF and SRF.
OLS and relevance of using OLS.
Distinguish between estimator and estimate.
Estimation of the OLS estimators.
Algebraic properties of OLS estimators.
SST, SSE and SSR. Derivation and interpretation of R square.
Impact of change in unit of measurement of the dependent and independent variables.
Linear and non-linear regression.
Assumptions of SLR [emphasize assumptions 4 and 5].
Proof of unbiasedness of the OLS estimator using assumptions 1 to 4.
Derivation of the variance of the OLS estimator. Proof of unbiasedness of the variance of the OLS estimators using assumptions 1 to 5.
Chapter 3: Multiple Linear Regression
Formation and relevance of multiple linear regression and interpretation of the estimators.
‘Partialling out’ interpretation.
R square and its interpretation.
Assumptions of MLR [discussion of MLR 3].
Proof of Theorems 3.1 to 3.4 (Gauss-Markov) using the assumptions.
To be Covered:
Hypothesis Testing and Prediction:
Test of hypothesis: restricted vs. unrestricted regression, Neyman-Pearson methodology; point and interval estimation; two approaches to hypothesis testing: the confidence-interval approach and the test-of-significance approach; prediction and forecasting: prediction interval, prediction variance.
Multicollinearity:
The nature of multicollinearity, estimation in the presence of multicollinearity,
detection of multicollinearity and its remedial measures.
Heteroscedasticity:
The nature of heteroscedasticity, consequence of using OLS in the presence of
heteroscedastic disturbances, detection of heteroscedasticity and its remedial
measures, the method of weighted least squares (WLS).
Autocorrelation:
The nature of the problem of autocorrelation, consequence of using OLS in the
presence of autocorrelated disturbances, detection of autocorrelation and its
remedial measures, the method of generalized least squares (GLS).
Static Models
A static model relates contemporaneous variables; an example is the static Phillips curve
inf_t = β0 + β1 unem_t + u_t,
where inf_t is the annual inflation rate and unem_t is the unemployment rate. This form of the Phillips curve assumes a constant natural rate of unemployment and constant inflationary expectations, and it can be used to study the contemporaneous tradeoff between inflation and unemployment.
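A minimal sketch of estimating such a static model by OLS, assuming annual data in a hypothetical file phillips.csv with hypothetical columns inf and unem:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical annual data set with columns 'inf' (inflation rate)
# and 'unem' (unemployment rate); the file name is illustrative only.
data = pd.read_csv("phillips.csv")

# Static model: inf_t = beta0 + beta1 * unem_t + u_t
static = smf.ols("inf ~ unem", data=data).fit()
print(static.summary())
```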
A static model with multiple variables:
Finite Distributed Lag Models
In a finite distributed lag (FDL) model, we allow one or more variables to affect y with a lag. For example, for annual observations, consider the model
gfr_t = α0 + δ0 pe_t + δ1 pe_{t-1} + δ2 pe_{t-2} + u_t, (10.4)
where gfr_t is the general fertility rate and pe_t is the real dollar value of the personal tax exemption. Equation (10.4) is an example of the model
y_t = α0 + δ0 z_t + δ1 z_{t-1} + δ2 z_{t-2} + u_t,
which is a finite distributed lag model of order two.
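A minimal sketch of estimating the FDL model in (10.4), assuming a hypothetical file fertility.csv with annual series gfr and pe; the two lags of pe are built before running OLS:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical annual data set with columns 'gfr' (general fertility rate)
# and 'pe' (real personal tax exemption); the file name is illustrative only.
df = pd.read_csv("fertility.csv")

# Two lags of pe, so gfr_t can respond to pe_t, pe_{t-1} and pe_{t-2}
df["pe_lag1"] = df["pe"].shift(1)
df["pe_lag2"] = df["pe"].shift(2)

# FDL of order two: gfr_t = alpha0 + delta0*pe_t + delta1*pe_{t-1} + delta2*pe_{t-2} + u_t
fdl = smf.ols("gfr ~ pe + pe_lag1 + pe_lag2", data=df.dropna()).fit()
print(fdl.summary())
```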
To illustrate the confidence-interval approach, once again we revert to our wages-education example. From the regression results given in Eq. (3.6.1), we know that the slope coefficient is 0.7240. Suppose we postulate that
H0: β2 = 0.5 and H1: β2 ≠ 0.5.
• Following this rule, in our hypothetical example the null-hypothesis value β2 = 0.5 clearly lies outside the 95% confidence interval, so we reject the null hypothesis with 95% confidence.
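A minimal sketch of the confidence-interval calculation. The slope 0.7240 and the 11 df are taken from the example; the standard error is not reported in this extract, so the value 0.07 below is an assumption (it is the value consistent with the t statistic of 3.2 reported later):

```python
from scipy import stats

b2_hat   = 0.7240   # estimated slope from the wages-education regression
se_b2    = 0.07     # assumed standard error (consistent with t = 3.2 reported later)
df_resid = 11       # degrees of freedom (n - 2)
b2_null  = 0.5      # hypothesized value under H0

t_crit = stats.t.ppf(0.975, df_resid)   # two-sided 5% critical value (about 2.201)
lower, upper = b2_hat - t_crit * se_b2, b2_hat + t_crit * se_b2
print(f"95% CI for beta2: ({lower:.4f}, {upper:.4f})")
print("Reject H0" if not (lower <= b2_null <= upper) else "Do not reject H0")
```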
One-Sided or One-Tail Test
Sometimes we have a strong a priori or theoretical expectation (or expectations
based on some previous empirical work) that the alternative hypothesis is one-
sided or unidirectional rather than two-sided, as just discussed. Thus, for our
wages-education example, one could postulate that
H0: β2 ≤ 0.5 and H1: β2 > 0.5.
Perhaps economic theory or prior empirical work suggests that the slope is greater than 0.5. Although the procedure to test this hypothesis can be easily derived from Eq. (5.3.5), the actual mechanics are better explained in terms of the test-of-significance approach discussed next.
Hypothesis Testing: The Test-of-Significance Approach
An alternative but complementary approach to the confidence-interval method is the test-of-significance approach, developed along independent lines by R. A. Fisher and jointly by Neyman and Pearson. Broadly speaking, a test of significance is a procedure by which sample results are used to verify the truth or falsity of a null hypothesis. The key idea behind tests of significance is that of a test statistic (estimator) and the sampling distribution of such a statistic under the null hypothesis. The decision to accept or reject H0 is made on the basis of the value of the test statistic obtained from the data at hand.
Hypothesis Testing: The Test-of-Significance Approach
Recall that the t variable can be defined as
t = (β̂2 − β*2) / se(β̂2),
which follows the t distribution with n − 2 df, where β*2 is the value of β2 under the null hypothesis.
• Since we use the t distribution, the preceding testing procedure is appropriately called the t test. In the language of significance tests, a statistic is said to be statistically significant if
the value of the test statistic lies in the critical region. In this case the null hypothesis is
rejected. By the same token, a test is said to be statistically insignificant if the value of the
test statistic lies in the acceptance region. In this situation, the null hypothesis is not
rejected. In our example, the t test is significant and hence we reject the null hypothesis.
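A minimal sketch of the test-of-significance mechanics for H0: β2 = 0.5, again assuming se(β̂2) = 0.07 (not reported in this extract) and 11 df:

```python
from scipy import stats

b2_hat, b2_null  = 0.7240, 0.5
se_b2, df_resid  = 0.07, 11                    # assumed standard error; df = n - 2

t_stat  = (b2_hat - b2_null) / se_b2           # observed t statistic (about 3.2)
t_crit  = stats.t.ppf(0.975, df_resid)         # two-sided 5% critical value (about 2.201)
p_value = 2 * stats.t.sf(abs(t_stat), df_resid)

# |t| exceeds the critical value, so the statistic lies in the critical region: reject H0.
print(f"t = {t_stat:.2f}, critical t = {t_crit:.3f}, p-value = {p_value:.4f}")
```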
Hypothesis Testing: The Test-of-Significance Approach (One Tail Test)
Suppose that
H0: β2 ≤ 0.5 and H1: β2 > 0.5.
Although H1 is still a composite hypothesis, it is now one-sided. To test this hypothesis, we use the
one-tail test.
The test procedure is the same as before except that the upper confidence limit or critical value now corresponds to
tα = t0.05,
that is, the 5 percent level. From the t table, the critical value of t at the 5 percent level of significance and 11 df is
tα = 1.796.
Clearly t = 3.2 is greater than the critical t value, so we reject the null hypothesis.
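A minimal sketch of the one-tail decision rule, using scipy to obtain the 5 percent one-tail critical value at 11 df reported above:

```python
from scipy import stats

t_stat = 3.2                          # observed t statistic
t_crit = stats.t.ppf(0.95, 11)        # one-tail 5% critical value at 11 df (about 1.796)

# H0: beta2 <= 0.5 vs H1: beta2 > 0.5 -> reject H0 when t exceeds the one-tail critical value
print("Reject H0" if t_stat > t_crit else "Do not reject H0")
```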
Summary