
Welcome To Our Presentation

Presentation Topic: Multiple Regression Analysis (The Problem of Inference)

Group Number-07

Group Profile:

Serial No. Name ID


01 Fatema Akter 19 FIN 047
02 Mehjabin Zaman 19 FIN 045
03 Nowrin Mahajabin Awishe 19 FIN 040
04 Md. Amir Hossen 19 FIN 053
CHAPTER-08

Multiple Regression Analysis: The Problem of Inference

Hypothesis Testing in Multiple Regression: General Comments

Hypothesis testing in multiple regression is a fundamental step in evaluating the significance of predictors and assessing the overall fit of the model. In practice, it takes one or more of the following forms:

1. Testing hypotheses about an individual partial regression coefficient

2. Testing the overall significance of the estimated multiple regression model, that is, finding out
if all the partial slope coefficients are simultaneously equal to zero

3. Testing that two or more coefficients are equal to one another

4. Testing that the partial regression coefficients satisfy certain restrictions

5. Testing the stability of the estimated regression model over time or in different cross-sectional
units

6. Testing the functional form of regression models

Since testing of one or more of these types occurs so commonly in empirical analysis, we devote
a section to each type.

Hypothesis Testing about Individual Regression Coefficients

Hypothesis testing for individual regression coefficients is a key aspect of multiple
regression analysis. It focuses on determining whether a specific predictor has a statistically
significant relationship with the dependent variable. Here is a detailed overview, drawing on
Damodar Gujarati's econometrics perspective:

Purpose of Testing Individual Coefficients


The aim is to test whether an independent variable significantly contributes to explaining the
variation in the dependent variable, given the other variables in the model.

Hypotheses

For any individual regression coefficient βj:

 Null Hypothesis (H0): βj = 0
o This means the predictor Xj has no effect on the dependent variable Y, holding all
other variables constant.
 Alternative Hypothesis (H1): βj ≠ 0
o This indicates that Xj does have a significant effect on Y.

Test Statistic
The t-test is used to evaluate the significance of individual coefficients. The test statistic is
calculated as:

t = β̂j / se(β̂j)

where β̂j is the estimated coefficient and se(β̂j) is its estimated standard error. Under H0, this
statistic follows the t-distribution with n − k − 1 degrees of freedom, where n is the number of
observations and k is the number of predictors. If |t| exceeds the critical t-value at the chosen
significance level (equivalently, if the p-value is below that level), the null hypothesis βj = 0
is rejected.
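As an illustration, the following is a minimal Python sketch (using numpy and statsmodels with simulated data, so every variable name and value here is hypothetical) showing how the coefficient estimates, standard errors, t-statistics, and p-values are obtained:

import numpy as np
import statsmodels.api as sm

# Simulated (hypothetical) data: y depends on x1 but not on x2
rng = np.random.default_rng(42)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 + 0.0 * x2 + rng.normal(size=n)

# Fit the multiple regression y = b0 + b1*x1 + b2*x2 + e
X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()

print(model.params)   # estimated coefficients
print(model.bse)      # standard errors se(b_j)
print(model.tvalues)  # t = b_j / se(b_j) for H0: beta_j = 0
print(model.pvalues)  # two-sided p-values

In this setup the t-statistic on x1 should be large (reject H0), while the t-statistic on x2 should be small (fail to reject H0).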
Testing the Overall Significance of the Sample Regression
Testing the overall significance of the regression model is an essential aspect of hypothesis
testing in multiple regression. It assesses whether the independent variables, taken together,
significantly explain the variability in the dependent variable. The test uses the F-test, which
evaluates the null hypothesis that all the partial slope coefficients are simultaneously equal
to zero, that is, that none of the independent variables helps explain the dependent variable.

F-Test Statistic
The F-test is used to test the overall significance of the regression model. The formula for the F-statistic is:

F = (RSS / k) / (ESS / (n − k − 1))

Where:

 RSS is the regression sum of squares (the variation explained by the model),
 ESS is the error sum of squares (the variation left unexplained by the model),
 k is the number of predictors (excluding the intercept),
 n is the total number of observations.

Under the null hypothesis, this statistic follows an F-distribution with k and n − k − 1 degrees of freedom.

Decision Rule

 Compute the F-statistic from the formula above.
 Compare the calculated F-statistic with the critical value from the F-distribution table at
a chosen significance level α, often set at 0.05.
o If the calculated F is greater than the critical F, reject the null hypothesis.
o If the calculated F is smaller than the critical F, fail to reject the null hypothesis.

Alternatively, you can use the p-value associated with the F-statistic:

 If the p-value is less than or equal to the chosen significance level α, reject the
null hypothesis.
 If the p-value is greater than α, fail to reject the null hypothesis.

Example Calculation (Hypothetical Scenario)
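Since the original figures are not reproduced here, the following minimal Python sketch uses made-up numbers: suppose a regression with k = 2 predictors and n = 30 observations yields RSS = 80 and ESS = 20.

from scipy import stats

rss, ess = 80.0, 20.0  # hypothetical regression and error sums of squares
k, n = 2, 30           # hypothetical number of predictors and observations

f_stat = (rss / k) / (ess / (n - k - 1))    # (80/2) / (20/27) = 54.0
f_crit = stats.f.ppf(0.95, k, n - k - 1)    # 5% critical value, about 3.35
p_value = stats.f.sf(f_stat, k, n - k - 1)  # far below 0.05

print(f_stat, f_crit, p_value)

Because the calculated F (54.0) far exceeds the critical value, the null hypothesis that all the slope coefficients are zero is rejected.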

An Important Relationship between R square and F

The relationship between R square (the coefficient of determination) and the F-statistic in the
context of multiple regression is an important concept in hypothesis testing and model
evaluation. Both of these measures help to assess how well the model explains the variation in
the dependent variable, but they do so in different ways.
Understanding R square

R² is a measure of the proportion of the total variation in the dependent variable that is
explained by the independent variables in the model:

R² = RSS / TSS

Where:

 RSS is the regression sum of squares (the variation explained by the model, as above),
 TSS is the total sum of squares (the total variation in the dependent variable).

R² ranges from 0 to 1:

 R square=0 means the model does not explain any of the variability in the dependent
variable.
 R square=1 means the model explains all the variability in the dependent variable.

Adjusted R square accounts for the number of predictors in the model and is useful to assess
model fit when comparing models with different numbers of predictors.
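For reference, the adjusted R square is computed from R² as:

Adjusted R² = 1 − (1 − R²) · (n − 1) / (n − k − 1)

so it penalizes adding predictors that do not improve the fit enough to offset the lost degrees of freedom.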

The F-statistic compares the model with the independent variables to a model with no
independent variables (i.e., only the intercept).

 A large F-statistic indicates that at least one of the independent variables significantly
explains the variability in the dependent variable.

Relationship Between R square and the F-Statistic


The key relationship between R square and the F-statistic is that the F-statistic tests whether
R square is significantly different from zero, that is, whether the variation the model explains
is more than chance would produce.

More specifically, R square and the F-statistic are related as follows:

F = (R² / k) / ((1 − R²) / (n − k − 1))

This formula shows that:

 The larger the R square, the larger the F-statistic will be, suggesting that the model
explains a greater proportion of the variation in the dependent variable.
 If R square is close to 0 (i.e., the model does not explain much of the variation), the
F-statistic will be small, suggesting the model is not significant.
 If R square is close to 1 (i.e., the model explains nearly all the variation), the F-statistic
will be large, suggesting the model is significant.

Example
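The original numbers are not reproduced here, so as a hypothetical illustration, suppose a model with k = 2 predictors and n = 30 observations has R² = 0.80. Then:

F = (0.80 / 2) / ((1 − 0.80) / (30 − 2 − 1)) = 0.40 / 0.0074 ≈ 54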

This large F-statistic suggests that the model is likely to be statistically significant, meaning that
at least one of the predictors is significantly related to the dependent variable.
The “Incremental” or “Marginal” Contribution of an Explanatory
Variable
The incremental (or marginal) contribution of an explanatory variable in a regression model
refers to how much that variable uniquely explains the variation in the dependent variable,
beyond what is already explained by other variables in the model. In other words, it measures the
added value of including the variable in the model.

In the context of multiple regression, when you include a new explanatory variable (predictor)
into a model, you are testing its ability to explain the variance in the dependent variable after
considering the effect of other predictors. The incremental contribution refers to how much
additional variation in the dependent variable can be explained when this new variable is added
to the model.

Example: Predicting Stock Returns

Suppose you are building a regression model to predict stock returns (y) using financial
indicators. Initially, you use Market Return (x1) as the only explanatory variable. Now, you
want to test whether adding Price-to-Earnings Ratio (P/E Ratio) (x2) improves the model.

Process

1. Old Model (Without P/E Ratio):

Start with a regression model that includes only Market Return:

Stock Return = β0 + β1(Market Return) + ϵ

Old R² = 0.65
(The model explains 65% of the variation in stock returns.)

2. New Model (With P/E Ratio):

Now, you add the P/E Ratio as an additional variable:


Stock Return = β0 + β1(Market Return) + β2(P/E Ratio) + ϵ

New R² = 0.80
(The model explains 80% of the variation in stock returns.)

3. Incremental Contribution (Change in R²):

The marginal contribution of the P/E Ratio is measured by the increase in R²:

ΔR² = R²(new) − R²(old) = 0.80 − 0.65 = 0.15

This means the P/E Ratio explains an additional 15% of the variation in stock returns.

Formally, the incremental contribution of an explanatory variable is quantified by looking at
changes in key model metrics, particularly R² (the coefficient of determination) or Adjusted R²,
which measure the proportion of variance explained by the model.
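As a minimal sketch of this procedure in Python (statsmodels with simulated data, so the coefficients and sample size are hypothetical), the change in R² and the corresponding incremental F-test for the added variable can be computed as follows:

import numpy as np
import statsmodels.api as sm

# Hypothetical data mimicking the stock-return example
rng = np.random.default_rng(7)
n = 200
market_return = rng.normal(size=n)
pe_ratio = rng.normal(size=n)
stock_return = 0.5 + 0.8 * market_return + 0.3 * pe_ratio + rng.normal(scale=0.5, size=n)

# Old model: Market Return only
old = sm.OLS(stock_return, sm.add_constant(market_return)).fit()

# New model: Market Return plus P/E Ratio
X_new = sm.add_constant(np.column_stack([market_return, pe_ratio]))
new = sm.OLS(stock_return, X_new).fit()

# Incremental contribution: change in R square
print("Delta R^2:", new.rsquared - old.rsquared)

# Incremental (partial) F-test: does the added variable significantly improve the fit?
f_stat, p_value, df_diff = new.compare_f_test(old)
print("F:", f_stat, "p-value:", p_value)

The incremental F-test printed here is equivalent to computing F = ((R²new − R²old) / m) / ((1 − R²new) / (n − knew − 1)), where m is the number of variables added.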

Testing the Equality of Two Regression Coefficients

Testing the equality of two regression coefficients is an important concept in hypothesis testing,
especially when you are interested in determining whether two predictors in a regression model
have the same effect on the dependent variable. This test is often referred to as a test for the
equality of coefficients or a contrast test. It allows you to test the hypothesis that two
coefficients (β1 and β2) in a multiple regression model are equal, which can provide insight into
the relative importance or effect of two predictors.
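Under the classical assumptions, the test statistic for H0: β1 = β2 is a t-statistic built from the estimated difference and its standard error:

t = (β̂1 − β̂2) / sqrt[var(β̂1) + var(β̂2) − 2·cov(β̂1, β̂2)]

which follows the t-distribution with n − k − 1 degrees of freedom under the null hypothesis. As a minimal Python sketch (statsmodels with simulated data and hypothetical regressor names x1 and x2), this linear restriction can be tested directly:

import numpy as np
import statsmodels.api as sm

# Hypothetical data with two predictors whose effects we want to compare
rng = np.random.default_rng(1)
n = 150
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.9 * x1 + 0.7 * x2 + rng.normal(size=n)

# Default exog names for a numpy design matrix are const, x1, x2
X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()

# t-test of the single linear restriction beta1 = beta2
print(model.t_test("x1 = x2"))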
