Chapter 14: Multiple Regression

Chapter Goals

After completing this chapter, you should be able to:
▪ apply multiple regression analysis to business
decision-making situations
▪ analyze and interpret the computer output for a
multiple regression model
▪ perform residual analysis for the multiple
regression model
▪ test the significance of the independent
variables in a multiple regression model
Chapter Goals
(continued)

After completing this chapter, you should be able to:
▪ use a coefficient of partial determination to test
portions of the multiple regression model
▪ incorporate qualitative variables into the
regression model by using dummy variables
▪ use interaction terms in regression models
The Multiple Regression
Model
Idea: Examine the linear relationship between
1 dependent (Y) & 2 or more independent variables (Xi)

Multiple Regression Model with k Independent Variables:

Y = β0 + β1X1 + β2X2 + … + βkXk + ε

where β0 is the Y-intercept, β1, …, βk are the population slopes, and ε is the random error.
Multiple Regression Equation

The coefficients of the multiple regression model are estimated using sample data.

Multiple regression equation with k independent variables:

Ŷ = b0 + b1X1 + b2X2 + … + bkXk

where Ŷ is the estimated (predicted) value of Y, b0 is the estimated intercept, and b1, …, bk are the estimated slope coefficients.

In this chapter we will always use Excel to obtain the regression slope coefficients and other regression summary measures.
Example:
2 Independent Variables
▪ A distributor of frozen dessert pies wants to evaluate factors thought to influence demand
▪ Dependent variable: Pie sales (units per week)
▪ Independent variables: Price (in $)
Advertising ($100’s)
▪ Data are collected for 15 weeks
Pie Sales Example

Multiple regression equation:
Sales = a + b1 (Price) + b2 (Advertising)

Week   Pie Sales   Price ($)   Advertising ($100s)
  1      350         5.50            3.3
  2      460         7.50            3.3
  3      350         8.00            3.0
  4      430         8.00            4.5
  5      350         6.80            3.0
  6      380         7.50            4.0
  7      430         4.50            3.0
  8      470         6.40            3.7
  9      450         7.00            3.5
 10      490         5.00            4.0
 11      340         7.20            3.5
 12      300         7.90            3.2
 13      440         5.90            4.0
 14      450         5.00            3.5
 15      300         7.00            2.7
Multiple Regression Output
Regression Statistics
Multiple R 0.72213
R Square 0.52148
Adjusted R Square 0.44172
Standard Error 47.46341
Observations 15

ANOVA df SS MS F Significance F
Regression 2 29460.027 14730.013 6.53861 0.01201
Residual 12 27033.306 2252.776
Total 14 56493.333

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 306.52619 114.25389 2.68285 0.01993 57.58835 555.46404
Price -24.97509 10.83213 -2.30565 0.03979 -48.57626 -1.37392
Advertising 74.13096 25.96732 2.85478 0.01449 17.55303 130.70888
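The same regression can be run outside Excel. Below is a minimal sketch in Python using statsmodels, with the 15 weeks of data from the table above; the package choice and variable names are illustrative assumptions, not part of the chapter, but the fitted coefficients should match the Excel output.

```python
import pandas as pd
import statsmodels.formula.api as smf

# The 15 weeks of pie-sales data from the example table.
data = pd.DataFrame({
    "Sales":       [350, 460, 350, 430, 350, 380, 430, 470, 450, 490,
                    340, 300, 440, 450, 300],
    "Price":       [5.50, 7.50, 8.00, 8.00, 6.80, 7.50, 4.50, 6.40, 7.00, 5.00,
                    7.20, 7.90, 5.90, 5.00, 7.00],
    "Advertising": [3.3, 3.3, 3.0, 4.5, 3.0, 4.0, 3.0, 3.7, 3.5, 4.0,
                    3.5, 3.2, 4.0, 3.5, 2.7],
})

# Ordinary least squares: Sales = b0 + b1*Price + b2*Advertising
model = smf.ols("Sales ~ Price + Advertising", data=data).fit()
print(model.summary())  # coefficients, R Square, F test, and t tests as in the Excel output
```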
The Multiple Regression Equation

Sales = 306.526 - 24.975 (Price) + 74.131 (Advertising)

where
Sales is in number of pies per week
Price is in $
Advertising is in $100's.

b1 = -24.975: sales will decrease, on average, by 24.975 pies per week for each $1 increase in selling price, net of the effects of changes due to advertising.

b2 = 74.131: sales will increase, on average, by 74.131 pies per week for each $100 increase in advertising, net of the effects of changes due to price.
Using The Equation to Make
Predictions
Predict sales for a week in which the selling
price is $5.50 and advertising is $350:

Note that Advertising is in $100's, so $350 means that X2 = 3.5.

Predicted sales is 428.62 pies.
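As a quick check of that prediction, the arithmetic can be done directly from the estimated coefficients; the short Python snippet below uses the values from the regression output above.

```python
# Point prediction from the estimated equation.
b0, b_price, b_adv = 306.52619, -24.97509, 74.13096
price, advertising = 5.50, 3.5  # advertising is in $100s, so $350 -> 3.5
predicted_sales = b0 + b_price * price + b_adv * advertising
print(round(predicted_sales, 2))  # ~428.62 pies
```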
Predictions in PHStat
▪ PHStat | regression | multiple regression …

Check the
“confidence and
prediction interval
estimates” box

Predictions in PHStat
(continued)

The PHStat output shows:
▪ Input values
▪ Predicted Y value
▪ Confidence interval for the mean Y value, given these X's
▪ Prediction interval for an individual Y value, given these X's
Coefficient of
Multiple Determination
▪ Reports the proportion of total variation in Y explained by all X
variables taken together
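In terms of the ANOVA sums of squares (a standard identity; the values below come from the pie-sales output):

r² = SSR / SST = 29,460.027 / 56,493.333 = 0.52148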
Multiple Coefficient of
Determination
(continued)
Regression Statistics
Multiple R 0.72213
R Square 0.52148
Adjusted R Square 0.44172
Standard Error 47.46341
Observations 15

52.1% of the variation in pie sales is explained by the variation in price and advertising.
ANOVA df SS MS F Significance F
Regression 2 29460.027 14730.013 6.53861 0.01201
Residual 12 27033.306 2252.776
Total 14 56493.333

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 306.52619 114.25389 2.68285 0.01993 57.58835 555.46404
Price -24.97509 10.83213 -2.30565 0.03979 -48.57626 -1.37392
Advertising 74.13096 25.96732 2.85478 0.01449 17.55303 130.70888
Adjusted r²
▪ r2 never decreases when a new X variable is
added to the model
▪ This can be a disadvantage when comparing
models
▪ What is the net effect of adding a new variable?
▪ We lose a degree of freedom when a new X
variable is added
▪ Did the new X variable add enough
explanatory power to offset the loss of one
degree of freedom?
Adjusted r²
(continued)
▪ Shows the proportion of variation in Y explained by all X variables, adjusted for the number of X variables used:

    r²adj = 1 - (1 - r²)(n - 1) / (n - k - 1)

(where n = sample size, k = number of independent variables)

▪ Penalizes excessive use of unimportant independent variables
▪ Smaller than r²
▪ Useful in comparing among models
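For the pie-sales model, for example:

r²adj = 1 - (1 - 0.52148)(15 - 1)/(15 - 2 - 1) ≈ 0.4417

which matches the Adjusted R Square value in the output.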
Adjusted r²
(continued)
Regression Statistics
Multiple R 0.72213
R Square 0.52148
Adjusted R Square 0.44172
Standard Error 47.46341
Observations 15

44.2% of the variation in pie sales is explained by the variation in price and advertising, taking into account the sample size and number of independent variables.
ANOVA df SS MS F Significance F
Regression 2 29460.027 14730.013 6.53861 0.01201
Residual 12 27033.306 2252.776
Total 14 56493.333

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 306.52619 114.25389 2.68285 0.01993 57.58835 555.46404
Price -24.97509 10.83213 -2.30565 0.03979 -48.57626 -1.37392
Advertising 74.13096 25.96732 2.85478 0.01449 17.55303 130.70888
Residuals in Multiple Regression

Two-variable model: a sample observation Yi at (x1i, x2i) is compared with its predicted value Ŷi on the fitted plane.

Residual: ei = (Yi - Ŷi)

The best-fit equation, Ŷ, is found by minimizing the sum of squared errors, Σe².
Multiple Regression Assumptions

Errors (residuals) from the regression model:

    ei = (Yi - Ŷi)

Assumptions:
▪ The errors are normally distributed
▪ Errors have a constant variance
▪ The model errors are independent
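Residual analysis (one of the chapter goals) checks these assumptions by plotting the residuals. A minimal Python sketch, assuming the pie-sales data is available in a CSV file named pie_sales.csv with columns Sales, Price, and Advertising (the file name and column names are assumptions for illustration):

```python
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

data = pd.read_csv("pie_sales.csv")  # assumed file with Sales, Price, Advertising columns
model = smf.ols("Sales ~ Price + Advertising", data=data).fit()

# Residuals vs. fitted values (constant variance) and vs. each predictor (linearity).
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].scatter(model.fittedvalues, model.resid)
axes[0].set_xlabel("Fitted values")
axes[1].scatter(data["Price"], model.resid)
axes[1].set_xlabel("Price")
axes[2].scatter(data["Advertising"], model.resid)
axes[2].set_xlabel("Advertising")
for ax in axes:
    ax.axhline(0, linestyle="--")
    ax.set_ylabel("Residual")
plt.tight_layout()
plt.show()
```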
Is the Model Significant?
▪ F-Test for Overall Significance of the Model
▪ Shows if there is a linear relationship between all
of the X variables considered together and Y
▪ Use F test statistic
▪ Hypotheses:
H0: β1 = β2 = … = βk = 0 (no linear relationship)
H1: at least one βi ≠ 0 (at least one independent
variable affects Y)
F-Test for Overall Significance
▪ Test statistic:

    F = MSR / MSE = (SSR / k) / (SSE / (n - k - 1))

where F has k (numerator) and (n - k - 1) (denominator) degrees of freedom
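Applying the formula to the pie-sales ANOVA table (MSR = 14,730.013, MSE = 2,252.776), a short check in Python using scipy (an illustrative choice, not part of the chapter):

```python
from scipy import stats

F = 14730.013 / 2252.776            # MSR / MSE from the ANOVA table
p_value = stats.f.sf(F, 2, 12)      # df1 = k = 2, df2 = n - k - 1 = 12
print(round(F, 4), round(p_value, 5))  # ~6.5386 and ~0.012, matching "Significance F"
```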
F-Test for Overall Significance
(continued)
Regression Statistics
Multiple R 0.72213
R Square 0.52148
Adjusted R Square 0.44172
Standard Error 47.46341
Observations 15

The F statistic has 2 and 12 degrees of freedom; "Significance F" is the p-value for the F-test.

ANOVA df SS MS F Significance F
Regression 2 29460.027 14730.013 6.53861 0.01201
Residual 12 27033.306 2252.776
Total 14 56493.333

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 306.52619 114.25389 2.68285 0.01993 57.58835 555.46404
Price -24.97509 10.83213 -2.30565 0.03979 -48.57626 -1.37392
Advertising 74.13096 25.96732 2.85478 0.01449 17.55303 130.70888
F-Test for Overall Significance
(continued)

H0: β1 = β2 = 0
H1: β1 and β2 not both zero
α = .05
df1 = 2, df2 = 12
Critical value: F.05 = 3.885

Test statistic: F = 14730.013 / 2252.776 = 6.5386

Decision: Since the F test statistic falls in the rejection region (6.5386 > 3.885; p-value = .0120 < .05), reject H0.

Conclusion: There is evidence that at least one independent variable affects Y.
Are Individual Variables
Significant?
▪ Use t-tests of individual variable slopes
▪ Shows if there is a linear relationship between the
variable Xi and Y
▪ Hypotheses:
▪ H0: βi = 0 (no linear relationship)
▪ H1: βi ≠ 0 (linear relationship does exist
between Xi and Y)
Are Individual Variables
Significant?
(continued)

H0: βi = 0 (no linear relationship)
H1: βi ≠ 0 (linear relationship does exist between Xi and Y)

Test statistic:

    t = bi / S_bi       (df = n - k - 1)
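For example, the Price t statistic can be recomputed directly from the output values; a short check in Python using scipy (an illustrative choice, not part of the chapter):

```python
from scipy import stats

t_price = -24.97509 / 10.83213                  # b1 / S_b1 from the output
p_price = 2 * stats.t.sf(abs(t_price), df=12)   # two-tailed, df = n - k - 1 = 12
print(round(t_price, 4), round(p_price, 4))     # ~-2.3057 and ~0.0398
```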
Are Individual Variables
Significant?
(continued)
Regression Statistics
Multiple R 0.72213
R Square 0.52148
Adjusted R Square 0.44172
Standard Error 47.46341
Observations 15

t-value for Price is t = -2.306, with p-value .0398
t-value for Advertising is t = 2.855, with p-value .0145

ANOVA df SS MS F Significance F
Regression 2 29460.027 14730.013 6.53861 0.01201
Residual 12 27033.306 2252.776
Total 14 56493.333

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 306.52619 114.25389 2.68285 0.01993 57.58835 555.46404
Price -24.97509 10.83213 -2.30565 0.03979 -48.57626 -1.37392
Advertising 74.13096 25.96732 2.85478 0.01449 17.55303 130.70888
Inferences about the Slope:
t Test Example
H0: βi = 0
H1: βi ≠ 0
d.f. = 15 - 2 - 1 = 12
α = .05
tα/2 = 2.1788 (rejection regions: t < -2.1788 or t > 2.1788)

From Excel output:
              Coefficients   Standard Error   t Stat     P-value
Price         -24.97509      10.83213         -2.30565   0.03979
Advertising    74.13096      25.96732          2.85478   0.01449

The test statistic for each variable falls in the rejection region (p-values < .05).

Decision: Reject H0 for each variable.
Conclusion: There is evidence that both Price and Advertising affect pie sales at α = .05.
Confidence Interval Estimate
for the Slope
Confidence interval for the population slope βi:

    bi ± tα/2 S_bi       where t has (n - k - 1) d.f.

              Coefficients   Standard Error
Intercept     306.52619      114.25389
Price         -24.97509      10.83213
Advertising    74.13096      25.96732

Here, t has (15 - 2 - 1) = 12 d.f.

Example: Form a 95% confidence interval for the effect of changes in price (X1) on pie sales:
    -24.975 ± (2.1788)(10.832)
So the interval is (-48.576, -1.374)
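The same interval can be recomputed from the output values; a short check in Python using scipy (an illustrative choice, not part of the chapter):

```python
from scipy import stats

b1, se_b1 = -24.97509, 10.83213
t_crit = stats.t.ppf(0.975, df=12)               # 2.1788 for a 95% interval
print(b1 - t_crit * se_b1, b1 + t_crit * se_b1)  # ~(-48.576, -1.374)
```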
Confidence Interval Estimate
for the Slope
(continued)
Confidence interval for the population slope βi

Coefficients Standard Error … Lower 95% Upper 95%


Intercept 306.52619 114.25389 … 57.58835 555.46404
Price -24.97509 10.83213 … -48.57626 -1.37392
Advertising 74.13096 25.96732 … 17.55303 130.70888

Example: Excel output also reports these interval endpoints. Weekly sales are estimated to be reduced by between 1.37 and 48.58 pies for each $1 increase in the selling price.
Using Dummy Variables

▪ A dummy variable is a categorical explanatory variable with two levels:
▪ yes or no, on or off, male or female
▪ coded as 0 or 1
▪ Regression intercepts are different if the variable
is significant
▪ Assumes equal slopes for other variables
▪ If more than two levels, the number of dummy
variables needed is (number of levels - 1)
Dummy-Variable Example
(with 2 Levels)

Let:
Y = pie sales
X1 = price
X2 = holiday (X2 = 1 if a holiday occurred during the week)
(X2 = 0 if there was no holiday that week)
Dummy-Variable Example
(with 2 Levels)
(continued)

Holiday (X2 = 1):      Ŷ = (b0 + b2) + b1X1
No Holiday (X2 = 0):   Ŷ = b0 + b1X1

Different intercept, same slope.

[Figure: pie sales (Y) plotted against price (X1), showing two parallel lines with intercepts b0 + b2 (Holiday) and b0 (No Holiday)]

If H0: β2 = 0 is rejected, then "Holiday" has a significant effect on pie sales.
Interpreting the Dummy Variable
Coefficient (with 2 Levels)
Example:

    Sales = b0 + b1 (Price) + b2 (Holiday)

Sales: number of pies sold per week
Price: pie price in $
Holiday: 1 if a holiday occurred during the week, 0 if no holiday occurred

b2 = 15: on average, sales were 15 pies greater in weeks with a holiday than in weeks without a holiday, given the same price.
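A minimal sketch of fitting such a dummy-variable model in Python with statsmodels; the data below is made up purely for illustration and is not the chapter's pie-sales data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Made-up illustrative data: weekly sales, price, and a 0/1 holiday dummy.
df = pd.DataFrame({
    "Sales":   [410, 380, 450, 370, 480, 360],
    "Price":   [5.5, 6.8, 5.0, 7.2, 5.2, 7.5],
    "Holiday": [0,   0,   1,   0,   1,   0],
})

# The dummy enters the model as an ordinary regressor; its coefficient shifts
# the intercept for holiday weeks while the Price slope stays the same.
model = smf.ols("Sales ~ Price + Holiday", data=df).fit()
print(model.params)
```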
Dummy-Variable Models
(more than 2 Levels)
▪ The number of dummy variables is one less
than the number of levels
▪ Example:
  Y = house price; X1 = square feet
▪ If style of the house is also thought to matter:
  Style = ranch, split level, condo
  Three levels, so two dummy variables are needed
Dummy-Variable Models
(more than 2 Levels)
(continued)
▪ Example: Let “condo” be the default category, and let
X2 and X3 be used for the other two categories:

Y = house price
X1 = square feet
X2 = 1 if ranch, 0 otherwise
X3 = 1 if split level, 0 otherwise

The multiple regression equation is:

    Ŷ = b0 + b1X1 + b2X2 + b3X3


Interpreting the Dummy Variable
Coefficients (with 3 Levels)
Consider a regression equation in which the estimated dummy coefficients are b2 = 23.53 and b3 = 18.84 (house price in $1000s):

For a condo: X2 = X3 = 0 (the default category)

For a ranch: X2 = 1, X3 = 0
With the same square feet, a ranch will have an estimated average price of 23.53 thousand dollars more than a condo.

For a split level: X2 = 0, X3 = 1
With the same square feet, a split level will have an estimated average price of 18.84 thousand dollars more than a condo.
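A sketch of creating the two dummy variables (ranch and split level, with condo as the default) in Python with pandas; the rows are illustrative only.

```python
import pandas as pd

# Illustrative rows; Style has three levels (ranch, split level, condo).
df = pd.DataFrame({
    "Price": [245.0, 312.5, 279.9, 199.0],   # house price ($1000s)
    "SqFt":  [1400, 2100, 1800, 1250],
    "Style": ["condo", "ranch", "split level", "condo"],
})

# Three levels -> two dummy variables. drop_first=True drops "condo"
# (alphabetically first), making it the default category.
dummies = pd.get_dummies(df["Style"], drop_first=True, dtype=int)
X = pd.concat([df[["SqFt"]], dummies], axis=1)
print(X)  # columns: SqFt, ranch, split level
```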
