Regression Models
Chapter 4
To accompany Quantitative Analysis for Management, Tenth Edition, by Render, Stair, and Hanna
PowerPoint slides created by Jeff Heyl
© 2009 Prentice-Hall, Inc.
Learning Objectives
After completing this chapter, students will be able to:
1. Identify variables and use them in a regression model
2. Develop simple linear regression equations from sample data and interpret the slope and intercept
3. Compute the coefficient of determination and the coefficient of correlation and interpret their meanings
4. Interpret the F-test in a linear regression model
5. List the assumptions used in regression and use residual plots to identify problems
Learning Objectives
After completing this chapter, students will be able to:
6. Develop a multiple regression model and use it to predict
7. Use dummy variables to model categorical data
8. Determine which variables should be included in a multiple regression model
9. Transform a nonlinear function into a linear one for use in regression
10. Understand and avoid common mistakes made in the use of regression analysis
Chapter Outline
4.1 Introduction
4.2 Scatter Diagrams
4.3 Simple Linear Regression
4.4 Measuring the Fit of the Regression Model
4.5 Using Computer Software for Regression
4.6 Assumptions of the Regression Model
Chapter Outline
4.7 Testing the Model for Significance
4.8 Multiple Regression Analysis
4.9 Binary or Dummy Variables
4.10 Model Building
4.11 Nonlinear Regression
4.12 Cautions and Pitfalls in Regression Analysis
Introduction
Regression analysis is a very valuable tool for a manager
Regression can be used to:
Understand the relationship between variables
Predict the value of one variable based on another variable
Examples:
Determining the best location for a new store
Studying the effectiveness of advertising dollars in increasing sales volume
Introduction
The variable to be predicted is called the dependent variable, sometimes called the response variable
The value of this variable depends on the value of the independent variable, sometimes called the explanatory or predictor variable

Dependent variable = Independent variable + Independent variable
Scatter Diagram
Graphing is a helpful way to investigate the relationship between variables
A scatter diagram or scatter plot is often used
The independent variable is normally plotted on the X axis
The dependent variable is normally plotted on the Y axis
Triple A Construction
Triple A Construction renovates old homes
They have found that the dollar volume of renovation work is dependent on the area payroll

Table 4.1
TRIPLE A'S SALES ($100,000s)   LOCAL PAYROLL ($100,000,000s)
6                              3
8                              4
9                              6
5                              4
4.5                            2
9.5                            5
Triple A Construction
Figure 4.1: Scatter plot of sales ($100,000) versus payroll ($100 million)
Simple Linear Regression
Regression models are used to test if there is a relationship between variables (predict sales based on payroll)
There is some random error that cannot be predicted

Y = β0 + β1X + ε

where
Y  = dependent variable (response)
X  = independent variable (predictor or explanatory)
β0 = intercept (value of Y when X = 0)
β1 = slope of the regression line
ε  = random error
Simple Linear Regression
True values for the slope and intercept are not known, so they are estimated using sample data

Ŷ = b0 + b1X

where
Ŷ  = predicted value of the dependent variable (response)
X  = independent variable (predictor or explanatory)
b0 = intercept (value of Ŷ when X = 0)
b1 = slope of the regression line
Triple A Construction
Triple A Construction is trying to predict sales based on area payroll
Y = Sales
X = Area payroll
The line chosen in Figure 4.1 is the one that minimizes the errors

Error = (Actual value) - (Predicted value)
e = Y - Ŷ
Least Squares Regression
Errors can be positive or negative, so the average error could be zero even though individual errors could be large
Least squares regression minimizes the sum of the squared errors

Payroll line fit plot: sales ($100,000) versus payroll ($100,000,000s)
Triple A Construction
For the simple linear regression model, the values of the intercept and slope can be calculated using the formulas below

Ŷ = b0 + b1X

b1 = Σ(X - X̄)(Y - Ȳ) / Σ(X - X̄)²

b0 = Ȳ - b1X̄
Triple A Construction
Table 4.2: Regression calculations

Y     X    (X - X̄)²         (X - X̄)(Y - Ȳ)
6     3    (3 - 4)² = 1      (3 - 4)(6 - 7) = 1
8     4    (4 - 4)² = 0      (4 - 4)(8 - 7) = 0
9     6    (6 - 4)² = 4      (6 - 4)(9 - 7) = 4
5     4    (4 - 4)² = 0      (4 - 4)(5 - 7) = 0
4.5   2    (2 - 4)² = 4      (2 - 4)(4.5 - 7) = 5
9.5   5    (5 - 4)² = 1      (5 - 4)(9.5 - 7) = 2.5

ΣY = 42, Ȳ = 42/6 = 7
ΣX = 24, X̄ = 24/6 = 4
Σ(X - X̄)² = 10
Σ(X - X̄)(Y - Ȳ) = 12.5
Triple A Construction
Regression calculations

X̄ = ΣX/6 = 24/6 = 4
Ȳ = ΣY/6 = 42/6 = 7

b1 = Σ(X - X̄)(Y - Ȳ) / Σ(X - X̄)² = 12.5/10 = 1.25

b0 = Ȳ - b1X̄ = 7 - (1.25)(4) = 2

Therefore Ŷ = 2 + 1.25X
Triple A Construction
sales = 2 + 1.25(payroll)

If the payroll next year is $600 million:

Ŷ = 2 + 1.25(6) = 9.5, or $950,000
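The least-squares calculation above can be sketched in a few lines of Python. This is a minimal illustration (not from the text) using the Triple A data from Table 4.1:

```python
# Least-squares slope and intercept for the Triple A Construction data
# (sales in $100,000s, payroll in $100,000,000s), using the formulas
# b1 = sum((X - Xbar)(Y - Ybar)) / sum((X - Xbar)^2) and b0 = Ybar - b1*Xbar.

payroll = [3, 4, 6, 4, 2, 5]       # X, independent variable
sales = [6, 8, 9, 5, 4.5, 9.5]     # Y, dependent variable

n = len(payroll)
x_bar = sum(payroll) / n
y_bar = sum(sales) / n

b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(payroll, sales)) / \
     sum((x - x_bar) ** 2 for x in payroll)
b0 = y_bar - b1 * x_bar

print(b0, b1)                      # 2.0 1.25

# Prediction for a payroll of $600 million (X = 6):
prediction = b0 + b1 * 6
print(prediction)                  # 9.5, i.e. $950,000
```

The result matches the slide: Ŷ = 2 + 1.25X, and a payroll of 6 predicts sales of 9.5.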
Measuring the Fit of the Regression Model
Regression models can be developed for any variables X and Y
How do we know the model is actually helpful in predicting Y based on X?
We could just take the average error, but the positive and negative errors would cancel each other out
Three measures of variability are:
SST: total variability about the mean
SSE: variability about the regression line
SSR: total variability that is explained by the model
Measuring the Fit of the Regression Model
Sum of squares total: SST = Σ(Y - Ȳ)²
Sum of squared errors: SSE = Σe² = Σ(Y - Ŷ)²
Sum of squares due to regression: SSR = Σ(Ŷ - Ȳ)²
An important relationship: SST = SSR + SSE
Measuring the Fit of the Regression Model
Table 4.3

Y     X    (Y - Ȳ)²            Ŷ                     (Y - Ŷ)²   (Ŷ - Ȳ)²
6     3    (6 - 7)² = 1        2 + 1.25(3) = 5.75    0.0625     1.563
8     4    (8 - 7)² = 1        2 + 1.25(4) = 7.00    1          0
9     6    (9 - 7)² = 4        2 + 1.25(6) = 9.50    0.25       6.25
5     4    (5 - 7)² = 4        2 + 1.25(4) = 7.00    4          0
4.5   2    (4.5 - 7)² = 6.25   2 + 1.25(2) = 4.50    0          6.25
9.5   5    (9.5 - 7)² = 6.25   2 + 1.25(5) = 8.25    1.5625     1.563

Σ(Y - Ȳ)² = 22.5    Σ(Y - Ŷ)² = 6.875    Σ(Ŷ - Ȳ)² = 15.625
Ȳ = 7    SST = 22.5    SSE = 6.875    SSR = 15.625
Measuring the Fit of the Regression Model
Sum of squares total: SST = Σ(Y - Ȳ)²
Sum of squared errors: SSE = Σe² = Σ(Y - Ŷ)²
Sum of squares due to regression: SSR = Σ(Ŷ - Ȳ)²
An important relationship: SST = SSR + SSE
SSR: explained variability
SSE: unexplained variability

For Triple A Construction:
SST = 22.5
SSE = 6.875
SSR = 15.625
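The sums of squares above can be verified directly. A minimal sketch (not from the text), using the fitted line Ŷ = 2 + 1.25X and the Triple A data:

```python
# Decompose total variability into explained (SSR) and unexplained (SSE)
# parts for the fitted line Y-hat = 2 + 1.25*X (values match Table 4.3).

payroll = [3, 4, 6, 4, 2, 5]
sales = [6, 8, 9, 5, 4.5, 9.5]

y_bar = sum(sales) / len(sales)
y_hat = [2 + 1.25 * x for x in payroll]

SST = sum((y - y_bar) ** 2 for y in sales)               # total variability
SSE = sum((y - yh) ** 2 for y, yh in zip(sales, y_hat))  # about the line
SSR = sum((yh - y_bar) ** 2 for yh in y_hat)             # explained by model

print(SST, SSE, SSR)   # 22.5 6.875 15.625, and SST = SSR + SSE
```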
Measuring the Fit of the Regression Model
Figure 4.2: Deviations about the regression line Ŷ = 2 + 1.25X, showing Y - Ŷ, Ŷ - Ȳ, and Y - Ȳ for sales ($100,000) versus payroll ($100 million)
Coefficient of Determination
The proportion of the variability in Y explained by the regression equation is called the coefficient of determination
The coefficient of determination is r²

r² = SSR/SST = 1 - SSE/SST

For Triple A Construction:

r² = 15.625/22.5 = 0.6944

About 69% of the variability in Y is explained by the equation based on payroll (X)
Correlation Coefficient
The correlation coefficient is an expression of the strength of the linear relationship
It will always be between +1 and -1
The correlation coefficient is r

r = √r²  (with the sign of the slope)

For Triple A Construction:

r = √0.6944 = 0.8333
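Both fit measures follow directly from the sums of squares. A small sketch (not from the text) reproducing the Triple A values:

```python
# Coefficient of determination and correlation coefficient from the
# sums of squares computed for Triple A Construction.

SST, SSE, SSR = 22.5, 6.875, 15.625

r_squared = SSR / SST        # equivalently 1 - SSE/SST
r = r_squared ** 0.5         # takes the sign of the slope (positive here)

print(round(r_squared, 4), round(r, 4))   # 0.6944 0.8333
```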
Correlation Coefficient
Figure 4.3: Four scatter plots of Y versus X:
(a) Perfect positive correlation, r = +1
(b) Positive correlation, 0 < r < 1
(c) No correlation, r = 0
(d) Perfect negative correlation, r = -1
Using Computer Software for Regression
Program 4.1A

Using Computer Software for Regression
Program 4.1B

Using Computer Software for Regression
Program 4.1C

Using Computer Software for Regression
Program 4.1D

Using Computer Software for Regression
Program 4.1D
The correlation coefficient is called Multiple R in Excel
Assumptions of the Regression Model
If we make certain assumptions about the errors in a regression model, we can perform statistical tests to determine if the model is useful
1. Errors are independent
2. Errors are normally distributed
3. Errors have a mean of zero
4. Errors have a constant variance
A plot of the residuals (errors) will often highlight any glaring violations of the assumptions
Residual Plots
A random plot of residuals
Figure 4.4A: Error versus X
Residual Plots
Nonconstant error variance
Errors increase as X increases, violating the constant variance assumption
Figure 4.4B: Error versus X
Residual Plots
Nonlinear relationship
Errors consistently increasing and then consistently decreasing indicate that the model is not linear
Figure 4.4C: Error versus X
Estimating the Variance
Errors are assumed to have a constant variance (σ²), but we usually don't know this
It can be estimated using the mean squared error (MSE), s²

s² = MSE = SSE / (n - k - 1)

where
n = number of observations in the sample
k = number of independent variables
Estimating the Variance
For Triple A Construction:

s² = MSE = SSE / (n - k - 1) = 6.8750 / (6 - 1 - 1) = 1.7188

We can estimate the standard deviation, s
This is also called the standard error of the estimate or the standard deviation of the regression

s = √MSE = √1.7188 = 1.31
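The variance estimate is a one-line formula. A minimal sketch (not from the text) with the Triple A numbers:

```python
# Estimate the error variance with the mean squared error (MSE) and take
# its square root to get the standard error of the estimate.

SSE = 6.875
n = 6        # observations
k = 1        # independent variables

mse = SSE / (n - k - 1)     # s^2
s = mse ** 0.5              # standard error of the estimate

print(mse, round(s, 2))     # 1.71875 1.31
```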
Testing the Model for Significance
When the sample size is too small, you can get good values for MSE and r² even if there is no relationship between the variables
Testing the model for significance helps determine if the values are meaningful
We do this by performing a statistical hypothesis test
Testing the Model for Significance
We start with the general linear model

Y = β0 + β1X + ε

If β1 = 0, Y does not depend on X, so the null hypothesis is that there is no linear relationship between X and Y
The alternate hypothesis is that there is a linear relationship (β1 ≠ 0)
If the null hypothesis can be rejected, we have statistical evidence of a relationship
We use the F statistic for this test
Testing the Model for Significance
The F statistic is based on the MSE and MSR

MSR = SSR / k

where
k = number of independent variables in the model

The F statistic is

F = MSR / MSE

This describes an F distribution with
degrees of freedom for the numerator df1 = k
degrees of freedom for the denominator df2 = n - k - 1
Testing the Model for Significance
If there is very little error, the MSE would be small and the F statistic would be large, indicating the model is useful
If the F statistic is large, the significance level (p-value) will be low, indicating it is unlikely this would have occurred by chance
So when the F value is large, we can reject the null hypothesis and accept that there is a linear relationship between X and Y, and the values of the MSE and r² are meaningful
Steps in a Hypothesis Test
1. Specify null and alternative hypotheses:
   H0: β1 = 0
   H1: β1 ≠ 0
2. Select the level of significance (α). Common values are 0.01 and 0.05
3. Calculate the value of the test statistic using the formula
   F = MSR / MSE
Steps in a Hypothesis Test
4. Make a decision using one of the following methods:
a) Reject the null hypothesis if the test statistic is greater than the F value from the table in Appendix D. Otherwise, do not reject the null hypothesis:
   Reject if Fcalculated > Fα,df1,df2
   df1 = k
   df2 = n - k - 1
b) Reject the null hypothesis if the observed significance level, or p-value, is less than the level of significance (α). Otherwise, do not reject the null hypothesis:
   p-value = P(F > calculated test statistic)
   Reject if p-value < α
Triple A Construction
Step 1.
H0: β1 = 0 (no linear relationship between X and Y)
H1: β1 ≠ 0 (linear relationship exists between X and Y)
Step 2.
Select α = 0.05
Step 3.
Calculate the value of the test statistic

MSR = SSR / k = 15.6250 / 1 = 15.6250

F = MSR / MSE = 15.6250 / 1.7188 = 9.09
Triple A Construction
Step 4.
Reject the null hypothesis if the test statistic is greater than the F value in Appendix D
df1 = k = 1
df2 = n - k - 1 = 6 - 1 - 1 = 4
The value of F associated with a 5% level of significance and with degrees of freedom 1 and 4 is found in Appendix D
F0.05,1,4 = 7.71
Fcalculated = 9.09
Reject H0 because 9.09 > 7.71
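The four steps above reduce to computing F and comparing it with the table value. A minimal sketch (not from the text), with the critical value 7.71 taken from the slide rather than computed:

```python
# Compute the F statistic for Triple A Construction and compare it with
# the critical value F(0.05, 1, 4) = 7.71 given in Appendix D.

SSR, SSE = 15.625, 6.875
n, k = 6, 1

MSR = SSR / k
MSE = SSE / (n - k - 1)
F = MSR / MSE

F_critical = 7.71   # from the F table at alpha = 0.05, df1 = 1, df2 = 4

print(round(F, 2), F > F_critical)   # 9.09 True -> reject H0
```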
Triple A Construction
Figure 4.5: F distribution showing the critical value F0.05 = 7.71 and the test statistic 9.09
We can conclude there is a statistically significant relationship between X and Y
The r² value of 0.69 means about 69% of the variability in sales (Y) is explained by local payroll (X)
r²: Coefficient of Determination
The F-test determines whether or not there is a relationship between the variables
r² (coefficient of determination) is the best measure of the strength of the prediction relationship between the X and Y variables
Values closer to 1 indicate a strong prediction relationship
Good regression models have a low significance level for the F-test and a high r² value
Coefficient Hypotheses
Statistical tests of significance can be performed on the coefficients
The null hypothesis is that the coefficient of X (i.e., the slope of the line) is 0, i.e., X is not useful in predicting Y
P-values are the observed significance levels and can be used to test the null hypothesis
P-values less than 5% lead us to reject the null hypothesis and indicate that X is useful in predicting Y
For a simple linear regression, the test of the regression coefficients gives the same information as the F-test
Analysis of Variance (ANOVA) Table
When software is used to develop a regression model, an ANOVA table is typically created that shows the observed significance level (p-value) for the calculated F value
This can be compared to the level of significance (α) to make a decision

Table 4.4
            DF         SS    MS                     F        SIGNIFICANCE
Regression  k          SSR   MSR = SSR/k            MSR/MSE  P(F > MSR/MSE)
Residual    n - k - 1  SSE   MSE = SSE/(n - k - 1)
Total       n - 1      SST
ANOVA for Triple A Construction
Program 4.1D (partial)
P(F > 9.0909) = 0.0394
Because this probability is less than 0.05, we reject the null hypothesis of no linear relationship and conclude there is a linear relationship between X and Y
Multiple Regression Analysis
Multiple regression models are extensions to the simple linear model and allow the creation of models with several independent variables

Y = β0 + β1X1 + β2X2 + … + βkXk + ε

where
Y  = dependent variable (response variable)
Xi = ith independent variable (predictor or explanatory variable)
β0 = intercept (value of Y when all Xi = 0)
βi = coefficient of the ith independent variable
k  = number of independent variables
ε  = random error
Multiple Regression Analysis
To estimate these values, a sample is taken and the following equation developed

Ŷ = b0 + b1X1 + b2X2 + … + bkXk

where
Ŷ  = predicted value of Y
b0 = sample intercept (and is an estimate of β0)
bi = sample coefficient of the ith variable (and is an estimate of βi)

For a model with two independent variables (e.g., square footage and age predicting selling price):

Ŷ = b0 + b1X1 + b2X2

where
Ŷ  = predicted value of the dependent variable (selling price)
b0 = Y intercept
X1 and X2 = values of the two independent variables (square footage and age), respectively
b1 and b2 = slopes for X1 and X2, respectively
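One way to estimate the coefficients of a multiple regression is to solve the normal equations (XᵀX)b = Xᵀy. The sketch below (not from the text, and using hypothetical data generated exactly from Y = 5 + 2·X1 + 3·X2) shows the idea with a small hand-rolled Gaussian elimination, so the fit recovers the known coefficients:

```python
# Estimate b0, b1, b2 for Y-hat = b0 + b1*X1 + b2*X2 by solving the
# normal equations (X'X) b = X'y. Data are hypothetical and noise-free.

def solve(A, b):
    """Solve A x = b for a small square system by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

x1 = [1, 2, 3, 4, 5, 6]
x2 = [2, 1, 4, 3, 6, 5]
y = [5 + 2 * a + 3 * b for a, b in zip(x1, x2)]   # exact plane, no error

rows = [[1, a, b] for a, b in zip(x1, x2)]        # design matrix with intercept
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]

b0, b1, b2 = solve(XtX, Xty)
print(round(b0, 6), round(b1, 6), round(b2, 6))   # 5.0 2.0 3.0
```

In practice software (as in Program 4.1) does this, but the mechanics are the same.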
Model Building
The best model is a statistically significant model with a high r² and few variables
As more variables are added to the model, the r² value usually increases
For this reason, the adjusted r² value is often used to determine the usefulness of an additional variable
The adjusted r² takes into account the number of independent variables in the model
When variables are added to the model, the value of r² can never decrease; however, the adjusted r² may decrease
Model Building
The formula for r²:

r² = SSR/SST = 1 - SSE/SST

The formula for adjusted r²:

Adjusted r² = 1 - [SSE/(n - k - 1)] / [SST/(n - 1)]

As the number of variables increases, the adjusted r² gets smaller unless the increase due to the new variable is large enough to offset the change in k
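The penalty for extra variables is visible directly in the formula. A small sketch (not from the text) using the Triple A sums of squares, plus a hypothetical second variable that leaves SSE unchanged:

```python
# Adjusted r-squared penalizes extra independent variables: it only rises
# when a new variable reduces SSE enough to offset the smaller n - k - 1.

def adjusted_r2(SSE, SST, n, k):
    return 1 - (SSE / (n - k - 1)) / (SST / (n - 1))

# Triple A Construction: n = 6 observations, k = 1 variable
print(round(adjusted_r2(6.875, 22.5, 6, 1), 4))   # 0.6181

# Hypothetical: adding a second variable that does not reduce SSE at all
# lowers the adjusted r-squared even though plain r-squared is unchanged
print(round(adjusted_r2(6.875, 22.5, 6, 2), 4))
```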
Model Building
It is tempting to keep adding variables to a model to try to increase r²
The adjusted r² will decrease if additional independent variables are not beneficial
As the number of variables (k) increases, n - k - 1 decreases
This causes SSE/(n - k - 1) to increase, which in turn decreases the adjusted r² unless the extra variable causes a significant decrease in SSE
The reduction in error (and SSE) must be sufficient to offset the change in k
Model Building
In general, if a new variable increases the adjusted r², it should probably be included in the model
In some cases, variables contain duplicate information
When two independent variables are correlated, they are said to be collinear (e.g., monthly salary expenses and annual salary expenses)
When more than two independent variables are correlated, multicollinearity exists
When multicollinearity is present, hypothesis tests for the individual coefficients are not valid but the model may still be useful
Nonlinear Regression
In some situations, variables are not linear
Transformations may be used to turn a nonlinear model into a linear model
(Scatter plots contrasting a linear relationship with a nonlinear relationship)
Colonel Motors
The engineers want to use regression analysis to improve fuel efficiency
They have been asked to study the impact of weight on miles per gallon (MPG)

Table 4.6
MPG   WEIGHT (1,000 LBS.)     MPG   WEIGHT (1,000 LBS.)
12    4.58                    20    3.18
13    4.66                    23    2.68
15    4.02                    24    2.65
18    2.53                    33    1.70
19    3.09                    36    1.95
19    3.11                    42    1.92
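The Table 4.6 data can be fit with the same least-squares formulas used for Triple A Construction. A minimal sketch (not from the text; the exact fitted numbers are computed here, not quoted from the book):

```python
# Fit the simple linear model MPG = b0 + b1 * weight to the Colonel Motors
# data in Table 4.6.

weight = [4.58, 4.66, 4.02, 2.53, 3.09, 3.11, 3.18, 2.68, 2.65, 1.70, 1.95, 1.92]
mpg = [12, 13, 15, 18, 19, 19, 20, 23, 24, 33, 36, 42]

x_bar = sum(weight) / len(weight)
y_bar = sum(mpg) / len(mpg)

b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(weight, mpg)) / \
     sum((x - x_bar) ** 2 for x in weight)
b0 = y_bar - b1 * x_bar

# The slope is negative: heavier cars get fewer miles per gallon
print(round(b0, 2), round(b1, 2))
```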
Colonel Motors
Figure 4.6A: Scatter plot of MPG versus weight (1,000 lb.)

Linear model: Ŷ = b0 + b1X1
Cautions and Pitfalls
If the assumptions are not met, the statistical tests may not be valid
Correlation does not necessarily mean causation
Your annual salary and the price of cars may be correlated, but one does not cause the other
Multicollinearity makes interpreting coefficients problematic, but the model may still be good
Using a regression model beyond the range of X is questionable; the relationship may not hold outside the sample data
Cautions and Pitfalls
t-tests for the intercept (b0) may be ignored as this point is often outside the range of the model
A linear relationship may not be the best relationship, even if the F-test returns an acceptable value
A nonlinear relationship can exist even if a linear relationship does not
Just because a relationship is statistically significant doesn't mean it has any practical value