
Statistics for Business and Economics
Anderson, Sweeney, Williams
Slides by John Loucks
St. Edward's University

© 2011 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 15
Multiple Regression

• Multiple Regression Model
• Least Squares Method
• Multiple Coefficient of Determination
• Model Assumptions
• Testing for Significance
• Using the Estimated Regression Equation for Estimation and Prediction
• Categorical Independent Variables
• Residual Analysis
• Logistic Regression

Multiple Regression Model

• Multiple Regression Model

The equation that describes how the dependent
variable y is related to the independent variables
x1, x2, . . . , xp and an error term is:

y = β0 + β1x1 + β2x2 + . . . + βpxp + ε

where:
β0, β1, β2, . . . , βp are the parameters, and
ε is a random variable called the error term

Multiple Regression Equation

• Multiple Regression Equation

The equation that describes how the mean
value of y is related to x1, x2, . . . , xp is:

E(y) = β0 + β1x1 + β2x2 + . . . + βpxp

Estimated Multiple Regression Equation

• Estimated Multiple Regression Equation

ŷ = b0 + b1x1 + b2x2 + . . . + bpxp

A simple random sample is used to compute sample
statistics b0, b1, b2, . . . , bp that are used as the point
estimators of the parameters β0, β1, β2, . . . , βp.

Estimation Process

Multiple Regression Model:
y = β0 + β1x1 + β2x2 + . . . + βpxp + ε

Multiple Regression Equation:
E(y) = β0 + β1x1 + β2x2 + . . . + βpxp

Unknown parameters: β0, β1, β2, . . . , βp

Sample Data: x1, x2, . . . , xp, y

Estimated Multiple Regression Equation:
ŷ = b0 + b1x1 + b2x2 + . . . + bpxp

Sample statistics: b0, b1, b2, . . . , bp

The sample statistics b0, b1, b2, . . . , bp provide estimates of the parameters β0, β1, β2, . . . , βp.
Least Squares Method

• Least Squares Criterion

min Σ(yi - ŷi)²

• Computation of Coefficient Values


The formulas for the regression coefficients
b0, b1, b2, . . . bp involve the use of matrix algebra.
We will rely on computer software packages to
perform the calculations.
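
To make the matrix algebra remark concrete, here is a minimal sketch in Python with numpy (an assumption about tooling; the slides themselves rely on Excel) that solves the least squares criterion for a small made-up data set laid out as (x1, x2, y) rows.

```python
import numpy as np

# Hypothetical (x1, x2, y) observations, just to illustrate the computation.
data = np.array([
    [4.0,  78.0, 24.0],
    [7.0, 100.0, 43.0],
    [1.0,  86.0, 23.7],
    [5.0,  82.0, 34.3],
])

X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])  # intercept, x1, x2
y = data[:, 2]

# Minimize the sum of (yi - yhat_i)^2; lstsq returns the least squares coefficients b0, b1, b2.
b, residuals, rank, singular_values = np.linalg.lstsq(X, y, rcond=None)
print(b)
```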

Multiple Regression Model

• Example: Programmer Salary Survey

A software firm collected data for a sample of 20
computer programmers. A suggestion was made that
regression analysis could be used to determine if
salary was related to the years of experience and the
score on the firm's programmer aptitude test.

The years of experience, score on the aptitude test,
and corresponding annual salary ($1000s) for a
sample of 20 programmers are shown on the next slide.

Multiple Regression Model

Exper. Test   Salary      Exper. Test   Salary
(Yrs.) Score  ($000s)     (Yrs.) Score  ($000s)
4 78 24.0 9 88 38.0
7 100 43.0 2 73 26.6
1 86 23.7 10 75 36.2
5 82 34.3 5 81 31.6
8 86 35.8 6 74 29.0
10 84 38.0 8 87 34.0
0 75 22.2 4 79 30.1
1 80 23.1 6 94 33.9
6 83 30.0 3 70 28.2
6 91 33.0 3 89 30.0
Multiple Regression Model

Suppose we believe that salary (y) is related to
the years of experience (x1) and the score on the
programmer aptitude test (x2) by the following
regression model:

y = β0 + β1x1 + β2x2 + ε

where
y = annual salary ($000s)
x1 = years of experience
x2 = score on programmer aptitude test

Solving for the Estimates of β0, β1, β2

Input Data: x1, x2, y (the 20 observations shown on the previous slide)
        →
Computer package for solving multiple regression problems
        →
Least Squares Output: b0, b1, b2, R², etc.

Solving for the Estimates of β0, β1, β2

• Excel's Regression Equation Output


A B C D E
38
39 Coeffic. Std. Err. t Stat P-value
40 Intercept 3.17394 6.15607 0.5156 0.61279
41 Experience 1.4039 0.19857 7.0702 1.9E-06
42 Test Score 0.25089 0.07735 3.2433 0.00478
43
Note: Columns F-I are not shown.

Estimated Regression Equation

SALARY = 3.174 + 1.404(EXPER) + 0.251(SCORE)

Note: Predicted salary will be in thousands of dollars.
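
The Excel output above can be cross-checked with other software. Below is a sketch in Python using statsmodels (an assumption about tooling; any multiple regression package gives the same fit) with the 20 observations from the data table.

```python
import numpy as np
import statsmodels.api as sm

exper = np.array([4, 7, 1, 5, 8, 10, 0, 1, 6, 6, 9, 2, 10, 5, 6, 8, 4, 6, 3, 3])
score = np.array([78, 100, 86, 82, 86, 84, 75, 80, 83, 91,
                  88, 73, 75, 81, 74, 87, 79, 94, 70, 89])
salary = np.array([24.0, 43.0, 23.7, 34.3, 35.8, 38.0, 22.2, 23.1, 30.0, 33.0,
                   38.0, 26.6, 36.2, 31.6, 29.0, 34.0, 30.1, 33.9, 28.2, 30.0])

X = sm.add_constant(np.column_stack([exper, score]))  # intercept, EXPER, SCORE
fit = sm.OLS(salary, X).fit()

print(fit.params)    # should be close to 3.174, 1.404, 0.251
print(fit.rsquared)  # should be close to .834 (discussed later in the chapter)
```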

Interpreting the Coefficients

In multiple regression analysis, we interpret each
regression coefficient as follows:

bi represents an estimate of the change in y
corresponding to a 1-unit increase in xi when all
other independent variables are held constant.

Interpreting the Coefficients

b1 = 1.404

Salary is expected to increase by $1,404 for
each additional year of experience (when the variable
score on programmer aptitude test is held constant).

Interpreting the Coefficients

b2 = 0.251

Salary is expected to increase by $251 for each
additional point scored on the programmer aptitude
test (when the variable years of experience is held
constant).

Multiple Coefficient of Determination

• Relationship Among SST, SSR, SSE

SST = SSR + SSE

Σ(yi - ȳ)² = Σ(ŷi - ȳ)² + Σ(yi - ŷi)²

where:
SST = total sum of squares
SSR = sum of squares due to regression
SSE = sum of squares due to error

Multiple Coefficient of Determination

• Excel's ANOVA Output


A B C D E F
32
33 ANOVA
34 df SS MS F Significance F
35 Regression 2 500.3285 250.1643 42.76013 2.32774E-07
36 Residual 17 99.45697 5.85041
37 Total 19 599.7855
38
(In the ANOVA table above, SSR = 500.3285 and SST = 599.7855.)

Multiple Coefficient of Determination

R² = SSR/SST

R² = 500.3285/599.7855 = .83418

Adjusted Multiple Coefficient of Determination

Ra² = 1 - (1 - R²)(n - 1)/(n - p - 1)

Ra² = 1 - (1 - .834179)(20 - 1)/(20 - 2 - 1) = .814671
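
Both values can be checked by hand from the SSR and SST in the Excel ANOVA output; the short Python sketch below simply evaluates the two formulas.

```python
ssr, sst = 500.3285, 599.7855  # from the Excel ANOVA output
n, p = 20, 2                   # sample size and number of independent variables

r2 = ssr / sst                                 # multiple coefficient of determination, about .83418
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)  # adjusted value, about .81467

print(round(r2, 5), round(adj_r2, 5))
```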

Assumptions About the Error Term ε

The error ε is a random variable with mean of zero.

The variance of ε, denoted by σ², is the same for all
values of the independent variables.

The values of ε are independent.

The error ε is a normally distributed random variable
reflecting the deviation between the y value and the
expected value of y given by β0 + β1x1 + β2x2 + . . . + βpxp.

Testing for Significance

In simple linear regression, the F and t tests provide
the same conclusion.

In multiple regression, the F and t tests have different
purposes.

Testing for Significance: F Test

The F test is used to determine whether a significant
relationship exists between the dependent variable
and the set of all the independent variables.

The F test is referred to as the test for overall
significance.

Testing for Significance: t Test

If the F test shows an overall significance, the t test is
used to determine whether each of the individual
independent variables is significant.

A separate t test is conducted for each of the
independent variables in the model.

We refer to each of these t tests as a test for individual
significance.

Testing for Significance: F Test

Hypotheses:      H0: β1 = β2 = . . . = βp = 0
                 Ha: One or more of the parameters
                     is not equal to zero.

Test Statistic:  F = MSR/MSE

Rejection Rule:  Reject H0 if p-value < α or if F > Fα,
                 where Fα is based on an F distribution
                 with p d.f. in the numerator and
                 n - p - 1 d.f. in the denominator.

F Test for Overall Significance

Hypotheses:      H0: β1 = β2 = 0
                 Ha: One or both of the parameters
                     is not equal to zero.

Rejection Rule:  For α = .05 and d.f. = 2, 17; F.05 = 3.59
                 Reject H0 if p-value < .05 or F > 3.59

F Test for Overall Significance

• Excel's ANOVA Output


A B C D E F
32
33 ANOVA
34 df SS MS F Significance F
35 Regression 2 500.3285 250.1643 42.76013 2.32774E-07
36 Residual 17 99.45697 5.85041
37 Total 19 599.7855
38
p-value used to test for
overall significance

F Test for Overall Significance

Test Statistic:  F = MSR/MSE = 250.16/5.85 = 42.76

Conclusion:      p-value < .05, so we can reject H0.
                 (Also, F = 42.76 > 3.59)
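
The critical value and the p-value quoted above can be verified with scipy's F distribution (a sketch, assuming scipy is available; the slides take them from a table and from Excel).

```python
from scipy import stats

f_stat, df_num, df_den = 42.76, 2, 17  # p and n - p - 1 degrees of freedom

critical_value = stats.f.ppf(0.95, df_num, df_den)  # about 3.59
p_value = stats.f.sf(f_stat, df_num, df_den)        # about 2.3e-07

print(critical_value, p_value)
```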

Testing for Significance: t Test

Hypotheses:      H0: βi = 0
                 Ha: βi ≠ 0

Test Statistic:  t = bi/sbi

Rejection Rule:  Reject H0 if p-value < α or
                 if t < -tα/2 or t > tα/2, where tα/2
                 is based on a t distribution
                 with n - p - 1 degrees of freedom.

t Test for Significance
of Individual Parameters
Hypotheses:      H0: βi = 0
                 Ha: βi ≠ 0

Rejection Rule:  For α = .05 and d.f. = 17, t.025 = 2.11
                 Reject H0 if p-value < .05, or
                 if t < -2.11 or t > 2.11

t Test for Significance
of Individual Parameters

• Excel's Regression Equation Output
A B C D E
38
39 Coeffic. Std. Err. t Stat P-value
40 Intercept 3.17394 6.15607 0.5156 0.61279
41 Experience 1.4039 0.19857 7.0702 1.9E-06
42 Test Score 0.25089 0.07735 3.2433 0.00478
43
Note: Columns F-I are not shown.

t statistic and p-value used to test for the


individual significance of “Experience”

t Test for Significance
of Individual Parameters

• Excel's Regression Equation Output
A B C D E
38
39 Coeffic. Std. Err. t Stat P-value
40 Intercept 3.17394 6.15607 0.5156 0.61279
41 Experience 1.4039 0.19857 7.0702 1.9E-06
42 Test Score 0.25089 0.07735 3.2433 0.00478
43
Note: Columns F-I are not shown.

t statistic and p-value used to test for the


individual significance of “Test Score”

t Test for Significance
of Individual Parameters
Test Statistics:  t = b1/sb1 = 1.4039/.1986 = 7.07
                  t = b2/sb2 = .25089/.07735 = 3.24

Conclusions:      Reject both H0: β1 = 0 and H0: β2 = 0.
                  Both independent variables are
                  significant.
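
The same kind of check works for the t tests; a sketch with scipy (an assumption about tooling) reproduces the t.025 critical value with 17 degrees of freedom and the two-tailed p-values reported by Excel.

```python
from scipy import stats

df = 17  # n - p - 1 = 20 - 2 - 1

critical_value = stats.t.ppf(0.975, df)  # about 2.11

p_experience = 2 * stats.t.sf(7.07, df)  # about 1.9e-06
p_test_score = 2 * stats.t.sf(3.24, df)  # about .005

print(critical_value, p_experience, p_test_score)
```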

Testing for Significance: Multicollinearity

The term multicollinearity refers to the correlation
among the independent variables.

When the independent variables are highly correlated
(say, |r| > .7), it is not possible to determine the
separate effect of any particular independent variable
on the dependent variable.

Testing for Significance: Multicollinearity

If the estimated regression equation is to be used only
for predictive purposes, multicollinearity is usually
not a serious problem.

Every attempt should be made to avoid including
independent variables that are highly correlated.
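
A simple way to screen for this is to look at the correlation between each pair of candidate independent variables before fitting the model. The sketch below (Python/numpy, an assumption about tooling) applies the |r| > .7 rule of thumb to the experience and test score variables from the programmer example.

```python
import numpy as np

exper = np.array([4, 7, 1, 5, 8, 10, 0, 1, 6, 6, 9, 2, 10, 5, 6, 8, 4, 6, 3, 3])
score = np.array([78, 100, 86, 82, 86, 84, 75, 80, 83, 91,
                  88, 73, 75, 81, 74, 87, 79, 94, 70, 89])

r = np.corrcoef(exper, score)[0, 1]
print(r, abs(r) > 0.7)  # True would flag a potential multicollinearity problem
```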

Using the Estimated Regression Equation
for Estimation and Prediction

The procedures for estimating the mean value of y
and predicting an individual value of y in multiple
regression are similar to those in simple regression.

We substitute the given values of x1, x2, . . . , xp into
the estimated regression equation and use the
corresponding value of ŷ as the point estimate.

Using the Estimated Regression Equation
for Estimation and Prediction

The formulas required to develop interval estimates
for the mean value of y and for an individual value
of y are beyond the scope of the textbook.

Software packages for multiple regression will often
provide these interval estimates.

Categorical Independent Variables

In many situations we must work with categorical
independent variables such as gender (male, female),
method of payment (cash, check, credit card), etc.

For example, x2 might represent gender where x2 = 0
indicates male and x2 = 1 indicates female.

In this case, x2 is called a dummy or indicator variable.

Categorical Independent Variables

• Example: Programmer Salary Survey


As an extension of the problem involving the
computer programmer salary survey, suppose that
management also believes that the annual salary is
related to whether the individual has a graduate
degree in computer science or information systems.
The years of experience, the score on the
programmer aptitude test, whether the individual has
a relevant graduate degree, and the annual salary
($000) for each of the sampled 20 programmers are
shown on the next slide.

Categorical Independent Variables

Exper. Test   Grad.  Salary      Exper. Test   Grad.  Salary
(Yrs.) Score  Degr.  ($000s)     (Yrs.) Score  Degr.  ($000s)
4 78 No 24.0 9 88 Yes 38.0
7 100 Yes 43.0 2 73 No 26.6
1 86 No 23.7 10 75 Yes 36.2
5 82 Yes 34.3 5 81 No 31.6
8 86 Yes 35.8 6 74 No 29.0
10 84 Yes 38.0 8 87 Yes 34.0
0 75 No 22.2 4 79 No 30.1
1 80 No 23.1 6 94 Yes 33.9
6 83 No 30.0 3 70 No 28.2
6 91 Yes 33.0 3 89 No 30.0
Estimated Regression Equation

ŷ = b0 + b1x1 + b2x2 + b3x3

where:
ŷ = annual salary ($1000s)
x1 = years of experience
x2 = score on programmer aptitude test
x3 = 0 if individual does not have a graduate degree
     1 if individual does have a graduate degree

x3 is a dummy variable
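
In software, the Yes/No degree column is simply recoded as 0/1 before the regression is run. A small sketch with pandas (an assumption about tooling; the column names are illustrative).

```python
import pandas as pd

# First few rows of the survey data, for illustration
df = pd.DataFrame({
    "Exper":  [4, 7, 1, 5],
    "Score":  [78, 100, 86, 82],
    "Degr":   ["No", "Yes", "No", "Yes"],
    "Salary": [24.0, 43.0, 23.7, 34.3],
})

# Code the graduate degree column as the dummy variable x3 (1 = Yes, 0 = No)
df["Degr"] = (df["Degr"] == "Yes").astype(int)
print(df)
```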

Categorical Independent Variables

• Excel's Regression Statistics


A B C
23
24 SUMMARY OUTPUT
25
26 Regression Statistics
27 Multiple R 0.920215239
28 R Square 0.846796085
29 Adjusted R Square 0.818070351
30 Standard Error 2.396475101
31 Observations 20
32

Categorical Independent Variables

• Excel's ANOVA Output


A B C D E F
32
33 ANOVA
34 df SS MS F Significance F
35 Regression 3 507.896 169.2987 29.47866 9.41675E-07
36 Residual 16 91.88949 5.743093
37 Total 19 599.7855
38

Categorical Independent Variables

• Excel's Regression Equation Output


A B C D E
38
39 Coeffic. Std. Err. t Stat P-value
40 Intercept 7.94485 7.3808 1.0764 0.2977
41 Experience 1.14758 0.2976 3.8561 0.0014
42 Test Score 0.19694 0.0899 2.1905 0.04364
43 Grad. Degr. 2.28042 1.98661 1.1479 0.26789
44
Note: Columns F-I are not shown.

Not significant (the p-value for Grad. Degr., .26789, exceeds .05)

Categorical Independent Variables

• Excel's Regression Equation Output


A B F G H I
38
39 Coeffic. Low. 95% Up. 95% Low. 95.0% Up. 95.0%
40 Intercept 7.94485 -7.701739 23.5914 -7.7017385 23.591436
41 Experience 1.14758 0.516695 1.77847 0.51669483 1.7784686
42 Test Score 0.19694 0.00635 0.38752 0.00634964 0.3875243
43 Grad. Degr. 2.28042 -1.931002 6.49185 -1.9310017 6.4918494
44
Note: Columns C-E are hidden.

More Complex Categorical Variables

If a categorical variable has k levels, k - 1 dummy
variables are required, with each dummy variable
being coded as 0 or 1.

For example, a variable with levels A, B, and C could
be represented by x1 and x2 values of (0, 0) for A, (1, 0)
for B, and (0, 1) for C.

Care must be taken in defining and interpreting the
dummy variables.

More Complex Categorical Variables

For example, a variable indicating level of
education could be represented by x1 and x2 values
as follows:

Highest Degree   x1   x2
Bachelor’s 0 0
Master’s 1 0
Ph.D. 0 1
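
A sketch of the k - 1 coding in pandas (an assumption about tooling): with drop_first=True the Bachelor's level becomes the (0, 0) baseline, matching the table above.

```python
import pandas as pd

degrees = pd.DataFrame({"Highest_Degree": ["Bachelor's", "Master's", "Ph.D."]})

# k = 3 levels -> k - 1 = 2 dummy variables; the dropped level is the baseline
dummies = pd.get_dummies(degrees["Highest_Degree"], drop_first=True).astype(int)
print(dummies)  # columns for Master's and Ph.D.; Bachelor's is coded (0, 0)
```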

Residual Analysis

• For simple linear regression the residual plot against
  ŷ and the residual plot against x provide the same
  information.
• In multiple regression analysis it is preferable to use
  the residual plot against ŷ to determine if the model
  assumptions are satisfied.

Standardized Residual Plot Against ŷ

• Standardized residuals are frequently used in
  residual plots for purposes of:
  • Identifying outliers (typically, standardized
    residuals < -2 or > +2)
  • Providing insight about the assumption that the
    error term ε has a normal distribution
• The computation of the standardized residuals in
  multiple regression analysis is too complex to be
  done by hand.
• Excel's Regression tool can be used.

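Outside of Excel, standardized residuals can be obtained from a fitted model object. The sketch below uses statsmodels' internally studentized residuals (an assumption about tooling; these differ slightly from Excel's "Standard Residuals" column) on the programmer salary fit.

```python
import numpy as np
import statsmodels.api as sm

exper = np.array([4, 7, 1, 5, 8, 10, 0, 1, 6, 6, 9, 2, 10, 5, 6, 8, 4, 6, 3, 3])
score = np.array([78, 100, 86, 82, 86, 84, 75, 80, 83, 91,
                  88, 73, 75, 81, 74, 87, 79, 94, 70, 89])
salary = np.array([24.0, 43.0, 23.7, 34.3, 35.8, 38.0, 22.2, 23.1, 30.0, 33.0,
                   38.0, 26.6, 36.2, 31.6, 29.0, 34.0, 30.1, 33.9, 28.2, 30.0])

fit = sm.OLS(salary, sm.add_constant(np.column_stack([exper, score]))).fit()

# Internally studentized residuals; values beyond +/- 2 are flagged as potential outliers
std_resid = fit.get_influence().resid_studentized_internal
print(np.where(np.abs(std_resid) > 2)[0])
```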
Standardized Residual Plot Against ŷ

• Excel Value Worksheet


A B C D
28
29 RESIDUAL OUTPUT
30
31 Observation Predicted Y Residuals Standard Residuals
32 1 27.89626052 -3.89626052 -1.771706896
33 2 37.95204323 5.047956775 2.295406016
34 3 26.02901122 -2.32901122 -1.059047572
35 4 32.11201403 2.187985973 0.994920596
36 5 36.34250715 -0.54250715 -0.246688757
Note: Rows 37-51 are not shown.

Standardized Residual Plot Against ŷ

[Standardized residual plot: standard residuals (vertical axis, -3 to 3) plotted against predicted salary (horizontal axis, 0 to 50); one observation lies above +2 and is labeled as an outlier.]
Logistic Regression

• In many ways logistic regression is like ordinary
  regression. It requires a dependent variable, y, and
  one or more independent variables.
• Logistic regression can be used to model situations in
  which the dependent variable, y, may only assume
  two discrete values, such as 0 and 1.
• In these situations the ordinary multiple regression
  model is not applicable.

Logistic Regression

• Logistic Regression Equation

The relationship between E(y) and x1, x2, . . . , xp is
better described by the following nonlinear equation:

E(y) = e^(β0 + β1x1 + β2x2 + . . . + βpxp) / (1 + e^(β0 + β1x1 + β2x2 + . . . + βpxp))

Logistic Regression

• Interpretation of E(y) as a
  Probability in Logistic Regression

If the two values of y are coded as 0 or 1, the value
of E(y) provides the probability that y = 1 given a
particular set of values for x1, x2, . . . , xp:

E(y) = P(y = 1 | x1, x2, . . . , xp)

Logistic Regression

• Estimated Logistic Regression Equation

ŷ = e^(b0 + b1x1 + b2x2 + . . . + bpxp) / (1 + e^(b0 + b1x1 + b2x2 + . . . + bpxp))

A simple random sample is used to compute
sample statistics b0, b1, b2, . . . , bp that are used as the
point estimators of the parameters β0, β1, β2, . . . , βp.

Logistic Regression

• Example: Simmons Stores


Simmons’ catalogs are expensive and Simmons
would like to send them to only those customers who
have the highest probability of making a $200 purchase
using the discount coupon included in the catalog.
Simmons’ management thinks that annual spending
at Simmons Stores and whether a customer has a
Simmons credit card are two variables that might be
helpful in predicting whether a customer who receives
the catalog will use the coupon to make a $200
purchase.

Logistic Regression

• Example: Simmons Stores


Simmons conducted a study by sending out 100
catalogs, 50 to customers who have a Simmons credit
card and 50 to customers who do not have the card.
At the end of the test period, Simmons noted for each of
the 100 customers:
1) the amount the customer spent last year at Simmons,
2) whether the customer had a Simmons credit card, and
3) whether the customer made a $200 purchase.
A portion of the test data is shown on the next slide.

Logistic Regression

• Simmons Test Data (partial)

            x1                 x2              y
Customer    Annual Spending    Simmons         $200
            ($1000)            Credit Card     Purchase
1 2.291 1 0
2 3.215 1 0
3 2.135 1 0
4 3.924 0 0
5 2.528 1 0
6 2.473 0 1
7 2.384 0 0
8 7.076 0 0
9 1.182 1 1
10 3.345 0 0
Logistic Regression

• Simmons Logistic Regression Table (using Minitab)

Odds 95% CI
Predictor Coef SE Coef Z p Ratio Lower Upper

Constant -2.1464 0.5772 -3.72 0.000


Spending 0.3416 0.1287 2.66 0.008 1.41 1.09 1.81
Card 1.0987 0.4447 2.47 0.013 3.00 1.25 7.17

Log-Likelihood = -60.487
Test that all slopes are zero: G = 13.628, DF = 2, P-Value = 0.001

Logistic Regression

• Simmons Estimated Logistic Regression Equation

ŷ = e^(-2.1464 + 0.3416x1 + 1.0987x2) / (1 + e^(-2.1464 + 0.3416x1 + 1.0987x2))

Logistic Regression

• Using the Estimated Logistic Regression Equation

  • For customers that spend $2000 annually
    and do not have a Simmons credit card:

    ŷ = e^(-2.1464 + 0.3416(2) + 1.0987(0)) / (1 + e^(-2.1464 + 0.3416(2) + 1.0987(0))) = 0.1880

  • For customers that spend $2000 annually
    and do have a Simmons credit card:

    ŷ = e^(-2.1464 + 0.3416(2) + 1.0987(1)) / (1 + e^(-2.1464 + 0.3416(2) + 1.0987(1))) = 0.4099

Logistic Regression

• Testing for Significance

Hypotheses:      H0: β1 = β2 = 0
                 Ha: One or both of the parameters
                     is not equal to zero.

Test Statistic:  z = bi/sbi

Rejection Rule:  Reject H0 if p-value < α

Logistic Regression

• Testing for Significance

Conclusions:  For independent variable x1:
              z = 2.66 and the p-value = .008.
              Hence, β1 ≠ 0. In other words,
              x1 is statistically significant.

              For independent variable x2:
              z = 2.47 and the p-value = .013.
              Hence, β2 ≠ 0. In other words,
              x2 is also statistically significant.

Logistic Regression

• Odds in Favor of an Event Occurring

odds = P(y = 1 | x1, x2, . . . , xp) / P(y = 0 | x1, x2, . . . , xp)
     = P(y = 1 | x1, x2, . . . , xp) / [1 - P(y = 1 | x1, x2, . . . , xp)]

• Odds Ratio

Odds Ratio = odds1 / odds0

Logistic Regression

• Estimated Probabilities

                        Annual Spending
              $1000   $2000   $3000   $4000   $5000   $6000   $7000
Credit  Yes   0.3305  0.4099  0.4943  0.5791  0.6594  0.7315  0.7931
Card    No    0.1413  0.1880  0.2457  0.3144  0.3922  0.4759  0.5610

(The $2000 values, 0.4099 and 0.1880, were computed earlier.)
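
The whole table can be generated from the estimated logistic regression equation; a short Python sketch (the coefficients are the ones reported in the Minitab output above).

```python
import math

b0, b1, b2 = -2.1464, 0.3416, 1.0987  # estimated coefficients from the Minitab output

def estimated_probability(spending, card):
    """Estimated P(y = 1) for annual spending in $1000s and card indicator (1 = yes, 0 = no)."""
    z = b0 + b1 * spending + b2 * card
    return math.exp(z) / (1 + math.exp(z))

for card in (1, 0):
    row = [round(estimated_probability(x, card), 4) for x in range(1, 8)]
    print("Credit card" if card else "No card   ", row)
```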

Logistic Regression

• Comparing Odds

Suppose we want to compare the odds of making a
$200 purchase for customers who spend $2000 annually
and have a Simmons credit card to the odds of making a
$200 purchase for customers who spend $2000 annually
and do not have a Simmons credit card.

Estimate of odds1 = .4099 / (1 - .4099) = .6946
Estimate of odds0 = .1880 / (1 - .1880) = .2315

Estimate of odds ratio = .6946 / .2315 = 3.00
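
The same 3.00 can also be read directly from the fitted model: in logistic regression the odds ratio for a one-unit change in a variable is e raised to that variable's coefficient, so for the credit card variable e^1.0987 gives the Odds Ratio that Minitab reports.

```python
import math
print(math.exp(1.0987))  # about 3.00, the Card odds ratio in the Minitab output
```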
End of Chapter 15

