Multiple Linear Regression: Chapter 12
Multiple Regression Analysis
Definition
The multiple regression model equation is
Y = β0 + β1x1 + β2x2 + ⋯ + βpxp + ε,
where ε is a random error term with mean 0 and variance σ².
Multiple Regression Analysis
The regression coefficient β1 is interpreted as the expected change in Y associated with a 1-unit increase in x1 while x2, …, xp are held fixed.
Easier Notation?
The multiple regression model can be written in matrix form:
y = Xβ + ε,
where y is the n × 1 vector of responses, X is the n × (p + 1) design matrix (a column of 1s followed by the columns of predictor values), β is the (p + 1) × 1 vector of coefficients, and ε is the n × 1 vector of random errors.
Estimating Parameters
To estimate the parameters β0, β1, …, βp using the principle of least squares, form the sum of squared deviations of the observed yj's from the regression line:

f(b0, b1, …, bp) = Σj [yj − (b0 + b1x1j + b2x2j + ⋯ + bpxpj)]²

The least squares estimates β̂0, β̂1, …, β̂p are the values of the bi's that minimize this sum. You could obtain them by taking the partial derivative with respect to each parameter and then solving the resulting p + 1 equations for the p + 1 unknowns (akin to the simple regression method).
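A minimal sketch in R of both routes to the least squares estimates, using a small simulated data set (all numbers made up for illustration):

set.seed(1)
n  <- 30
x1 <- runif(n); x2 <- runif(n)
y  <- 2 + 3 * x1 - 1.5 * x2 + rnorm(n, sd = 0.5)   # made-up model

# Built-in least squares fit
fit <- lm(y ~ x1 + x2)
coef(fit)

# Same estimates from the normal equations: (X'X) b = X'y
X <- cbind(1, x1, x2)              # design matrix with an intercept column
solve(t(X) %*% X, t(X) %*% y)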
Example
Suppose, for example, that y is the lifetime of a certain tool, and
that there are 3 brands of tool being investigated.
Let:
x1 = 1 if tool A is used, and 0 otherwise,
x2 = 1 if tool B is used, and 0 otherwise,
x3 = 1 if tool C is used, and 0 otherwise.
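A brief R sketch (with made-up lifetimes) showing how such brand indicators are typically set up; note that when an intercept is included, R's factor coding drops one indicator (here brand A becomes the baseline) to avoid redundancy:

brand    <- factor(c("A", "A", "B", "B", "C", "C"))
lifetime <- c(24, 27, 31, 29, 22, 25)      # made-up lifetimes
model.matrix(~ brand)                      # shows the 0/1 indicator columns R constructs
fit <- lm(lifetime ~ brand)
summary(fit)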
R2
Just as with simple regression, the error sum of squares is
SSE = Σ(yi − ŷi)².
The number of df associated with SSE is n − (p + 1) because p + 1 df are lost in estimating the p + 1 β coefficients.
R2
Just as before, the total sum of squares is
SST = Σ(yi − ȳ)²,
and the coefficient of multiple determination is R² = 1 − SSE/SST.
R2
For example, suppose y is the sale price of a house. Then sensible predictors include
x1 = the interior size of the house,
x2 = the size of the lot on which the house sits,
x3 = the number of bedrooms,
x4 = the number of bathrooms, and
x5 = the house’s age.
Adding a predictor to a model can never decrease R², so R² alone tends to favor models with many predictors; this motivates the adjusted R².
R2
The adjusted R² is
adjusted R² = 1 − [(n − 1)/(n − (p + 1))] · SSE/SST.
Because the ratio in front of SSE/SST exceeds 1, adjusted R² is smaller than R². Furthermore, the larger the number of predictors p relative to the sample size n, the smaller adjusted R² will be relative to R².
Adjusted R² can even be negative, whereas R² itself must be between 0 and 1. A value of adjusted R² that is substantially smaller than R² itself is a warning that the model may contain too many predictors.
σ̂²
SSE is still the basis for estimating the remaining model parameter:
σ̂² = s² = SSE / (n − (p + 1))
Example
Investigators carried out a study to see how various
characteristics of concrete are influenced by
x1 = % limestone powder
x2 = water-cement ratio,
resulting in data published in “Durability of Concrete with
Addition of Limestone Powder,” Magazine of Concrete
Research, 1996: 131–137.
Model Selection
Important Questions:
• Is there a useful linear relationship between y and at least one of the predictors?
• Which individual predictors are useful, and can some be deleted from the model?
• How strongly related are the predictors to one another (multicollinearity)?
A Model Utility Test
The model utility test in simple linear regression involves the null hypothesis H0: β1 = 0, according to which there is no useful linear relation between y and the predictor x.
We could test each βi separately, but that would take time and be very conservative (if a Bonferroni correction is used).
A better approach is a joint test, based on a statistic that has an F distribution when H0 is true.
A Model Utility Test
Null hypothesis: H0: β1 = β2 = … = βp = 0
Alternative hypothesis: Ha: at least one βi ≠ 0 (i = 1, …, p)
Test statistic value: f = [R²/p] / [(1 − R²)/(n − (p + 1))]
Rejection region: f ≥ Fα, p, n−(p+1)
Example – Bond shear strength
The article “How to Optimize and Control the Wire Bonding
Process: Part II” (Solid State Technology, Jan 1991: 67-72)
described an experiment carried out to assess the impact of
force (gm), power (mW), temperature (°C), and time (msec)
on ball bond shear strength (gm).
Example – Bond shear strength
The output for this model looks like this:
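(A sketch of the R call that would produce such output; the data-frame name bond is hypothetical, while the fit name fitfull and the variable names match the anova output shown later.)

fitfull <- lm(strength ~ force + power + temp + time, data = bond)
summary(fitfull)   # coefficients, standard errors, t tests, R-squared, overall F test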
Example – Bond shear strength
A model with p = 4 predictors was fit, so the relevant hypothesis for determining whether the model is “okay” is
H0: β1 = β2 = β3 = β4 = 0 versus Ha: at least one βi ≠ 0.
Rejecting H0 does not mean that all four predictors are useful!
Inference for Single Parameters
All standard statistical software packages compute and report the estimated standard deviations (standard errors) of the regression coefficients.
Just as in simple linear regression, inference for a single βi is based on the statistic
t = (β̂i − βi0) / s_β̂i,
which has a t distribution with n − (p + 1) df when H0: βi = βi0 is true.
Inference for Single Parameters
Our output:
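(The coefficient table and confidence intervals for the individual parameters would be extracted with calls like these, assuming the fitted object fitfull from above.)

summary(fitfull)$coefficients     # estimates, standard errors, t values, p-values
confint(fitfull, level = 0.95)    # 95% confidence intervals for each coefficient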
Inference for Parameter Subsets
In our output, we see that perhaps “force” and “time” can
be deleted from the model. We then have these results:
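(A sketch of the reduced fit; the name fitred matches the anova output shown below.)

fitred <- lm(strength ~ power + temp, data = bond)
summary(fitred)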
Inference for Parameter Subsets
The relevant hypothesis is then
H0: βforce = βtime = 0 (force and time can be dropped)
versus Ha: at least one of these two coefficients ≠ 0.
Inferences for Parameter Subsets
The test is carried out by fitting both the full and reduced
models.
Because the full model contains not only the predictors of
the reduced model but also some extra predictors, it should
fit the data at least as well as the reduced model.
That is, if we let SSEp be the sum of squared residuals for the full model (p predictors) and SSEk be the corresponding sum for the reduced model (k < p predictors), then SSEp ≤ SSEk.
Inferences for Parameter Subsets
Intuitively, if SSEp is a great deal smaller than SSEk, the full model provides a much better fit than the reduced model; the appropriate test statistic should then depend on the reduction SSEk − SSEp in unexplained variation.

Test statistic value: f = [(SSEk − SSEp) / (p − k)] / [SSEp / (n − (p + 1))]
Rejection region: f ≥ Fα, p−k, n−(p+1)
Inferences for Parameter Subsets
Let’s do this for the bond strength example:
> anova(fitfull)
Analysis of Variance Table
Response: strength
      Df  Sum Sq Mean Sq F value    Pr(>F)
force 1 26.67 26.67 1.0012 0.326611
power 1 1342.51 1342.51 50.3967 1.931e-07 ***
temp 1 251.55 251.55 9.4431 0.005064 **
time 1 39.78 39.78 1.4934 0.233080
Residuals 25 665.97 26.64
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> anova(fitred)
Analysis of Variance Table
Response: strength
      Df  Sum Sq Mean Sq F value    Pr(>F)
power 1 1342.51 1342.51 49.4901 1.458e-07 ***
temp 1 251.55 251.55 9.2732 0.005142 **
Residuals 27 732.43 27.13
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
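From these tables, the reduced model has SSEk = 732.43 on 27 df and the full model has SSEp = 665.97 on 25 df, so with p = 4, k = 2, and n = 30 the test statistic is
f = [(732.43 − 665.97)/(4 − 2)] / [665.97/25] = 33.23/26.64 ≈ 1.25,
which is well below F0.05, 2, 25 ≈ 3.39. H0 is not rejected: dropping force and time does not significantly worsen the fit.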
Inferences for Parameter Subsets
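In R, the same partial F test can be carried out directly by passing the reduced and full fits to anova() (using the fitted objects from above):

anova(fitred, fitfull)   # compares the nested models; reports the partial F statistic and p-value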
Multicollinearity
What is multicollinearity? Multicollinearity arises when two or more of the predictors are strongly linearly related to one another, so the data provide little information about their separate effects on the response.
Multicollinearity
Example: Clinicians observed the following measurements
for 20 subjects:
• Blood pressure (in mm Hg)
• Weight (in kg)
• Body surface area (in sq m)
• Stress index
The researchers were interested in determining if a
relationship exists between blood pressure and the other
covariates.
Multicollinearity
A scatterplot of the predictors looks like this:
Multicollinearity
And the correlation matrix looks like this:
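(A sketch of how these displays would be produced in R; the data-frame name bp is hypothetical, and the predictor names follow the model output below.)

pairs(bp[, c("BSA", "Stress", "Weight")])   # scatterplot matrix of the predictors
cor(bp[, c("BSA", "Stress", "Weight")])     # correlation matrix of the predictors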
Multicollinearity
A model summary (including all the predictors, with blood
pressure log-transformed) looks like this:
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.2131301 0.5098890 8.263 3.64e-07 ***
BSA 0.5846935 0.7372754 0.793 0.439
Stress -0.0004459 0.0035501 -0.126 0.902
Weight -0.0078813 0.0220714 -0.357 0.726
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.02105 on 16 degrees of freedom
Multiple R-squared: 0.8256, Adjusted R-squared: 0.7929
F-statistic: 25.25 on 3 and 16 DF, p-value: 2.624e-06
What is Multicollinearity?
The overall F test has a p-value of 2.624e-06, indicating that we should reject the null hypothesis that all of the coefficients are zero (i.e., that none of the predictors is useful).
But none of the individual variables is significant: every individual p-value is bigger than 0.43.
Multicollinearity
Multicollinearity is not an error – it comes from the lack of
information in the dataset.
Multicollinearity
What happens if we ignore a multicollinearity problem?
If it is not “serious”, the only consequence is that our confidence intervals are somewhat wider than they would be if the predictors were uncorrelated (i.e., all our tests will be slightly more conservative, in favor of the null).
But if multicollinearity is serious and we ignore it, all confidence intervals will be much wider than they would otherwise be, the numerical estimation may be unstable, and the estimated coefficients can change wildly with small changes in the data.
This is how we end up in the situation where the overall F test is significant but none of the individual coefficients is.
Multicollinearity
When is multicollinearity serious and how do we detect
this?
• Plots and correlation tables show highly linear
relationships between predictors.
• A significant F statistic for the overall model utility test, while no (or very few) individual predictors are significant.
• The estimated effect of a covariate may have an
opposite sign from what you (and everyone else) would
expect.
Reducing multicollinearity
STRATEGY 1: Omit redundant variables. (Drawbacks? Information
needed?)
STRATEGY 2: Center predictors at or near their mean before constructing powers (squares, etc.) and interaction terms involving them (see the sketch after this list).
STRATEGY 3: Study the principal components of the X matrix to discern
possible structural effects (outside of scope of this course).
STRATEGY 4: Get more data with X’s that lie in the areas about which the
current data are not informative (when possible).
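A minimal sketch of Strategy 2 in R, using made-up data, showing how centering a predictor before squaring it reduces the correlation between the linear and quadratic terms:

set.seed(2)
x  <- runif(50, 10, 20)    # made-up predictor values
xc <- x - mean(x)          # centered version
cor(x,  x^2)               # typically very close to 1: severe collinearity
cor(xc, xc^2)              # much smaller in magnitude after centering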
Model Selection Methods
So far, we have discussed a number of methods for finding the “best” model:
• R², adjusted R², and σ̂² = s²
• the model utility F test of H0: β1 = … = βp = 0
• t tests and confidence intervals for individual coefficients
• partial F tests for subsets of predictors
• diagnostics for multicollinearity among the predictors