Chapter 305
Multiple Regression
Introduction
Multiple Regression Analysis refers to a set of techniques for studying the straight-line relationships among two or
more variables. Multiple regression estimates the β’s in the equation
$$y_j = \beta_0 + \beta_1 x_{1j} + \beta_2 x_{2j} + \cdots + \beta_p x_{pj} + \varepsilon_j$$
The X’s are the independent variables (IV’s). Y is the dependent variable. The subscript j represents the
observation (row) number. The β’s are the unknown regression coefficients. Their estimates are represented by
b’s. Each β represents the original unknown (population) parameter, while b is an estimate of this β. The εj is the
error (residual) of observation j.
Although the regression problem may be solved by a number of techniques, the most-used method is least
squares. In least squares regression analysis, the b’s are selected so as to minimize the sum of the squared
residuals. This set of b’s is not necessarily the set you want, since they may be distorted by outliers, points that
are not representative of the data. Robust regression, an alternative to least squares, seeks to reduce the influence
of outliers.
Multiple regression analysis studies the relationship between a dependent (response) variable and p independent
variables (predictors, regressors, IV’s). The sample multiple regression equation is
$$\hat{y}_j = b_0 + b_1 x_{1j} + b_2 x_{2j} + \cdots + b_p x_{pj}$$
Regression Models
In order to make good use of multiple regression, you must have a basic understanding of the regression model.
The basic regression model is
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_p x_p + \varepsilon$$
This expression represents the relationship between the dependent variable (DV) and the independent variables
(IV’s) as a weighted average in which the regression coefficients (β’s) are the weights. Unlike the usual weights
in a weighted average, it is possible for the regression coefficients to be negative.
A fundamental assumption in this model is that the effect of each IV is additive. Now, no one really believes that
the true relationship is actually additive. Rather, they believe that this model is a reasonable first-approximation to
the true model. To add validity to this approximation, you might consider this additive model to be a Taylor-series
expansion of the true model. However, this appeal to the Taylor-series expansion usually ignores the ‘local-
neighborhood’ assumption.
Another assumption is that the relationship of the DV with each IV is linear (straight-line). Here again, no one
really believes that the relationship is a straight-line. However, this is a reasonable first approximation.
In order to obtain better approximations, methods have been developed to allow regression models to approximate
curvilinear relationships as well as nonadditivity. Although nonlinear regression models can be used in these
situations, they add a higher level of complexity to the modeling process. An experienced user of multiple
regression knows how to include curvilinear components in a regression model when they are needed.
Another issue is how to add categorical variables into the model. Unlike regular numeric variables, categorical
variables may be alphabetic. Examples of categorical variables are gender, producer, and location. In order to
effectively use multiple regression, you must know how to include categorical IV’s in your regression model.
This section shows how NCSS may be used to specify and estimate advanced regression models that include
curvilinearity, interaction, and categorical variables.
For example, consider the model

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1^2 + \beta_4 X_2^2 + \beta_5 X_1 X_2 = \beta_0 + \beta_1 Z_1 + \beta_2 Z_2 + \beta_3 Z_3 + \beta_4 Z_4 + \beta_5 Z_5$$

where $Z_1 = X_1$, $Z_2 = X_2$, $Z_3 = X_1^2$, $Z_4 = X_2^2$, and $Z_5 = X_1 X_2$.
Note that this model is still additive in terms of the new IV’s.
One way to fit such a model is to create the new IV’s using transformations of existing variables. However, the
same effect can be achieved using the Custom Model statement. The details of writing a Custom Model will be
presented later, but we note in passing that the above model would be written as
X1 X2 X1*X1 X1*X2 X2*X2
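As a concrete illustration of the transformation approach, here is a minimal Python sketch (not NCSS code; the variable names are hypothetical) that builds the new IV’s from X1 and X2:

```python
# Minimal sketch (not NCSS code) of creating the transformed IV's Z1..Z5
# for the curvilinear model above; X1 and X2 are hypothetical columns.
import numpy as np

rng = np.random.default_rng(0)
N = 20
X1, X2 = rng.normal(size=(2, N))

# Z1 = X1, Z2 = X2, Z3 = X1^2, Z4 = X2^2, Z5 = X1*X2
Z = np.column_stack([X1, X2, X1**2, X2**2, X1 * X2])
# Adding an intercept column gives the design matrix for an ordinary
# additive least squares fit of Y on Z1..Z5.
design = np.column_stack([np.ones(N), Z])
```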
Categorical Variables
Suppose a categorical variable takes on the values A, B, and C. A direct approach would be to recode these letters
as the numbers 1, 2, and 3 and use the result as a regular numeric variable. Unfortunately, we will obtain
completely different results if we recode A to 2, B to 3, and C to 1. Thus, a direct recode of letters to numbers will
not work.
To convert a categorical variable to a form usable in regression analysis, we have to create a new set of numeric
variables. If a categorical variable has k values, k - 1 new variables must be generated.
There are many ways in which these new variables may be generated. We will present a few examples here.
Indicator Variables
Indicator (dummy or binary) variables are a popular type of generated variables. They are created as follows. A
reference value is selected. Usually, the most common value is selected as the reference value. Next, a variable is
generated for each of the values other than the reference value. For example, suppose that C is selected as the
reference value. An indicator variable is generated for each of the remaining values: A and B. The value of the
indicator variable is one if the value of the original variable is equal to the value of interest, or zero otherwise.
Here is how the original variable T and the two new indicator variables TA and TB look in a short example.
T TA TB
A 1 0
A 1 0
B 0 1
B 0 1
C 0 0
C 0 0
The generated IV’s, TA and TB, would be used in the regression model.
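For example, the following short pandas sketch (not NCSS code; NCSS generates these variables internally) produces exactly the TA and TB columns shown above:

```python
# Sketch (not NCSS code) of indicator (dummy) coding with reference value C.
import pandas as pd

T = pd.Series(["A", "A", "B", "B", "C", "C"], name="T")
indicators = pd.get_dummies(T, prefix="T").drop(columns="T_C")  # drop the reference value
print(indicators.astype(int))  # columns T_A and T_B match TA and TB above
```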
Contrast Variables
Contrast variables are another popular type of generated variables. Several types of contrast variables can be
generated. We will present a few here. One method is to contrast each value with the reference value. The value
of interest receives a one. The reference value receives a negative one. All other values receive a zero.
Continuing with our example, one set of contrast variables is
T CA CB
A 1 0
A 1 0
B 0 1
B 0 1
C -1 -1
C -1 -1
The generated IV’s, CA and CB, would be used in the regression model.
Another commonly used set of contrast variables compares each value with those remaining. For this
example, we will suppose that T takes on four values: A, B, C, and D. The generated variables are
T C1 C2 C3
A -3 0 0
A -3 0 0
B 1 -2 0
B 1 -2 0
C 1 1 -1
C 1 1 -1
D 1 1 1
D 1 1 1
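Both contrast sets can be generated programmatically. The sketch below (not NCSS code) maps each category to its contrast values, taken directly from the tables above:

```python
# Sketch (not NCSS code) of the two contrast codings shown above.
import pandas as pd

# Contrast each value with the reference value C (three categories)
ref_map = {"A": (1, 0), "B": (0, 1), "C": (-1, -1)}
T3 = pd.Series(["A", "A", "B", "B", "C", "C"])
CA_CB = T3.map(ref_map).apply(pd.Series)
CA_CB.columns = ["CA", "CB"]

# Contrast each value with those remaining (four categories)
remaining_map = {"A": (-3, 0, 0), "B": (1, -2, 0), "C": (1, 1, -1), "D": (1, 1, 1)}
T4 = pd.Series(["A", "A", "B", "B", "C", "C", "D", "D"])
C123 = T4.map(remaining_map).apply(pd.Series)
C123.columns = ["C1", "C2", "C3"]
```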
Many other methods have been developed to provide meaningful numeric variables that represent categorical
variables. We have presented these because they may be generated automatically by NCSS.
When the variables CAS1, CAS2, CBS1, and CBS2 are added to the regression model, they will account for the
interaction between T and S.
Possible Uses of Regression Analysis
Description
The analyst is seeking to find an equation that describes or summarizes the relationships in a set of data. This
purpose makes the fewest assumptions.
Coefficient Estimation
This is a popular reason for doing regression analysis. The analyst may have a theoretical relationship in mind,
and the regression analysis will confirm this theory. Most likely, there is specific interest in the magnitudes and
signs of the coefficients. Frequently, this purpose for regression overlaps with others.
Prediction
The prime concern here is to predict some response variable, such as sales, delivery time, efficiency, occupancy
rate in a hospital, reaction yield in some chemical process, or strength of some metal. These predictions may be
very crucial in planning, monitoring, or evaluating some process or system. There are many assumptions and
qualifications that must be made in this case. For instance, you must not extrapolate beyond the range of the data.
Also, interval estimates require the special, so-called normality assumptions to hold.
Control
Regression models may be used for monitoring and controlling a system. For example, you might want to
calibrate a measurement system or keep a response variable within certain guidelines. When a regression model is
used for control purposes, the independent variables must be related to the dependent variable in a causal way.
Furthermore, this functional relationship must continue over time. If it does not, continual modification of the
model must occur.
Assumptions
The following assumptions must be considered when using multiple regression analysis.
Linearity
Multiple regression models the linear (straight-line) relationship between Y and the X’s. Any curvilinear
relationship is ignored. This is most easily evaluated by scatter plots early on in your analysis. Nonlinear patterns
can show up in residual plots.
Constant Variance
The variance of the ε’s is constant for all values of the X’s. This can be detected by residual plots of $e_j$ versus $\hat{y}_j$
or the X’s. If these residual plots show a rectangular shape, we can assume constant variance. On the other hand, if
a residual plot shows an increasing or decreasing wedge or bowtie shape, nonconstant variance exists and must be
corrected.
Special Causes
We assume that all special causes, outliers due to one-time situations, have been removed from the data. If not,
they may cause nonconstant variance, nonnormality, or other problems with the regression model.
Normality
We assume the ε’s are normally distributed when hypothesis tests and confidence limits are to be used.
Independence
The ε’s are assumed to be uncorrelated with one another, which implies that the Y’s are also uncorrelated. This
assumption can be violated in two ways: model misspecification or time-sequenced data.
1. Model misspecification. If an important independent variable is omitted or if an incorrect functional form
is used, the residuals may not be independent. The solution to this dilemma is to find the proper
functional form or to include the proper independent variables.
2. Time-sequenced data. Whenever regression analysis is performed on data taken over time (frequently
called time series data), the residuals are often correlated. This correlation among residuals is called serial
correlation or autocorrelation. Positive autocorrelation means that the residual in time period j tends to
have the same sign as the residual in time period (j-k), where k is the lag in time periods. On the other
hand, negative autocorrelation means that the residual in time period j tends to have the opposite sign as
the residual in time period (j-k).
The presence of autocorrelation among the residuals has several negative impacts:
1. The regression coefficients are unbiased but no longer efficient, i.e., no longer minimum variance estimates.
2. With positive serial correlation, the mean square error may be seriously underestimated. The impact of
this is that the standard errors are underestimated, the partial t-tests are inflated (show significance when
there is none), and the confidence intervals are shorter than they should be.
3. Any hypothesis tests or confidence limits that required the use of the t or F distribution would be invalid.
You could try to identify these serial correlation patterns informally, with the residual plots versus time. A better
analytical way would be to compute the serial or autocorrelation coefficient for different time lags and compare it
to a critical value.
Multicollinearity
Collinearity, or multicollinearity, is the existence of near-linear relationships among the set of independent
variables. The presence of multicollinearity causes all kinds of problems with regression analysis, so you could
say that we assume the data do not exhibit it.
Effects of Multicollinearity
Multicollinearity can create inaccurate estimates of the regression coefficients, inflate the standard errors of the
regression coefficients, deflate the partial t-tests for the regression coefficients, give false nonsignificant p-values,
and degrade the predictability of the model.
Sources of Multicollinearity
To deal with collinearity, you must be able to identify its source. The source of the collinearity impacts the
analysis, the corrections, and the interpretation of the linear model. There are five sources (see Montgomery
[1982] for details):
1. Data collection. In this case, the data has been collected from a narrow subspace of the independent
variables. The collinearity has been created by the sampling methodology. Obtaining more data on an
expanded range would cure this collinearity problem.
2. Physical constraints of the linear model or population. This source of collinearity will exist no matter
what sampling technique is used. Many manufacturing or service processes have constraints on
independent variables (as to their range), either physically, politically, or legally, which will create
collinearity.
3. Over-defined model. Here, there are more variables than observations. This situation should be avoided.
4. Model choice or specification. This source of collinearity comes from using independent variables that
are higher powers or interactions of an original set of variables. It should be noted that if the sampling
subspace of X_j is narrow, then any combination of variables with X_j will increase the collinearity problem
even further.
5. Outliers. Extreme values or outliers in the X-space can cause collinearity as well as hide it.
Detection of Collinearity
The following steps for detecting collinearity proceed from simple to complex.
1. Begin by studying scatter plots of pairs of independent variables, looking for near-perfect
relationships. Also glance at the correlation matrix for high correlations. Unfortunately, multicollinearity
does not always show up when considering the variables two at a time.
2. Next, consider the variance inflation factors (VIF). Large VIFs flag collinear variables (see the sketch following this list).
3. Finally, focus on small eigenvalues of the correlation matrix of the independent variables. An eigenvalue
of zero or close to zero indicates that an exact linear dependence exists. Instead of looking at the
numerical size of the eigenvalue, use the condition number. Large condition numbers indicate
collinearity.
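A minimal numpy sketch (not NCSS code) of the VIF and condition-number checks in steps 2 and 3:

```python
# Sketch (not NCSS code) of VIFs and the condition number of the correlation
# matrix. X is an N x p matrix of independent variables (no intercept column).
import numpy as np

def vif_and_condition(X):
    # Correlation matrix of the independent variables
    R = np.corrcoef(X, rowvar=False)
    # The VIF for each variable is the corresponding diagonal element of R^-1
    vifs = np.diag(np.linalg.inv(R))
    # Near-zero eigenvalues signal near-linear dependencies among the X's
    eigvals = np.linalg.eigvalsh(R)
    condition_number = eigvals.max() / eigvals.min()
    return vifs, condition_number

# Example with a deliberately collinear third column
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
X = np.column_stack([X, X[:, 0] + 0.01 * rng.normal(size=50)])
vifs, cond = vif_and_condition(X)
print("VIFs:", np.round(vifs, 1), " Condition number:", round(cond, 1))
```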
Correction of Collinearity
Depending on what the source of collinearity is, the solutions will vary. If the collinearity has been created by the
data collection, then collect additional data over a wider X-subspace. If the choice of the linear model has
accented the collinearity, simplify the model by variable selection techniques. If an observation or two has
induced the collinearity, remove those observations and proceed accordingly. Above all, use care in selecting the
variables at the outset.
Centering and Scaling Issues in Collinearity
When the variables in regression are centered (by subtracting their mean) and scaled (by dividing by their
standard deviation), the resulting X'X matrix is in correlation form. The centering of each independent variable has
removed the constant term from the collinearity diagnostics. Scaling and centering permit the computation of the
collinearity diagnostics on standardized variables. On the other hand, there are many regression applications
where the intercept is a vital part of the linear model. The collinearity diagnostics on the uncentered data may
provide a more realistic picture of the collinearity structure in these cases.
Introduction
Now comes the fun part: running the program. NCSS is designed to be simple to operate, but it can still seem
complicated. When you go to run a procedure such as this for the first time, take a few minutes to read through
the chapter again and familiarize yourself with the issues involved.
Enter Variables
The NCSS panels are set with ready-to-run defaults, but you have to select the appropriate variables (columns of
data). There should be only one dependent variable and one or more independent variables specified. In
addition, if a weight variable is available from a previous analysis, it needs to be selected.
Specify Alpha
Most beginners at statistics forget this important step and let the alpha value default to the standard 0.05. You
should make a conscious decision as to what value of alpha is appropriate for your study. The 0.05 default came
about during the dark ages when people had to rely on printed probability tables and there were only two values
available: 0.05 or 0.01. Now you can set the value to whatever is appropriate.
Multiple Regression Checklist
Introduction
Once the regression output is displayed, you will be tempted to go directly to the probability of the F-test from the
regression analysis of variance table to see if you have a significant result. However, it is very important that you
proceed through the output in an orderly fashion. The main conditions to check for relate to linearity, normality,
constant variance, independence, outliers, multicollinearity, and predictability. Return to the statistical sections
and plot descriptions for more detailed discussions.
Check 1. Linearity
• Look at the Residual vs. Predicted plot. A curving pattern here indicates nonlinearity.
• Look at the Residual vs. Predictor plots. A curving pattern here indicates nonlinearity.
• Look at the Y versus X plots. For simple linear regression, a linear relationship between Y and X in a scatter
plot indicates that the linearity assumption is appropriate. The same holds if the dependent variable is plotted
against each independent variable in a scatter plot.
• If linearity does not exist, take the appropriate action and return to Step 2. Appropriate action might be to add
power terms (such as Log(X), X squared, or X cubed) or to use an appropriate nonlinear model.
Check 2. Normality
• Look at the Normal Probability Plot. If all of the residuals fall within the confidence bands for the Normal
Probability Plot, the normality assumption is likely met. One or two residuals outside the confidence bands
may be an indicator of outliers, not nonnormality.
• Look at the Normal Assumptions Section. The formal normal goodness-of-fit tests are given in the Normal
Assumptions Section. If the Normality (Omnibus) test is not rejected, there is no evidence that
the residuals are not normal.
• If normality does not exist, take the appropriate action and return to Step 2. Appropriate action includes
removing outliers and/or using the logarithm of the dependent variable.
Check 5. Outliers
• Look at the Regression Diagnostics Section. Any observations with an asterisk by the diagnostics RStudent,
Hat Diagonal, DFFITS, or the CovRatio, are potential outliers. Observations with a Cook’s D greater than
1.00 are also potentially influential.
• Look at the Dfbetas Section. Any DFBETAS beyond the cutoff of $\pm 2/\sqrt{N}$ indicate influential observations.
• Look at the Rstudent vs. Hat Diagonal plot. This plot will flag an observation that may be jointly influential
by both diagnostics.
• If outliers do exist in the model, go to robust regression and run one of the options there to confirm these
outliers. If the outliers are to be deleted or down weighted, return to Step 2.
Check 6. Multicollinearity
• Look at the Multicollinearity Section. If any variable has a variance inflation factor greater than 10,
collinearity could be a problem.
• Look at the Eigenvalues of Centered Correlations Section. Condition numbers greater than 1000 indicate
severe collinearity. Condition numbers between 100 and 1000 imply moderate to strong collinearity.
• Look at the Correlation Matrix Section. Strong pairwise correlation here may give some insight as to the
variables causing the collinearity.
• If multicollinearity does exist in the model, it could be due to an outlier (return to Check 5 and then Step 2) or
due to strong interdependencies between independent variables. In the latter case, return to Step 2 and try a
different variable selection procedure.
Check 7. Predictability
• Look at the PRESS Section. If the Press R2 is almost as large as the R2, you have done as well as could be
expected. It is not unusual in practice for the Press R2 to be half of the R2, so if R2 is 0.50, a Press R2 of 0.25
would not be unacceptable.
• Look at the Predicted Values with Confidence Limits for Means and Individuals. If the confidence limits are
too wide to be practical, you may need to add new variables or reassess the outlier and collinearity
possibilities.
• Look at the Residual Report. Any observation that has percent error grossly deviant from the values of most
observations is an indication that this observation may be impacting predictability.
• Any changes in the model due to poor predictability require a return to Step 2.
The regression computations can be expressed compactly in matrix notation. Define

$$\mathbf{Y} = \begin{bmatrix} y_1 \\ \vdots \\ y_j \\ \vdots \\ y_N \end{bmatrix},\quad \mathbf{X} = \begin{bmatrix} 1 & x_{11} & \cdots & x_{p1} \\ \vdots & \vdots & & \vdots \\ 1 & x_{1j} & \cdots & x_{pj} \\ \vdots & \vdots & & \vdots \\ 1 & x_{1N} & \cdots & x_{pN} \end{bmatrix},\quad \mathbf{e} = \begin{bmatrix} e_1 \\ \vdots \\ e_j \\ \vdots \\ e_N \end{bmatrix},\quad \mathbf{1} = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix},\quad \mathbf{b} = \begin{bmatrix} b_0 \\ b_1 \\ \vdots \\ b_p \end{bmatrix}$$

and let W be the diagonal matrix of observation weights

$$\mathbf{W} = \operatorname{diag}\left(w_1, \ldots, w_j, \ldots, w_N\right)$$
Least Squares
Using this notation, the least squares estimates are found using the equation

$$\mathbf{b} = \left(\mathbf{X}'\mathbf{W}\mathbf{X}\right)^{-1}\mathbf{X}'\mathbf{W}\mathbf{Y}$$

Note that when the weights are not used, this reduces to

$$\mathbf{b} = \left(\mathbf{X}'\mathbf{X}\right)^{-1}\mathbf{X}'\mathbf{Y}$$
Estimated Variances
An estimate of the variance of the residuals is computed using
$$s^2 = \frac{\mathbf{e}'\mathbf{W}\mathbf{e}}{N - p - 1}$$
An estimate of the variance of the regression coefficients is calculated using
$$\mathbf{V}\begin{bmatrix} b_0 \\ b_1 \\ \vdots \\ b_p \end{bmatrix} = s^2\left(\mathbf{X}'\mathbf{W}\mathbf{X}\right)^{-1}$$
An estimate of the variance of the predicted mean of Y at a specific value of X, say X 0 , is given by
$$s^2_{Y_m|X_0} = s^2\,(1,\ X_0')\left(\mathbf{X}'\mathbf{W}\mathbf{X}\right)^{-1}\begin{pmatrix} 1 \\ X_0 \end{pmatrix}$$
An estimate of the variance of the predicted value of Y for an individual for a specific value of X, say X 0 , is given
by
$$s^2_{Y_I|X_0} = s^2 + s^2_{Y_m|X_0}$$
Usually, the hypothesized value of $\beta_i$ is zero, but this does not have to be the case. A $100(1-\alpha)\%$ confidence interval for $\beta_i$ is given by

$$b_i \pm \left(t_{1-\alpha/2,\,N-p-1}\right) s_{b_i}$$
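The following minimal Python sketch (not NCSS code; numpy and scipy are assumed, and the helper name is hypothetical) ties together the least squares estimates, the variance formulas, and the coefficient confidence limits above:

```python
# Minimal sketch of the weighted least squares formulas above (not NCSS code).
# X has a leading column of ones; w holds the observation weights.
import numpy as np
from scipy import stats

def wls_fit(X, y, w, alpha=0.05):
    W = np.diag(w)
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    b = XtWX_inv @ X.T @ W @ y              # b = (X'WX)^-1 X'WY
    e = y - X @ b                           # residuals
    N, k = X.shape                          # k = p + 1 (intercept included)
    s2 = (e @ W @ e) / (N - k)              # s^2 = e'We / (N - p - 1)
    Vb = s2 * XtWX_inv                      # Var(b) = s^2 (X'WX)^-1
    se = np.sqrt(np.diag(Vb))               # standard errors of the b's
    t = stats.t.ppf(1 - alpha / 2, N - k)   # t quantile with N - p - 1 df
    ci = np.column_stack([b - t * se, b + t * se])
    return b, s2, se, ci

# Tiny example with unit weights (ordinary least squares)
rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=(2, 30))
y = 5 + 2.0 * x1 - 1.5 * x2 + rng.normal(scale=0.5, size=30)
X = np.column_stack([np.ones(30), x1, x2])
b, s2, se, ci = wls_fit(X, y, np.ones(30))
print("b:", b.round(3), " 95% CIs:", ci.round(3))
```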
A $100(1-\alpha)\%$ prediction interval for the value of Y for an individual at a specific value of X, say $X_0$, is given by

$$\hat{y}_0 \pm \left(t_{1-\alpha/2,\,N-p-1}\right) s_{Y_I|X_0}$$

R2
R2 represents the proportion of the variation in Y that is accounted for by the variation in the independent variables. R2 varies between zero (no linear
relationship) and one (perfect linear relationship).
R2, officially known as the coefficient of determination, is defined as the sum of squares due to the regression
divided by the adjusted total sum of squares of Y. The formula for R2 is
$$R^2 = 1 - \frac{\mathbf{e}'\mathbf{W}\mathbf{e}}{\mathbf{Y}'\mathbf{W}\mathbf{Y} - \dfrac{\left(\mathbf{1}'\mathbf{W}\mathbf{Y}\right)^2}{\mathbf{1}'\mathbf{W}\mathbf{1}}} = \frac{SS_{Model}}{SS_{Total}}$$
R2 is probably the most popular measure of how well a regression model fits the data. R2 may be defined either as
a ratio or a percentage. Since we use the ratio form, its values range from zero to one. A value of R2 near zero
indicates no linear relationship, while a value near one indicates a perfect linear fit. Although popular, R2 should
not be used indiscriminately or interpreted without scatter plot support. Following are some qualifications on its
interpretation:
1. Additional independent variables. It is possible to increase R2 by adding more independent variables, but
the additional independent variables may actually cause an increase in the mean square error, an
unfavorable situation. This usually happens when the sample size is small.
2. Range of the independent variables. R2 is influenced by the range of the independent variables. R2
increases as the range of the X’s increases and decreases as the range of the X’s decreases.
3. Slope magnitudes. R2 does not measure the magnitude of the slopes.
4. Linearity. R2 does not measure the appropriateness of a linear model. It measures the strength of the linear
component of the model. Suppose the relationship between X and Y formed a perfect circle. Although there
is a perfect relationship between the variables, the R2 value would be zero.
5. Predictability. A large R2 does not necessarily mean high predictability, nor does a low R2 necessarily
mean poor predictability.
6. No-intercept model. The definition of R2 assumes that there is an intercept in the regression model. When
the intercept is left out of the model, the definition of R2 changes dramatically. The fact that your R2 value
increases when you remove the intercept from the regression model does not reflect an increase in the
goodness of fit. Rather, it reflects a change in the underlying definition of R2.
7. Sample size. R2 is highly sensitive to the number of observations. The smaller the sample size, the larger
its value.
Adjusted R2, a version of R2 adjusted for the number of independent variables in the model, is calculated using

$$\bar{R}^2 = 1 - \frac{(N-1)\left(1-R^2\right)}{N-p-1}$$
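A small sketch (not NCSS code) of these two statistics for the unweighted case, given the residuals e from a fit of y on p independent variables:

```python
# Sketch (not NCSS code) of R^2 and adjusted R^2 in the unweighted case.
import numpy as np

def r2_and_adjusted(y, e, p):
    ss_total = np.sum((y - y.mean()) ** 2)        # adjusted total sum of squares
    r2 = 1.0 - np.sum(e ** 2) / ss_total          # R^2 = 1 - SSE / SSTotal
    n = len(y)
    r2_adj = 1.0 - (n - 1) * (1.0 - r2) / (n - p - 1)
    return r2, r2_adj
```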
1 – No Outliers
Outliers are observations that are poorly fit by the regression model. If outliers are influential, they will cause
serious distortions in the regression calculations. Once an observation has been determined to be an outlier, it
must be checked to see if it resulted from a mistake. If so, it must be corrected or omitted. However, if no mistake
can be found, the outlier should not be discarded just because it is an outlier. Many scientific discoveries have
been made because outliers, data points that were different from the norm, were studied more closely. Besides
being caused by simple data-entry mistakes, outliers often suggest the presence of an important independent
variable that has been ignored.
Outliers are easy to spot on scatter plots of the residuals and RStudent. RStudent is the preferred statistic for
finding outliers because each observation is omitted from the calculation, making it less likely that the outlier can
mask its presence. Scatter plots of the residuals and RStudent against the X variables are also helpful because they
may show other problems as well.
3 – Constant Variance
The errors are assumed to have constant variance across all values of X. If there are a lot of data (N > 100),
nonconstant variance can be detected on the scatter plots of the residuals versus each X. However, the most direct
diagnostic tool to evaluate this assumption is a scatter plot of the absolute values of the residuals versus each X.
Often, the assumption is violated because the variance increases with X. This will show up as a ‘megaphone’
pattern on the scatter plot.
When nonconstant variance is detected, a variance-stabilizing transformation such as the square-root or logarithm
may be used. However, the best solution is probably to use weighted regression, with weights inversely
proportional to the magnitude of the residuals.
4 – Independent Errors
The Y’s, and thus the errors, are assumed to be independent. This assumption is usually ignored unless there is a
reason to think that it has been violated, such as when the observations were taken across time. An easy way to
evaluate this assumption is a scatter plot of the residuals versus their sequence number (assuming that the data are
arranged in time sequence order). This plot should show a relatively random pattern.
The Durbin-Watson statistic is used as a formal test for the presence of first-order serial correlation. A more
comprehensive method of evaluation is to look at the autocorrelations of the residuals at various lags. Large
autocorrelations are found by testing each using Fisher’s z transformation. Although Fisher’s z transformation is
only approximate in the case of autocorrelations, it does provide a reasonable measuring stick with which to judge
the size of the autocorrelations.
If independence is violated, confidence intervals and hypothesis tests are erroneous. Some remedial method that
accounts for the lack of independence must be adopted, such as using first differences or the Cochrane-Orcutt
procedure.
Durbin-Watson Test
The Durbin-Watson test is often used to test for positive or negative, first-order, serial correlation. It is calculated
as follows
$$DW = \frac{\displaystyle\sum_{j=2}^{N}\left(e_j - e_{j-1}\right)^2}{\displaystyle\sum_{j=1}^{N} e_j^2}$$
The distribution of this test statistic is difficult because it involves the X values. Originally, Durbin and Watson (1950, 1951)
gave a pair of bounds to be used. However, these bounds leave a large range of indecision.
Instead of using these bounds, we calculate the exact probability using the beta distribution approximation
suggested by Durbin and Watson (1951). This approximation has been shown to be accurate to three decimal places in
most cases, which is all that is needed for practical work.
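A minimal sketch (not NCSS code) of the Durbin-Watson statistic computed from a vector of residuals ordered in time sequence:

```python
# Sketch (not NCSS code) of the Durbin-Watson statistic.
import numpy as np

def durbin_watson(e):
    # Sum of squared successive differences over the residual sum of squares
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Values near 2 suggest no first-order autocorrelation; values well below 2
# suggest positive autocorrelation, well above 2 negative autocorrelation.
```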
5 – Normality of Residuals
The residuals are assumed to follow the normal probability distribution with zero mean and constant variance.
This can be evaluated using a normal probability plot of the residuals. Also, normality tests are used to evaluate
this assumption. The most popular of the five normality tests provided is the Shapiro-Wilk test.
Unfortunately, a breakdown in any of the other assumptions results in a departure from this assumption as well.
Hence, you should investigate the other assumptions first, leaving this assumption until last.
Influential Observations
Part of the evaluation of the assumptions includes an analysis to determine if any of the observations have an
extra large influence on the estimated regression coefficients, on the fit of the model, or on the value of Cook’s
distance. By looking at how much removing an observation changes the results, an observation’s influence can be
determined.
Five statistics are used to investigate influence. These are Hat diagonal, DFFITS, DFBETAS, Cook’s D, and
COVARATIO.
Residual
The residual is the difference between the actual Y value and the Y value predicted by the estimated regression
model. It is also called the error, the deviate, or the discrepancy.
$$e_j = y_j - \hat{y}_j$$
Although the true errors, εj, are assumed to be independent, the computed residuals, ej, are not. Although the lack
of independence among the residuals is a concern in developing theoretical tests, it is not a concern on the plots
and graphs.
By assumption, the variance of the εj is σ2. However, the variance of the ej is not σ2. In vector notation, the
covariance matrix of e is given by
$$\mathbf{V}(\mathbf{e}) = \sigma^2\left(\mathbf{I} - \mathbf{W}^{1/2}\mathbf{X}\left(\mathbf{X}'\mathbf{W}\mathbf{X}\right)^{-1}\mathbf{X}'\mathbf{W}^{1/2}\right) = \sigma^2\left(\mathbf{I} - \mathbf{H}\right)$$
The matrix H is called the hat matrix since it puts the ‘hat’ on Y, as shown in the unweighted case:

$$\hat{\mathbf{Y}} = \mathbf{X}\mathbf{b} = \mathbf{X}\left(\mathbf{X}'\mathbf{X}\right)^{-1}\mathbf{X}'\mathbf{Y} = \mathbf{H}\mathbf{Y}$$
Hence, the variance of $e_j$ is given by

$$V\!\left(e_j\right) = \sigma^2\left(1 - h_{jj}\right)$$

where $h_{jj}$ is the jth diagonal element of H. This variance is estimated using

$$\hat{V}\!\left(e_j\right) = s^2\left(1 - h_{jj}\right)$$
Hat Diagonal
The hat diagonal, $h_{jj}$, is the jth diagonal element of the hat matrix, H, where (in the weighted case, following the definition above)

$$\mathbf{H} = \mathbf{W}^{1/2}\mathbf{X}\left(\mathbf{X}'\mathbf{W}\mathbf{X}\right)^{-1}\mathbf{X}'\mathbf{W}^{1/2}$$

H captures an observation’s remoteness in the X-space. Some authors refer to the hat diagonal as a measure of
leverage in the X-space. As a rule of thumb, hat diagonals greater than 4/N are considered influential and are
called high-leverage observations.
Note that a high-leverage observation is not a bad observation. Rather, high-leverage observations exert extra
influence on the final results, so care should be taken to ensure that they are correct. You should not delete an
observation just because it has high leverage. However, when you interpret the regression equation, you should
bear in mind that the results may be due to a few high-leverage observations.
Standardized Residual
As shown above, the variance of the observed residuals is not constant. This makes comparisons among the
residuals difficult. One solution is to standardize the residuals by dividing by their standard deviations. This
gives a set of residuals with constant variance; in the unweighted case, the standardized residual is

$$r_j = \frac{e_j}{s\sqrt{1 - h_{jj}}}$$
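A short numpy sketch (not NCSS code) of the hat diagonals and standardized residuals for the unweighted case:

```python
# Sketch (not NCSS code) of hat diagonals and standardized residuals for the
# unweighted case, given a design matrix X (with intercept column) and y.
import numpy as np

def hat_and_standardized(X, y):
    H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix H = X(X'X)^-1 X'
    h = np.diag(H)                           # leverages h_jj
    e = y - H @ y                            # residuals e = (I - H) y
    n, k = X.shape
    s2 = np.sum(e ** 2) / (n - k)            # residual variance estimate
    r = e / np.sqrt(s2 * (1 - h))            # standardized residuals
    return h, r
```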
s(j) or MSE(j)
This is the value of the mean squared error calculated without observation j. The formula for s(j) is given by

$$s(j)^2 = \frac{1}{N-p-1}\sum_{i=1,\,i\neq j}^{N} w_i\left(y_i - \mathbf{x}_i'\mathbf{b}(j)\right)^2 = \frac{(N-p)\,s^2 - \dfrac{w_j e_j^2}{1-h_{jj}}}{N-p-1}$$
RStudent
RStudent is similar to the studentized residual. The difference is that s(j) is used rather than s in the denominator.
The quantity s(j) is calculated using the same formula as s, except that observation j is omitted. The hope is that
by excluding this observation, a better estimate of $\sigma^2$ will be obtained. Some statisticians refer to these as
studentized deleted residuals.

$$t_j = \frac{e_j}{s(j)\sqrt{1 - h_{jj}}}$$

If the regression assumptions of normality are valid, a single value of RStudent has a t distribution with N - p - 2
degrees of freedom. It is reasonable to consider |RStudent| > 2 as outliers.
DFFITS
DFFITS is the standardized difference between the predicted value with and without that observation. The
formula for DFFITS is

$$DFFITS_j = \frac{\hat{y}_j - \hat{y}_j(j)}{s(j)\sqrt{h_{jj}}} = t_j\sqrt{\frac{h_{jj}}{1 - h_{jj}}}$$

The values of $\hat{y}_j(j)$ and $s^2(j)$ are found by removing observation j before doing the calculations. DFFITS
represents the number of estimated standard errors that the fitted value changes if the jth observation is omitted
from the data set. If |DFFITS| > 1, the observation should be considered influential with regard to
prediction.
Cook’s D
The DFFITS statistic attempts to measure the influence of a single observation on its fitted value. Cook’s distance
(Cook’s D) attempts to measure the influence of each observation on all N fitted values. The formula for Cook’s D is

$$D_j = \frac{\displaystyle\sum_{i=1}^{N} w_i\left[\hat{y}_i - \hat{y}_i(j)\right]^2}{p\,s^2}$$

The $\hat{y}_i(j)$ are found by removing observation j before the calculations. Rather than go to all the time of
recalculating the regression coefficients N times, we use the following approximation

$$D_j = \frac{w_j e_j^2 h_{jj}}{p\,s^2\left(1 - h_{jj}\right)^2}$$
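The sketch below (not NCSS code) computes RStudent, DFFITS, and the Cook's D approximation for the unweighted case, using the standard leave-one-out updating identity for s(j)² (degrees-of-freedom conventions vary slightly across texts):

```python
# Sketch (not NCSS code) of RStudent, DFFITS, and Cook's D in the unweighted
# case, from residuals e, leverages h, residual variance s2, n rows, p IVs.
import numpy as np

def influence_diagnostics(e, h, s2, n, p):
    # Leave-one-out variance estimates via the updating identity
    s2_j = ((n - p - 1) * s2 - e ** 2 / (1 - h)) / (n - p - 2)
    rstudent = e / np.sqrt(s2_j * (1 - h))            # studentized deleted residuals
    dffits = rstudent * np.sqrt(h / (1 - h))          # DFFITS_j
    cooks_d = e ** 2 * h / (p * s2 * (1 - h) ** 2)    # Cook's D approximation
    return rstudent, dffits, cooks_d
```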
CovRatio
This diagnostic flags observations that have a major impact on the generalized variance of the regression
coefficients. A value exceeding 1.0 implies that the jth observation provides an improvement, i.e., a reduction in
the generalized variance of the coefficients. A value of CovRatio less than 1.0 flags an observation that increases
the estimated generalized variance. This is not a favorable condition.
The general formula for the CovRatio is
$$CovRatio_j = \frac{\det\!\left[s(j)^2\left(\mathbf{X}(j)'\mathbf{W}\mathbf{X}(j)\right)^{-1}\right]}{\det\!\left[s^2\left(\mathbf{X}'\mathbf{W}\mathbf{X}\right)^{-1}\right]} = \frac{1}{1-h_{jj}}\left[\frac{s(j)^2}{s^2}\right]^{p}$$
Belsley, Kuh, and Welsch (1980) give the following guidelines for the CovRatio.
If CovRatio > 1 + 3p / N then omitting this observation significantly damages the precision of at least some of the
regression estimates.
If CovRatio < 1 - 3p / N then omitting this observation significantly improves the precision of at least some of the
regression estimates.
DFBETAS
The DFBETAS criterion measures the standardized change in a regression coefficient when an observation is
omitted. The formula for this criterion is
$$DFBETAS_{kj} = \frac{b_k - b_k(j)}{s(j)\sqrt{c_{kk}}}$$

where $c_{kk}$ is the kth diagonal element of $\left(\mathbf{X}'\mathbf{W}\mathbf{X}\right)^{-1}$. Belsley, Kuh, and Welsch (1980) recommend using a cutoff of $2/\sqrt{N}$ when N is greater than 100. When N is
less than 100, others have suggested using a cutoff of 1.0 or 2.0 for the absolute value of DFBETAS.
Press Value
PRESS is an acronym for prediction sum of squares. It was developed for use in variable selection to validate a
regression model. To calculate PRESS, each observation is individually omitted. The remaining N - 1
observations are used to calculate a regression and estimate the value of the omitted observation. This is done N
times, once for each observation. The difference between the actual Y value and the predicted Y with the
observation deleted is called the prediction error or PRESS residual. The sum of the squared prediction errors is
the PRESS value. The smaller PRESS is, the better the predictability of the model.
The formula for PRESS is
$$PRESS = \sum_{j=1}^{N} w_j\left[y_j - \hat{y}_j(j)\right]^2$$
Press R-Squared
The PRESS value above can be used to compute an R2-like statistic, called R2Predict, which reflects the
prediction ability of the model. This is a good way to validate the prediction of a regression model without
selecting another sample or splitting your data. It is very possible to have a high R2 and a very low R2Predict.
When this occurs, it implies that the fitted model is data dependent. This R2Predict ranges from below zero to
above one. When outside the range of zero to one, it is truncated to stay within this range.
$$R^2_{predict} = 1 - \frac{PRESS}{SS_{tot}}$$
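A compact sketch (not NCSS code) of PRESS and Press R-squared for the unweighted case, using the identity that the PRESS residual for observation j equals $e_j/(1-h_{jj})$:

```python
# Sketch (not NCSS code) of PRESS and Press R-squared, unweighted case.
import numpy as np

def press_r2(y, e, h):
    press = np.sum((e / (1 - h)) ** 2)        # sum of squared PRESS residuals
    ss_tot = np.sum((y - y.mean()) ** 2)      # adjusted total sum of squares
    r2_predict = 1 - press / ss_tot
    # NCSS truncates R2predict to stay within the [0, 1] range
    return press, min(max(r2_predict, 0.0), 1.0)
```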
Bootstrapping
Bootstrapping was developed to provide standard errors and confidence intervals for regression coefficients and
predicted values in situations in which the standard assumptions are not valid. In these nonstandard situations,
bootstrapping is a viable alternative to the corrective action suggested earlier. The method is simple in concept,
but it requires extensive computation time.
The bootstrap is simple to describe. You assume that your sample is actually the population and you draw B
samples (B is over 1000) of size N from your original sample with replacement. With replacement means that
each observation may be selected more than once. For each bootstrap sample, the regression results are computed
and stored.
Suppose that you want the standard error and a confidence interval of the slope. The bootstrap sampling process
has provided B estimates of the slope. The standard deviation of these B estimates of the slope is the bootstrap
estimate of the standard error of the slope. The bootstrap confidence interval is found by arranging the B values in
sorted order and selecting the appropriate percentiles from the list. For example, a 90% bootstrap confidence
interval for the slope is given by the fifth and ninety-fifth percentiles of the bootstrap slope values. The bootstrap
method can be applied to many of the statistics that are computed in regression analysis.
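The sketch below (not NCSS code; the helper name is hypothetical) illustrates the percentile bootstrap for a single slope by resampling observations; the modified-residual approach described later resamples residuals instead:

```python
# Sketch (not NCSS code) of a percentile bootstrap confidence interval for a
# regression slope. X includes a leading column of ones (the intercept).
import numpy as np

def bootstrap_slope_ci(X, y, coef_index, B=2000, alpha=0.10, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    slopes = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)      # sample N rows with replacement
        bb, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        slopes[b] = bb[coef_index]
    # Standard error estimate and percentile confidence limits
    se = slopes.std(ddof=1)
    lo, hi = np.quantile(slopes, [alpha / 2, 1 - alpha / 2])
    return se, (lo, hi)
```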
The main assumption made when using the bootstrap method is that your sample approximates the population
fairly well. Because of this assumption, bootstrapping does not work well for small samples in which there is little
likelihood that the sample is representative of the population. Bootstrapping should only be used in medium to
large samples.
When applied to linear regression, there are two types of bootstrapping that can be used.
Modified Residuals
Davison and Hinkley (1999) page 279 recommend the use of a special rescaling of the residuals when
bootstrapping to keep results unbiased. These modified residuals are calculated using
$$e_j^* = \frac{e_j}{\sqrt{\dfrac{1 - h_{jj}}{w_j}}} - \overline{e}^{\,*}$$

where

$$\overline{e}^{\,*} = \frac{\displaystyle\sum_{j=1}^{N} w_j e_j^*}{\displaystyle\sum_{j=1}^{N} w_j}$$
$$y^{+} = \hat{y} - \sum x_i\left(b_i^* - b_i\right) + e_{+}^*$$
where e+* is a randomly selected modified residual. By adding the randomly sample residual we have added an
appropriate amount of variation to represent the variance of individual Y’s about their mean value.
Data Structure
The data are entered in two or more columns. An example of data appropriate for this procedure is shown below.
These data are from a study of the relationship of several variables with a person’s I.Q. Fifteen people were
studied. Each person’s IQ was recorded along with scores on five different personality tests. The data are
contained in the IQ dataset. We suggest that you open this database now so that you can follow along with the
example.
IQ dataset
Test1 Test2 Test3 Test4 Test5 IQ
83 34 65 63 64 106
73 19 73 48 82 92
54 81 82 65 73 102
96 72 91 88 94 121
84 53 72 68 82 102
86 72 63 79 57 105
76 62 64 69 64 97
54 49 43 52 84 92
37 43 92 39 72 94
42 54 96 48 83 112
71 63 52 69 42 130
63 74 74 71 91 115
69 81 82 75 54 98
81 89 64 85 62 96
50 75 72 64 45 103
Missing Values
Rows with missing values in the variables being analyzed are ignored. If data are present on a row for all but the
dependent variable, a predicted value and confidence limits are generated for that row.
Procedure Options
This section describes the options available in this procedure.
Dependent Variable
Y
This option specifies one or more dependent (Y) variables. If more than one variable is specified, a separate
analysis is run for each.
Independent Variables
Numeric X’s
Specify numeric independent (also called regressor, explanatory, or predictor) variables here. Numeric variables
are those whose values are numeric and are at least ordinal. Nominal variables, even when coded with numbers,
should be specified as Categorical Independent Variables. Although you may specify binary (0-1) variables here,
they are more appropriately analyzed when you specify them as Categorical X’s.
If you want to create powers and cross-products of these variables, specify an appropriate model below in the
Regression Model section.
If you want to create predicted values of Y for values of X not in your database, add the new X values as rows at
the bottom of the database, leaving the value of Y blank. These rows will not be used during the estimation phase, but
predicted values will be generated for them on the reports.
Categorical X’s
Specify categorical (nominal or group) independent variables in this box. By categorical we mean that the
variable has only a few unique, numeric or text, values like 1, 2, 3 or Yes, No, Maybe. The values are used to
identify categories.
Regression analysis is only defined for numeric variables. Since categorical variables are nominal, they cannot be
used directly in regression. Instead, an internal set of numeric variables must be substituted for each categorical
variable.
Suppose a categorical variable has G categories. NCSS automatically generates the G-1 internal, numeric
variables needed for the analysis. The way these internal variables are created is determined by the Recoding Scheme
and, if needed, the Reference Value. These options can be entered separately with each categorical variable, or
they can be specified using default values (see Default Recoding Scheme and Default Reference Value below).
The syntax for specifying a categorical variable is VarName(CType; RefValue) where VarName is the name of the
variable, CType is the recoding scheme, and RefValue is the reference value, if needed.
CType
The recoding scheme is entered as a letter. Possible choices are B, P, R, N, S, L, F, A, 1, 2, 3, 4, 5, or E. The
meaning of each of these letters is as follows.
• P for Polynomial of up to 5th order (you cannot use this option with categorical variables having more than 6
categories).
Example: Categorical variable Z with 4 categories.
Z P1 P2 P3
1 -3 1 -1
3 -1 -1 3
5 1 -1 -3
7 3 1 1
• R to compare each with the reference value (the group with the reference value is skipped).
Example: Categorical variable Z with 4 categories. Category D is the reference value.
Z C1 C2 C3
A 1 0 0
B 0 1 0
C 0 0 1
D -1 -1 -1
• A to compare each with the average of all categories (the Reference Value is skipped).
Example: Categorical variable Z with 4 categories. Suppose the reference value is 3.
Z S1 S2 S3
1 -3 1 1
3 1 1 1
5 1 -3 1
7 1 1 -3
RefValue
A second, optional argument is the reference value. The reference value is one of the categories. The other
categories are compared to it, so it is usually a baseline or control value. If neither a baseline nor a control value is
evident, the reference value is the most frequent value.
For example, suppose you want to include a categorical independent variable, State, which has four values: Texas,
California, Florida, and NewYork. Suppose the recoding scheme is specified as Compare Each with Reference
Value with the reference value of California. You would enter
State(R;California)
• Polynomial of up to 5th order (you cannot use this option with categorical variables having more than 6
categories).
Example: Categorical variable Z with 4 categories.
Z P1 P2 P3
1 -3 1 -1
3 -1 -1 3
5 1 -1 -3
7 3 1 1
• Compare Each with Reference Value (the group with the reference value is skipped).
Example: Categorical variable Z with 4 categories. Category D is the reference value.
Z C1 C2 C3
A 1 0 0
B 0 1 0
C 0 0 1
D -1 -1 -1
Weight Variable
Weights
When used, this is the name of a variable containing observation weights for generating a weighted-regression
analysis. These weight values should be non-negative.
Regression Model
These options control which terms are included in the regression model.
Terms
This option specifies which terms (main effects, powers, cross-products, and interactions) are included in the regression
model. For a straightforward regression model, select Up to 1-Way.
The options are
• Up to 1-Way
This option generates a model in which each variable is represented by a single model term. No cross-
products, interactions, or powers are added. Use this option when you want to use the variables you have
specified, but you do not want to generate other terms.
This is the option to select when you want to analyze the independent variables specified without adding any
other terms.
For example, if you have three independent variables A, B, and C, this would generate the model:
A+B+C
• Up to 2-Way
This option specifies that all individual variables, two-way interactions, and squares of numeric variables are
included in the model. For example, if you have three numeric variables A, B, and C, this would generate the
model:
A + B + C + A*B + A*C + B*C + A*A + B*B + C*C
On the other hand, if you have three categorical variables A, B, and C, this would generate the model:
A + B + C + A*B + A*C + B*C
• Up to 3-Way
All individual variables, two-way interactions, three-way interactions, squares of numeric variables, and
cubes of numeric variables are included in the model. For example, if you have three numeric, independent
variables A, B, and C, this would generate the model:
A + B + C + A*B + A*C + B*C + A*B*C + A*A + B*B + C*C + A*A*B + A*A*C + B*B*C +A*C*C +
B*C*C
On the other hand, if you have three categorical variables A, B, and C, this would generate the model:
A + B + C + A*B + A*C + B*C + A*B*C
• Up to 4-Way
All individual variables, two-way interactions, three-way interactions, and four-way interactions are included
in the model. Also included would be squares, cubes, and quartics of numeric variables and their cross-
products.
For example, if you have four categorical variables A, B, C, and D, this would generate the model:
A + B + C + D + A*B + A*C + A*D + B*C + B*D + C*D + A*B*C + A*B*D + A*C*D + B*C*D +
A*B*C*D
• Interaction
Mainly used for categorical variables. A saturated model (all terms and their interactions) is generated. This
requires a dataset with no missing categorical-variable combinations (you can have unequal numbers of
observations for each combination of the categorical variables). No squares, cubes, etc. are generated.
For example, if you have three independent variables A, B, and C, this would generate the model:
A + B + C + A*B + A*C + B*C + A*B*C
Note that the interpretation of this model is covered in the Custom Model discussion below.
• Custom Model
The model specified in the Custom Model box is used.
Remove Intercept
Unchecked indicates that the intercept term, β0 , is to be included in the regression. Checked indicates that the
intercept should be omitted from the regression model. Note that deleting the intercept distorts most of the
diagnostic statistics (R2, etc.). In most situations, you should include the intercept in the model.
Replace Custom Model with Preview Model (button)
When this button is pressed, the Custom Model is cleared and a copy of the Preview model is stored in the
Custom Model. You can then edit this Custom Model as desired.
Syntax
A model is written by listing one or more terms. The terms are separated by a blank or plus sign. Terms include
variables and interactions. Specify regular variables (main effects) by entering the variable names. Specify
interactions by listing each variable in the interaction separated by an asterisk (*), such as Fruit*Nuts or A*B*C.
You can use the bar (|) symbol as a shorthand technique for specifying many interactions quickly. When several
variables are separated by bars, all of their interactions are generated. For example, A|B|C is interpreted as A + B
+ C + A*B + A*C + B*C + A*B*C.
You can use parentheses. For example, A*(B+C) is interpreted as A*B + A*C.
Some examples will help to indicate how the model syntax works:
A|B = A + B + A*B
A|B A*A B*B = A + B + A*B + A*A + B*B
Note that you should only repeat numeric variables. That is, A*A is valid for a numeric variable, but not for a
categorical variable.
A|A|B|B (Max Term Order=2) = A + B + A*A + A*B + B*B
A|B|C = A + B + C + A*B + A*C + B*C + A*B*C
(A + B)*(C + D) = A*C + A*D + B*C + B*D
(A + B)|C = (A + B) + C + (A + B)*C = A + B + C + A*C + B*C
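The bar expansion can be described compactly in code. This sketch (a hypothetical helper, not part of NCSS) generates all main effects and interactions for A|B|C:

```python
# Sketch (hypothetical helper, not part of NCSS) of the bar (|) syntax
# semantics: A|B|C expands to all main effects and interactions.
from itertools import combinations

def expand_bar(variables):
    terms = []
    for size in range(1, len(variables) + 1):
        for combo in combinations(variables, size):
            terms.append("*".join(combo))   # e.g. ("A", "B") -> "A*B"
    return " + ".join(terms)

print(expand_bar(["A", "B", "C"]))
# -> A + B + C + A*B + A*C + B*C + A*B*C
```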
Reports Tab
The following options control which reports and plots are displayed. Since over 30 reports are available, you may
want to spend some time deciding which reports to display on a routine basis and create a template that saves your
favorite choices.
Report Options
Show All Rows
This option makes it possible to display predicted values for only a few designated rows.
When checked, predicted values, residuals, and other row-by-row statistics will be displayed for all rows used in
the analysis.
When not checked, predicted values and other row-by-row statistics will be displayed only for those rows in
which the dependent variable’s value is missing.
• Percentile
The confidence limits are the corresponding percentiles of the bootstrap values.
• Reflection
The confidence limits are formed by reflecting the percentile limits. If X0 is the original value of the
parameter estimate and XL and XU are the percentile confidence limits, the reflection interval is (2 X0 - XU, 2
X0 - XL).
Format Tab
These options specify the number of decimal places shown when the indicated value is displayed in a report. The
number of decimal places shown in plots is controlled by the Tick Label Settings buttons on the Axes tabs.
Report Options
Precision
This option is used when the number of decimal places is set to All. It specifies whether numbers are displayed as
single (7-digit) or double (13-digit) precision numbers in the output. All calculations are performed in double
precision regardless of the Precision selected here.
Single
Unformatted numbers are displayed with 7 digits.
Double
Unformatted numbers are displayed with 13 digits. This option is most often used when extremely accurate
results are needed for further calculation. For example, double precision might be used when you are going to use
the Multiple Regression model in a transformation.
Double Precision Format Misalignment
Double precision numbers may require more space than is available in the output columns, causing column
alignment problems. The double precision option is for those instances when accuracy is more important than
format alignment.
Variable Names
This option lets you select whether to display variable names, variable labels, or both.
Stagger label and output if label length is ≥
When writing a row of information to a report, some variable names/labels may be too long to fit in the space
allocated. If the name (or label) contains more characters than specified here, the rest of the output for that line is
moved down to the next line. Most reports are designed to hold a label of up to 15 characters.
Enter 1 when you always want each row’s output to be printed on two lines.
Enter 100 when you want each row printed on only one line. Note that this may cause some columns to be
misaligned.
Decimal Places
Probability ... Mean Square Decimals
Specify the number of digits after the decimal point to display on the output of values of this type. This option in
no way influences the accuracy with which the calculations are done.
All
Select All to display all digits available. The number of digits displayed by this option is controlled by whether
the Precision option is Single (7) or Double (13).
Plots Tab
These options control the inclusion and the settings of each of the plots.
Select Plots
Histogram ... Partial Resid vs X Plot
Indicate whether to display these plots. Click the plot format button to change the plot settings.
Storage Tab
These options let you specify if, and where on the dataset, various statistics are stored.
Warning: Any data already in these variables are replaced by the new data. Be careful not to specify columns that
contain important data.
This report summarizes the multiple regression results. It presents the variables used, the number of rows used,
and the basic results.
R2
R2, officially known as the coefficient of determination, is defined as

$$R^2 = \frac{SS_{Model}}{SS_{Total(Adjusted)}}$$
R2 is probably the most popular statistical measure of how well the regression model fits the data. R2 may be
defined either as a ratio or a percentage. Since we use the ratio form, its values range from zero to one. A value of
R2 near zero indicates no linear relationship between the Y and the X’s, while a value near one indicates a perfect
linear fit. Although popular, R2 should not be used indiscriminately or interpreted without scatter plot support.
Following are some qualifications on its interpretation:
1. Additional independent variables. It is possible to increase R2 by adding more independent variables, but
the additional independent variables may actually cause an increase in the mean square error, an
unfavorable situation. This case happens when your sample size is small.
2. Range of the independent variables. R2 is influenced by the range of each independent variable. R2
increases as the range of the X’s increases and decreases as the range of the X’s decreases.
3. Slope magnitudes. R2 does not measure the magnitude of the slopes.
4. Linearity. R2 does not measure the appropriateness of a linear model. It measures the strength of the linear
component of the model. Suppose the relationship between X and Y formed a perfect circle. The R2 value of
this relationship would be zero.
5. Predictability. A large R2 does not necessarily mean high predictability, nor does a low R2 necessarily
mean poor predictability.
6. No-intercept model. The definition of R2 assumes that there is an intercept in the regression model. When
the intercept is left out of the model, the definition of R2 changes dramatically. The fact that your R2 value
increases when you remove the intercept from the regression model does not reflect an increase in the
goodness of fit. Rather, it reflects a change in the underlying meaning of R2.
7. Sample size. R2 is highly sensitive to the number of observations. The smaller the sample size, the larger
its value.
Adjusted R2
This is an adjusted version of R2. The adjustment seeks to remove the distortion due to a small sample size.
Coefficient of Variation
The coefficient of variation is a relative measure of dispersion, computed by dividing root mean square error by
the mean of the dependent variable. By itself, it has little value, but it can be useful in comparative studies.
$$CV = \frac{\sqrt{MSE}}{\bar{y}}$$
Ave Abs Pct Error
This is the average of the absolute percent errors. It is another measure of the goodness of fit of the regression
model to the data. It is calculated using the formula
$$AAPE = \frac{100\displaystyle\sum_{j=1}^{N}\left|\frac{y_j - \hat{y}_j}{y_j}\right|}{N}$$
Note that when the dependent variable is zero, its predicted value is used in the denominator.
Descriptive Statistics
Variable   Count   Mean      Standard Deviation   Minimum   Maximum
Test1      15      67.933    17.392               37        96
Test2      15      61.400    19.397               19        89
Test3      15      72.333    14.734               43        96
Test4      15      65.533    13.953               39        88
Test5      15      69.933    16.153               42        94
IQ         15      104.333   11.017               92        130
For each variable, the count, arithmetic mean, standard deviation, minimum, and maximum are computed. This
report is particularly useful for checking that the correct variables were selected.
Correlation Matrix
        Test1     Test2     Test3     Test4     Test5     IQ
Test1   1.0000    0.1000    -0.2608   0.7539    0.0140    0.2256
Test2   0.1000    1.0000    0.0572    0.7196    -0.2814   0.2407
Test3   -0.2608   0.0572    1.0000    -0.1409   0.3473    0.0741
Test4   0.7539    0.7196    -0.1409   1.0000    -0.1729   0.3714
Test5   0.0140    -0.2814   0.3473    -0.1729   1.0000    -0.0581
IQ      0.2256    0.2407    0.0741    0.3714    -0.0581   1.0000
Pearson correlations are given for all variables. Outliers, nonnormality, nonconstant variance, and nonlinearities
can all impact these correlations. Note that these correlations may differ from pair-wise correlations generated by
the correlation matrix program because of the different ways the two programs treat rows with missing values.
The method used here is row-wise deletion.
These correlation coefficients show which independent variables are highly correlated with the dependent variable
and with each other. Independent variables that are highly correlated with one another may cause collinearity
problems.
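The row-wise versus pair-wise distinction is easy to reproduce. In the hypothetical pandas sketch below, df is assumed to be a data frame holding the six variables, possibly with missing values; neither the data frame nor the method calls come from NCSS itself.

```python
import pandas as pd

# Row-wise (listwise) deletion, as used in this report: drop every row
# that has any missing value, then correlate the remaining complete rows.
rowwise = df.dropna().corr()

# Pair-wise deletion: each correlation uses every row that is complete
# for that particular pair, which is why a separate correlation matrix
# program can report different values.
pairwise = df.corr()
```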
This section reports the values and significance tests of the regression coefficients. Before using this report, check
that the assumptions are reasonable. For instance, collinearity can cause the t-tests to give false results and the
regression coefficients to be of the wrong magnitude or sign.
Independent Variable
The names of the independent variables are listed here. The intercept is the value of the Y intercept.
Note that the name may become very long, especially for interaction terms. These long names may misalign the
report. You can force the rest of the items to be printed on the next line by using the Skip Line After option in the
Format tab. This should create a better looking report when the names are extra long.
Regression Coefficient
The regression coefficients are the least squares estimates of the parameters. The value indicates how much
change in Y occurs for a one-unit change in that particular X when the remaining X’s are held constant. These
coefficients are often called partial-regression coefficients since the effect of the other X’s is removed. These
coefficients are the values of b0 , b1 , , bp .
Standard Error
The standard error of the regression coefficient, s_{b_j}, is the standard deviation of the estimate. It is used in
hypothesis tests or confidence limits.
T-Value to test Ho: B(i)=0
This is the t-test value for testing the hypothesis that β j = 0 versus the alternative that β j ≠ 0 after removing the
influence of all other X’s. This t-value has n-p-1 degrees of freedom.
To test for a value other than zero, use the formula below. There is an easier way to test hypothesized values using
confidence limits. See the discussion below under Confidence Limits. The formula for the t-test is
t_j = \frac{b_j - \beta_j^*}{s_{b_j}}
Prob Level
This is the p-value for the significance test of the regression coefficient. The p-value is the probability that this t-
statistic will take on a value at least as extreme as the actually observed value, assuming that the null hypothesis is
true (i.e., the regression estimate is equal to zero). If the p-value is less than alpha, say 0.05, the null hypothesis of
equality is rejected. This p-value is for a two-tail test.
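A sketch of this t-test and its two-tail p-value follows; scipy supplies the t distribution, and the numeric inputs in the example are placeholders rather than values from the report.

```python
from scipy import stats

def coefficient_t_test(b_j, s_bj, beta_star, n, p):
    """Test H0: beta_j = beta_star against a two-sided alternative."""
    df = n - p - 1
    t_value = (b_j - beta_star) / s_bj
    prob_level = 2.0 * stats.t.sf(abs(t_value), df)   # two-tail p-value
    return t_value, prob_level

# Hypothetical example: a coefficient of 3.778 with an illustrative
# standard error of 2.5 (not a value from the report), n = 15, p = 5,
# testing against zero.
t, p_val = coefficient_t_test(3.778, 2.5, 0.0, n=15, p=5)
```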
Reject H0 at 5%?
This is the conclusion reached about the null hypothesis. It will be either reject H0 at the 5% level of significance
or not.
Note that the level of significance is specified in the Alpha of C.I.’s and Tests box on the Format tab panel.
Power (5%)
Power is the probability of rejecting the null hypothesis that β_j = 0 when β_j = β_j* ≠ 0. The power is calculated
for the case when β_j* = b_j, σ² = s², and alpha is as specified in the Alpha of C.I.’s and Tests option.
High power is desirable. High power means that there is a high probability of rejecting the null hypothesis that the
regression coefficient is zero when this is false. This is a critical measure of sensitivity in hypothesis testing. This
estimate of power is based upon the assumption that the residuals are normally distributed.
Estimated Model
This is the least squares regression line presented in double precision. Besides showing the regression model in
long form, it may be used as a transformation by copying and pasting it into the Transformation portion of the
spreadsheet.
Note that a transformation must be less than 255 characters. Since these formulas are often longer than 255
characters, you must use the FILE(filename) transformation. To do so, copy the formula into a text editor such as
Notepad and save the file as unformatted text (ASCII). The transformation is then FILE(filename), where filename
is the name of the text file, including directory information. When the transformation is executed, it loads the file
and uses the transformation stored there.
Independent Variable
The names of the independent variables are listed here. The intercept is the value of the Y intercept.
Note that the name may become very long, especially for interaction terms. These long names may misalign the
report. You can force the rest of the items to be printed on the next line by using the Skip Line After option in the
Format tab. This should create a better looking report when the names are extra long.
Regression Coefficient
The regression coefficients are the least squares estimates of the parameters. The value indicates how much
change in Y occurs for a one-unit change in x when the remaining X’s are held constant. These coefficients are
often called partial-regression coefficients since the effect of the other X’s is removed. These coefficients are the
values of b_0, b_1, ..., b_p.
Standard Error
The standard error of the regression coefficient, s_{b_j}, is the standard deviation of the estimate. It is used in
hypothesis tests and confidence limits.
Lower - Upper 95% C.L.
These are the lower and upper values of a 100(1 − α)% interval estimate for β_j based on a t-distribution with
n-p-1 degrees of freedom. This interval estimate assumes that the residuals for the regression model are normally
distributed.
These confidence limits may be used for significance testing values of β j other than zero. If a specific value is
not within this interval, it is significantly different from that value. Note that these confidence limits are set up as
if you are interested in each regression coefficient separately.
The formulas for the lower and upper confidence limits are:
b_j \pm t_{1-\alpha/2,\, n-p-1} \, s_{b_j}
Standardized Coefficient
Standardized regression coefficients are the coefficients that would be obtained if you standardized the
independent variables and the dependent variable. Here standardizing is defined as subtracting the mean and
dividing by the standard deviation of a variable. A regression analysis on these standardized variables would yield
these standardized coefficients.
When the independent variables have vastly different scales of measurement, this value provides a way of making
comparisons among variables. The formula for the standardized regression coefficient is:
b_{j,std} = b_j \left( \frac{s_{X_j}}{s_Y} \right)

where s_Y and s_{X_j} are the standard deviations for the dependent variable and the jth independent variable.
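The conversion is a one-line computation. The hedged numpy sketch below assumes X is an n-by-p array of IV values (without the intercept column), y is the DV, and b holds the raw slope estimates; none of these names come from NCSS.

```python
import numpy as np

def standardized_coefficients(b, X, y):
    """b_j,std = b_j * (s_Xj / s_Y) for each slope."""
    s_x = X.std(axis=0, ddof=1)   # sample standard deviation of each IV
    s_y = y.std(ddof=1)           # sample standard deviation of the DV
    return b * s_x / s_y
```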
Estimated Equation
IQ =
85.2403846967439 - 1.93357123818932 * Test1 - 1.65988116961152 * Test2 + 0.104954325385776 * Test3
+ 3.77837667941384 * Test4 - 0.0405775409260279 * Test5
This is the least squares regression line presented in double precision. Besides showing the regression model in
long form, it may be used as a transformation by copying and pasting it into the Transformation portion of the
spreadsheet.
Analysis of Variance
Source            DF   R2       Sum of Squares   Mean Square   F-Ratio   Prob Level   Power (5%)
Intercept         1             163281.67        163281.67
Model             5    0.3991   678.15           135.63        1.195     0.3835       0.2565
Error             9    0.6009   1021.18          113.46
Total(Adjusted)   14   1.0000   1699.33          121.38
An analysis of variance (ANOVA) table summarizes the information related to the variation in data.
Source
This represents a partition of the variation in Y.
R2
This is the overall R2 of the regression model.
DF
The degrees of freedom are the number of dimensions associated with this term. Note that each observation can
be interpreted as a dimension in n-dimensional space. The degrees of freedom for the intercept, model, error, and
adjusted total are 1, p, n-p-1, and n-1, respectively.
Sum of Squares
These are the sums of squares associated with the corresponding sources of variation. Note that these values are in
terms of the dependent variable. The formulas for each are
SS_{Intercept} = n\bar{y}^2

SS_{Model} = \sum_{j=1}^{N} \left( \hat{y}_j - \bar{y} \right)^2

SS_{Error} = \sum_{j=1}^{N} \left( y_j - \hat{y}_j \right)^2

SS_{Total} = \sum_{j=1}^{N} \left( y_j - \bar{y} \right)^2
Mean Squares
The mean square is the sum of squares divided by the degrees of freedom. This mean square is an estimated
variance. For example, the mean square error is the estimated variance of the residuals.
F-Ratio
This is the F-statistic for testing the null hypothesis that all β j = 0 . This F-statistic has p degrees of freedom for
the numerator variance and n-p-1 degrees of freedom for the denominator variance.
Prob Level
This is the p-value for the above F-test. The p-value is the probability that the test statistic will take on a value at
least as extreme as the observed value, assuming that the null hypothesis is true. If the p-value is less than α, say
0.05, the null hypothesis is rejected. If the p-value is greater than α, the null hypothesis is not rejected.
Power(5%)
Power is the probability of rejecting the null hypothesis that all the regression coefficients are zero when at least
one is not.
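The table above can be reproduced with a few lines of arithmetic. The sketch below is illustrative Python, assuming y and y_hat are numpy arrays and p is the number of IV's; in the example output, the F-Ratio of 1.195 is simply 135.63 / 113.46.

```python
import numpy as np
from scipy import stats

def anova_table(y, y_hat, p):
    """Partition the variation in Y and form the overall F-test."""
    n = len(y)
    ss_model = np.sum((y_hat - y.mean()) ** 2)
    ss_error = np.sum((y - y_hat) ** 2)
    ss_total = np.sum((y - y.mean()) ** 2)        # Total(Adjusted)
    ms_model = ss_model / p                       # numerator variance
    ms_error = ss_error / (n - p - 1)             # denominator variance
    f_ratio = ms_model / ms_error
    prob_level = stats.f.sf(f_ratio, p, n - p - 1)
    return ss_model, ss_error, ss_total, f_ratio, prob_level
```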
This analysis of variance table provides a line for each term in the model. It is especially useful when you have
categorical independent variables.
Source
This is the term from the design model.
Note that the name may become very long, especially for interaction terms. These long names may misalign the
report. You can force the rest of the items to be printed on the next line by using the Skip Line After option in the
Format tab. This should create a better looking report when the names are extra long.
DF
This is the number of degrees of freedom by which the model degrees of freedom are reduced when this term is
removed from the model. This is the numerator degrees of freedom of the F-test.
R2
This is the amount that R2 is reduced when this term is removed from the regression model.
Sum of Squares
This is the amount by which the model sum of squares is reduced when this term is removed from the model.
Mean Square
The mean square is the sum of squares divided by the degrees of freedom.
F-Ratio
This is the F-statistic for testing the null hypothesis that all β j associated with this term are zero. This F-statistic
has DF and n-p-1 degrees of freedom.
Prob Level
This is the p-value for the above F-test. The p-value is the probability that the test statistic will take on a value at
least as extreme as the observed value, assuming that the null hypothesis is true. If the p-value is less than α, say
0.05, the null hypothesis is rejected. If the p-value is greater than α, the null hypothesis is not rejected.
Power(5%)
Power is the probability of rejecting the null hypothesis that all the regression coefficients associated with this
term are zero, assuming that the estimated values of these coefficients are their true values.
PRESS Report
                           From PRESS   From Regular
Parameter                  Residuals    Residuals
Sum of Squared Residuals   2839.94      1021.18
Sum of |Residuals|         169.64       99.12
R²                         0.0000       0.3991
This section reports on the PRESS statistics. The regular statistics, computed on all of the data, are provided to the
side to make comparison between corresponding values easier.
Sum of Squared Residuals
PRESS is an acronym for prediction sum of squares. It was developed for use in variable selection to validate a
regression model. To calculate PRESS, each observation is individually omitted. The remaining N - 1
observations are used to calculate a regression and estimate the value of the omitted observation. This is done N
times, once for each observation. The difference between the actual Y value and the predicted Y with the
observation deleted is called the prediction error or PRESS residual. The sum of the squared prediction errors is
the PRESS value. The smaller PRESS is, the better the predictability of the model.
PRESS = \sum_{j=1}^{N} \left( y_j - \hat{y}_{j,-j} \right)^2

The sum of the absolute prediction errors, reported above as the Sum of |Residuals|, is

\sum_{j=1}^{N} \left| y_j - \hat{y}_{j,-j} \right|
Press R2
The PRESS value above can be used to compute an R2 -like statistic, called R2Predict, which reflects the
prediction ability of the model. This is a good way to validate the prediction of a regression model without
selecting another sample or splitting your data. It is very possible to have a high R2 and a very low R2Predict.
When this occurs, it implies that the fitted model is data dependent. This R2Predict ranges from below zero to
above one. When outside the range of zero to one, it is truncated to stay within this range.
R^2_{PRESS} = 1 - \frac{PRESS}{SS_{Total}}
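A direct leave-one-out implementation of PRESS and R2Predict follows. This is a sketch under the assumption that X is the design matrix including a column of ones for the intercept; NCSS's internal computation may use a different shortcut.

```python
import numpy as np

def press_statistics(X, y):
    """PRESS and the truncated R2-Predict by explicit leave-one-out."""
    n = len(y)
    press_resid = np.empty(n)
    for j in range(n):
        keep = np.arange(n) != j                   # omit observation j
        b, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        press_resid[j] = y[j] - X[j] @ b           # prediction error for row j
    press = np.sum(press_resid ** 2)
    ss_total = np.sum((y - y.mean()) ** 2)
    r2_press = 1.0 - press / ss_total
    return press, min(max(r2_press, 0.0), 1.0)    # truncate to [0, 1]
```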
Normality Tests

Test Name             Test Statistic   Prob Level   Reject H0 at 20%?
Shapiro Wilk          0.908            0.1243       Yes
Anderson Darling      0.458            0.2639       No
D'Agostino Skewness   2.033            0.0421       Yes
D'Agostino Kurtosis   1.580            0.1141       Yes
D'Agostino Omnibus    6.629            0.0364       Yes
This report gives the results of applying several normality tests to the residuals. The Shapiro-Wilk test is probably
the most popular, so it is given first. These tests are discussed in detail in the Normality Test section of the
Descriptive Statistics procedure.
This section reports the autocorrelation structure of the residuals. Of course, this report is only useful if the data
represent a time series.
Lag and Correlation
The lag, k, is the number of periods (rows) back. The correlation here is the sample autocorrelation coefficient of
lag k. It is computed as:
r_k = \frac{\sum e_{i-k} e_i}{\sum e_i^2} \quad \text{for } k = 1, 2, \ldots, 24
To test the null hypothesis that ρ_k = 0 at a 5% level of significance with a large-sample normal approximation,
reject when the absolute value of the autocorrelation coefficient, |r_k|, is greater than 2/\sqrt{N}.
Durbin-Watson Value
The Durbin-Watson test is often used to test for positive or negative, first-order, serial correlation. It is calculated
as follows
DW = \frac{\sum_{i=2}^{N} \left( e_i - e_{i-1} \right)^2}{\sum_{i=1}^{N} e_i^2}
The distribution of this test is mathematically difficult because it involves the X values. Durbin and Watson
(1950, 1951) originally gave a pair of bounds to be used. However, there is a large range of indecision when
these bounds are used. Instead of using the bounds, NCSS calculates the exact probability using the beta
distribution approximation suggested by Durbin and Watson (1951). This approximation has been shown to be
accurate to three decimal places in most cases.
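The statistics themselves are simple to compute; the exact beta-distribution p-value is more involved and is not attempted here. A hedged numpy sketch, assuming e is the vector of residuals in time (row) order:

```python
import numpy as np

def serial_correlation(e, max_lag=24):
    """Lag-k autocorrelations of the residuals and the Durbin-Watson value."""
    denom = np.sum(e ** 2)
    r = np.array([np.sum(e[k:] * e[:-k]) / denom
                  for k in range(1, max_lag + 1)])
    dw = np.sum(np.diff(e) ** 2) / denom
    # Large-sample 5% rule: flag lags where |r_k| exceeds 2 / sqrt(N).
    flagged = np.where(np.abs(r) > 2.0 / np.sqrt(len(e)))[0] + 1
    return r, dw, flagged
```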
R2 Report
Independent     Total R² for      R² Increase      R² Decrease      R² if this IV   Partial R²
Variable (IV)   this IV and       if this IV       if this IV       was Fit         Adjusted for
                Those Above       Included with    was Removed      Alone           All Other IV's
                                  IV's Above
Test1           0.0509            0.0509           0.2357           0.0509          0.2817
Test2           0.0990            0.0480           0.2414           0.0579          0.2866
Test3           0.1131            0.0142           0.0152           0.0055          0.0247
Test4           0.3964            0.2832           0.2832           0.1379          0.3203
Test5           0.3991            0.0027           0.0027           0.0034          0.0045
R2 reflects the percent of variation in Y explained by the independent variables in the model. A value of R2 near
zero indicates a complete lack of fit between Y and the Xs, while a value near one indicates a perfect fit.
In this section, various types of R2 values are given to provide insight into the variation in the dependent variable
explained either by the independent variables added in order (i.e., sequential) or by the independent variables
added last. This information is valuable in an analysis of which variables are most important.
Independent Variable
This is the name of the independent variable reported on in this row.
Total R2 for This I.V. and Those Above
This is the R2 value that would result from fitting a regression with this independent variable and those listed
above it. The IV’s below it are ignored.
R2 Increase When This IV Added to Those Above
This is the amount that this IV adds to R2 when it is added to a regression model that includes those IV’s listed
above it in the report.
R2 Decrease When This IV is Removed
This is the amount that R2 would be reduced if this IV were removed from the model. Large values here indicate
important independent variables, while small values indicate insignificant variables.
One of the main problems in interpreting these values is that each assumes all other variables are already in the
equation. This means that if two variables both represent the same underlying information, they will each seem to
be insignificant after considering the other. If you remove both, you will lose the information that either one could
have brought to the model.
R2 When This IV Is Fit Alone
This is the R2 that would be obtained if the dependent variable were only regressed against this one independent
variable. Of course, a large R2 value here indicates an important independent variable that can stand alone.
Partial R2 Adjusted For All Other IV’s
This is the square of the partial correlation coefficient. The partial R2 reflects the percent of variation in the
dependent variable explained by one independent variable, controlling for the effects of the rest of the independent
variables. Large values for this partial R2 indicate important independent variables.
One way of assessing the importance of an independent variable is to examine the impact on various goodness-of-
fit statistics of removing it from the model. This section provides that analysis.
Independent Variable
This is the name of the predictor variable reported on in this row. Note that the Full Model row gives the statistics
when no variables are omitted.
R2 When IV Omitted
This is the R2 for the multiple regression model when this independent variable is omitted and the remaining
independent variables are retained. If this R2 is close to the R2 for the full model, this variable is not very
important. On the other hand, if this R2 is much smaller than that of the full model, this independent variable is
important.
MSE When IV Omitted
This is the mean square error for the multiple regression model when this IV is omitted and the remaining IV’s
are retained. If this MSE is close to the MSE for the full model, this variable may not be very important. On the
other hand, if this MSE is much larger than that of the full model, this IV is important.
Mallow's Cp When IV Omitted
Another criterion for variable selection and importance is Mallow’s Cp statistic. The optimum model will have a
Cp value close to p+1, where p is the number of independent variables. A Cp greater than (p+1) indicates that the
regression model is overspecified (contains too many variables and stands a chance of having collinearity
problems). On the other hand, a model with a Cp less than (p+1) indicates that the regression model is
underspecified (at least one important independent variable has been omitted). The formula for the Cp statistic is
as follows, where k is the maximum number of independent variables available
C_p = (n - p - 1)\left[ \frac{MSE_p}{MSE_k} \right] - \left[ n - 2(p+1) \right]
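As a quick sketch, the statistic can be computed directly from the two mean square errors; the function below simply transcribes the formula and is not NCSS code. mse_p is the candidate model's MSE and mse_k is the MSE of the model with all k available IV's.

```python
def mallows_cp(mse_p, mse_k, n, p):
    """Mallows' Cp for a candidate model with p IV's and full-model MSE mse_k."""
    return (n - p - 1) * (mse_p / mse_k) - (n - 2 * (p + 1))
```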
H0: B=0 Prob Level
This is the two-tail p-value for testing the significance of the regression coefficient. Most likely, you would deem
IV’s with small p-values as important. However, you must be careful here. Collinearity can cause extra large p-
values, so you must check for its presence.
R2 of Regression of This IV on Other X’s
This is the R2 value that would result if this independent variable were regressed on the remaining independent
variables. A high value indicates a redundancy between this IV and the other IV’s. IV’s with a high value here
(above 0.90) are candidates for omission from the model.
This section provides the sum of squares and correlations equivalent to the R2 Section.
Independent Variable
This is the name of the IV reported on in this row.
Sequential Sum Squares
This is the sum of squares value that would result from fitting a regression with this independent variable and
those above it. The IV’s below it are ignored.
Incremental Sum Squares
This is the amount that this predictor adds to the sum of squares value when it is added to a regression model that
includes those predictors listed above it.
Last Sum Squares
This is the amount that the model sum of squares would be reduced if this variable were removed from the model.
Simple Correlation
This is the Pearson correlation coefficient between the dependent variable and the specified independent variable.
Partial Correlation
The partial correlation coefficient is a measure of the strength of the linear relationship between Y and Xj after
adjusting for the remaining (p-1) variables.
This section examines the step-by-step effect of adding variables to the regression model.
Independent Variable
This is the name of the predictor variable reported on in this row.
Included R2
This is the R2 that would be obtained if only those IV’s on this line and above were in the regression model.
Included F-ratio
This is an F-ratio for testing the hypothesis that the regression coefficients (β’s) for the IV’s listed on this row and
above are zero.
Included Prob Level
This is the p-value for the above F-ratio.
Omitted R2
This is the R2 for the full model minus the Included R2. This is the amount of R2 explained by the independent
variables listed below the current row. Large values indicate that there is much more to come with later
independent variables. On the other hand, small values indicate that remaining independent variables contribute
little to the regression model.
Omitted F-Ratio
This is an F-ratio for testing the hypothesis that the regression coefficients (β’s) for the variables listed below this
row are all zero. The alternative is that at least one coefficient is nonzero.
Omitted Prob Level
This is the p-value for the above F-ratio.
Multicollinearity Report
Independent   Variance Inflation   R² Versus      Tolerance   Diagonal of
Variable      Factor               Other I.V.'s               X'X Inverse
Test1         39.5273              0.9747         0.0253      0.00933363145144107
Test2         35.3734              0.9717         0.0283      0.00671527719536247
Test3         1.2953               0.2280         0.7720      0.000426184099335767
Test4         80.8456              0.9876         0.0124      0.0296601215840256
Test5         1.3035               0.2329         0.7671      0.000356848255025293
This report provides information useful in assessing the amount of multicollinearity in your data.
Variance Inflation
The variance inflation factor (VIF) is a measure of multicollinearity. It is the reciprocal of 1 − R_X², where R_X² is
the R² obtained when this variable is regressed on the remaining IV’s. A VIF of 10 or more for large data sets
indicates a collinearity problem, since the corresponding R_X² exceeds 0.90. For small data sets, even VIF’s of 5
or more can signify collinearity. Variables with a high VIF are candidates for exclusion from the model.
VIF_j = \frac{1}{1 - R_j^2}
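A hedged numpy sketch of the VIF calculation follows; X is assumed to hold the IV columns only (no intercept), and each VIF comes from regressing one column on the rest.

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF_j = 1 / (1 - R2_j), with R2_j from regressing X_j on the other IV's."""
    n, k = X.shape
    vifs = np.empty(k)
    for j in range(k):
        others = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(n), others])     # intercept plus other IV's
        b, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
        resid = X[:, j] - Z @ b
        ss_res = resid @ resid
        ss_tot = np.sum((X[:, j] - X[:, j].mean()) ** 2)
        vifs[j] = ss_tot / ss_res                     # = 1 / (1 - R2_j)
    return vifs
```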
This section gives an eigenvalue analysis of the independent variables when they have been centered and scaled.
Eigenvalue
The eigenvalues of the correlation matrix. The sum of the eigenvalues is equal to the number of IV’s. Eigenvalues
near zero indicate a high degree of collinearity in the data.
Incremental Percent
Incremental percent is the percent this eigenvalue is of the total. In an ideal situation, these percentages would be
equal. Percents near zero indicate collinearity in the data.
Cumulative Percent
This is the running total of the Incremental Percent.
Condition Number
Each condition number is the largest eigenvalue divided by the corresponding eigenvalue. Since the eigenvalues
are really variances, the condition number is a ratio of variances. Condition numbers greater than 1000 indicate a
severe collinearity problem, while condition numbers between 100 and 1000 indicate a mild collinearity problem.
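The eigenvalue analysis can be sketched as follows, again assuming X holds the IV columns as a numpy array; centering and scaling reduce the problem to working with the correlation matrix.

```python
import numpy as np

def eigenvalue_analysis(X):
    """Eigenvalues, percents, and condition numbers for the centered, scaled IV's."""
    corr = np.corrcoef(X, rowvar=False)                 # centered and scaled
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # largest first
    incremental = 100.0 * eigvals / eigvals.sum()
    cumulative = np.cumsum(incremental)
    condition = eigvals[0] / eigvals                    # largest over each eigenvalue
    return eigvals, incremental, cumulative, condition
```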
This report displays how the eigenvectors associated with each eigenvalue are related to the independent
variables.
No.
The number of the eigenvalue.
Eigenvalue
The eigenvalues of the correlation matrix. The sum of the eigenvalues is equal to the number of independent
variables. Eigenvalues near zero mean that there is collinearity in your data.
Values
The rest of this report gives a breakdown of what percentage each eigenvector is of the total variation for the
regression coefficient. Hence, the percentages sum to 100 down a column.
A small eigenvalue (large condition number) along with a subset of two or more independent variables having
high variance percentages indicates a dependency involving the independent variables in that subset. This
dependency has damaged or contaminated the precision of the regression coefficients estimated in the subset. Two
or more percentages of at least 50% for an eigenvector suggest a problem. When two or more variance
percentages are greater than 90%, there is definitely a collinearity problem.
Again, take the following steps when using this table.
1. Find rows with condition numbers greater than 100 (find these in the Eigenvalues of Centered
Correlations report).
2. Scan across each row found in step 1 for two or more percentages greater than 50. If two such percentages
are found, the corresponding variables are being influenced by collinearity problems. You should remove
one and re-run your analysis.
This report gives an eigenvalue analysis of the independent variables when they have been scaled but not centered
(the intercept is included in the collinearity analysis). The eigenvalues for this situation are generally not the same
as those in the previous eigenvalue analysis. Also, the condition numbers are much higher.
Eigenvalue
The eigenvalues of the scaled, but not centered, matrix. The sum of the eigenvalues is equal to the number of
independent variables. Eigenvalues near zero mean that there is collinearity in your data.
Incremental Percent
Incremental percent is the percent this eigenvalue is of the total. In an ideal situation, these percentages would be
equal. Percents near zero mean that there is collinearity in your data.
Cumulative Percent
This is the running total of the Incremental Percent.
Condition Number
The condition number is the largest eigenvalue divided by each corresponding eigenvalue. Since the eigenvalues
are really variances, the condition number is a ratio of variances. There has not been any formalization of rules on
condition numbers for uncentered matrices. You might use the criteria mentioned earlier for mild collinearity and
severe collinearity. Since the collinearity will always be worse with the intercept in the model, it is advisable to
have more relaxed criteria for mild and severe collinearity, say 500 and 5000, respectively.
This report displays how the eigenvectors associated with each eigenvalue are related to the independent
variables.
No.
The number of the eigenvalue.
Eigenvalue
The eigenvalues of the correlation matrix. The sum of the eigenvalues is equal to the number of independent
variables. Eigenvalues near zero mean that there is collinearity in your data.
Values
The rest of this report gives a breakdown of what percentage each eigenvector is of the total variation for the
regression coefficient. Hence, the percentages sum to 100 down a column.
A small eigenvalue (large condition number) along with a subset of two or more independent variables having
high variance percentages indicates a dependency involving the independent variables in that subset. This
dependency has damaged or contaminated the precision of the regression coefficients estimated in the subset. Two
or more percentages of at least 50% for an eigenvector suggest a problem. When two or more variance
percentages are greater than 90%, there is definitely a collinearity problem.
Confidence intervals for the mean response of Y given specific levels for the IV’s are provided here. It is
important to note that violations of any regression assumptions will invalidate these interval estimates.
Actual
This is the actual value of Y.
Predicted
The predicted value of Y. It is predicted using the values of the IV’s for this row. If the input data had all IV
values but no value for Y, the predicted value is still provided.
Standard Error of Predicted
This is the standard error of the mean response for the specified values of the IV’s. Note that this value is not
constant for all values of the IV’s. In fact, it is at a minimum at the average value of each IV.
Lower 95% C.L. of Mean
This is the lower limit of a 95% confidence interval estimate of the mean of Y for this observation.
Upper 95% C.L. of Mean
This is the upper limit of a 95% confidence interval estimate of the mean of Y for this observation. Note that you
set the alpha level.
A prediction interval for the individual response of Y given specific values of the IV’s is provided here for each
row.
Actual
This is the actual value of Y.
Predicted
The predicted value of Y. It is predicted using the levels of the IV’s for this row. If the input data had all values of
the IV’s but no value for Y, a predicted value is provided.
Standard Error of Predicted
This is the standard deviation of the mean response for the specified levels of the IV’s. Note that this value is not
constant for all IV’s. In fact, it is a minimum at the average value of each IV.
Lower 95% Pred. Limit of Individual
This is the lower limit of a 95% prediction interval of the individual value of Y for the values of the IV’s for this
observation.
Upper 95% Pred. Limit of Individual
This is the upper limit of a 95% prediction interval of the individual value of Y for the values of the IV’s for this
observation. Note that you set the alpha level.
Residual Report
      Actual    Predicted               Absolute        Sqrt(MSE)
Row   IQ        IQ          Residual    Percent Error   Without This Row
1     106.000   110.581     -4.581      4.32            11.08
2     92.000    98.248      -6.248      6.79            10.90
3     102.000   97.616      4.384       4.30            11.14
4     121.000   118.340     2.660       2.20            11.18
5     102.000   96.006      5.994       5.88            10.98
6     105.000   102.233     2.767       2.64            11.24
.     .         .           .           .               .
.     .         .           .           .               .
.     .         .           .           .               .
This report presents various statistics known as regression diagnostics. They let you conduct an influence analysis
of the observations. The interpretation of these values is explained in modern regression books. Belsley, Kuh, and
Welsch (1980) devote an entire book to the study of regression diagnostics.
These statistics flag observations that exert three types of influence on the regression.
1. Outliers in the residual space. The Studentized Residual, the RStudent, and the CovRatio will flag
observations that are influential because of large residuals.
2. Outliers in the X-space. The Hat Diagonal flags observations that are influential because they are outliers
in the X-space.
3. Parameter estimates and fit. The Dffits shows the influence on fitted values. It also measures the impact
on the regression coefficients. Cook’s D measures the overall impact that a single observation has on the
regression coefficient estimates.
Standardized Residual
The variances of the observed residuals are not equal, making comparisons among the residuals difficult. One
solution is to standardize the residuals by dividing by their standard deviations. This will give a set of
standardized residuals with constant variance. The formula for this residual is
r_j = \frac{e_j}{\sqrt{MSE \left( 1 - h_{jj} \right)}}
RStudent
RStudent is similar to the standardized residual. The difference is that MSE(j) is used rather than MSE in the
denominator. The quantity MSE(j) is calculated using the same formula as MSE, except that observation j is
omitted. The hope is that, by excluding this observation, a better estimate of σ2 will be obtained. Some statisticians
refer to these as the studentized deleted residuals.
If the regression assumptions of normality are valid, a single value of the RStudent has a t distribution with n-p-1
degrees of freedom.
t_j = \frac{e_j}{\sqrt{MSE_{(j)} \left( 1 - h_{jj} \right)}}
Hat Diagonal
The hat diagonal, h jj , captures an observation’s remoteness in the X-space. Some authors refer to the hat diagonal
as a measure of leverage in the X-space. Hat diagonals greater than two times the number of coefficients in the
model divided by the number of observations are said to have high leverage (i.e., h_jj > 2(p+1)/n when the model
includes an intercept).
Cook’s D
Cook’s distance (Cook’s D) attempts to measure the influence of each observation on all N fitted values. The
approximate formula for Cook’s D is

D_j = \frac{\sum_{i=1}^{N} w_i \left[ \hat{y}_i - \hat{y}_{i(j)} \right]^2}{p s^2}

The \hat{y}_{i(j)} are found by removing observation j before the calculations. Rather than recalculate the
regression coefficients N times, we use the following approximation
D_j = \frac{w_j e_j^2 h_{jj}}{p s^2 \left( 1 - h_{jj} \right)^2}
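For the unweighted case (all w_j = 1), the diagnostics above can be computed together from the hat matrix. The sketch below assumes X is the design matrix including the intercept column; the delete-one variance uses the standard algebraic identity rather than N separate refits.

```python
import numpy as np

def influence_diagnostics(X, y):
    """Hat diagonals, standardized residuals, RStudent, and Cook's D."""
    n, k = X.shape                          # k = p + 1 coefficients
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
    h = np.diag(H)
    e = y - H @ y                           # ordinary residuals
    mse = e @ e / (n - k)
    r = e / np.sqrt(mse * (1.0 - h))        # standardized residuals
    # MSE(j) via the delete-one identity, avoiding n separate refits.
    mse_j = ((n - k) * mse - e ** 2 / (1.0 - h)) / (n - k - 1)
    t = e / np.sqrt(mse_j * (1.0 - h))      # RStudent
    # Divisor p = k - 1 follows this chapter's formula; some texts use k.
    cooks_d = (e ** 2 * h) / ((k - 1) * mse * (1.0 - h) ** 2)
    return h, r, t, cooks_d
```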
DFFITS
DFFITS is the standardized difference between the predicted value with and without that observation. The
formula for DFFITS is
DFFITS_j = t_j \sqrt{\frac{h_{jj}}{1 - h_{jj}}}

where t_j is the RStudent residual defined above.
The values of \hat{y}_{j(j)} and s^2_{(j)} are found by removing observation j before doing the calculations.
DFFITS represents the number of estimated standard errors that the fitted value changes if the jth observation is
omitted from the data set. If |DFFITS| > 1, the observation should be considered influential with regard to
prediction.
CovRatio
This diagnostic flags observations that have a major impact on the generalized variance of the regression
coefficients. A value exceeding 1.0 implies that the ith observation provides an improvement, i.e., a reduction in
the generalized variance of the coefficients. A value of CovRatio less than 1.0 flags an observation that increases
the estimated generalized variance. This is not a favorable condition.
The general formula for the CovRatio is
CovRatio_j = \frac{\det\left[ s_{(j)}^2 \left( X_{(j)}' W X_{(j)} \right)^{-1} \right]}{\det\left[ s^2 \left( X' W X \right)^{-1} \right]} = \frac{1}{1 - h_{jj}} \left[ \frac{s_{(j)}^2}{s^2} \right]^{p}
Belsley, Kuh, and Welsch (1980) give the following guidelines for the CovRatio.
If CovRatio > 1 + 3p / N then omitting this observation significantly damages the precision of at least some of the
regression estimates.
If CovRatio < 1 - 3p / N then omitting this observation significantly improves the precision of at least some of the
regression estimates.
DFBETAS Section
Row Test1 Test2 Test3 Test4 Test5
1 0.21597 0.31277 -0.03897 -0.25558 0.17228
2 -0.11231 0.01903 -0.08301 0.08707 0.00449
3 0.18220 0.23698 0.02914 -0.20754 0.06737
4 -0.17921 -0.21571 0.21569 0.23930 0.19635
5 0.39323 0.34425 0.01076 -0.36379 0.12404
6 0.09693 0.08680 -0.01100 -0.08424 -0.05343
. . . . . .
. . . . . .
. . . . . .
DFBETAS
The DFBETAS is an influence diagnostic which gives the number of standard errors that an estimated regression
coefficient changes if the jth observation is deleted. If one has N observations and p independent variables, there
are Np of these diagnostics. Sometimes, Cook’s D may not show any overall influence on the regression
coefficients, but this diagnostic gives the analyst more insight into individual coefficients. The criteria of
influence for this diagnostic are varied, but Belsley, Kuh, and Welsch (1980) recommend a cutoff of 2 / N .
Other guidelines are ±1 or ±2. The formula for DFBETAS is
dfbetas_k = \frac{b_k - b_{k,-j}}{\sqrt{MSE_{(j)} \, c_{kk}}}
where c_{kk} is the kth row and kth column element of the inverse matrix (X'X)^{-1}.
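If you want to cross-check these diagnostics outside NCSS, the influence tools in the statsmodels library cover DFBETAS and the other measures discussed above. A sketch, assuming X and y already exist as numpy arrays; the attribute names shown are from statsmodels, not from NCSS.

```python
import numpy as np
import statsmodels.api as sm

# X holds the IV columns and y the DV; both assumed to be numpy arrays.
results = sm.OLS(y, sm.add_constant(X)).fit()
influence = results.get_influence()

dfbetas = influence.dfbetas                       # one row per observation
cutoff = 2.0 / np.sqrt(len(y))                    # Belsley, Kuh, and Welsch cutoff
rows, cols = np.where(np.abs(dfbetas) > cutoff)   # flagged coefficient changes
```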
Point Cloud
A point cloud, basically in the shape of a rectangle or a horizontal band, would indicate no relationship between
the residuals and the variable plotted against them. This is the preferred condition.
Wedge
An increasing or decreasing wedge would be evidence that there is increasing or decreasing (nonconstant)
variation. A transformation of Y may correct the problem, or weighted least squares may be needed.
Bowtie
This is similar to the wedge above in that the residual plot shows a decreasing wedge in one direction while
simultaneously having an increasing wedge in the other direction. A transformation of Y may correct the problem,
or weighted least squares may be needed.
Sloping Band
This kind of residual plot suggests adding a linear version of the independent variable to the model.
Curved Band
This kind of residual plot may be indicative of a nonlinear relationship between Y and the independent variables
that was not accounted for. The solution might be to use a transformation on Y to create a linear relationship with
the X’s. Another possibility might be to add quadratic or cubic terms of a particular independent variable.
Histogram
The purpose of the histogram and density trace of the residuals is to evaluate whether they are normally
distributed. A dot plot is also given that highlights the distribution of points in each bin of the histogram. Unless
you have a large sample size, it is best not to rely on the histogram for visually evaluating normality of the
residuals. The better choice would be the normal probability plot.
Sequence Plot
Sequence plots may be useful in finding variables that are not accounted for by the regression equation. They are
especially useful if the data were taken over time.
RStudent vs Predictor(s)
This is a scatter plot of the RStudent residuals versus each independent variable. The preferred pattern is a
rectangular shape or point cloud. These plots are very helpful in visually identifying any outliers and nonlinear
patterns.
Example 2 – Bootstrapping
This section presents an example of how to generate bootstrap confidence intervals with a multiple regression
analysis. The tutorial uses the data in the IQ dataset. This example runs a regression of IQ on Test1,
Test2, and Test4.
You may follow along here by making the appropriate entries or load the completed template Example 2 by
clicking on Open Example Template from the File menu of the Multiple Regression window.
This report gives the confidence limits calculated under the assumption of normality. We have displayed it so that
we can compare these to the bootstrap confidence intervals.
This report provides bootstrap intervals of the regression coefficients and predicted values for rows 16 and 17
which did not have an IQ (Y) value. Details of the bootstrap method were presented earlier in this chapter.
It is interesting to compare these confidence intervals with those provided in the Regression Coefficient report.
The most striking difference is that the lower limit of the 95% bootstrap confidence interval for B(Test4) is now
negative. When the lower limit is negative and the upper limit is positive, we know that a hypothesis test would
not find the parameter significantly different from zero. Thus, while the regular confidence interval of B(Test4)
indicates statistical significance (since both limits are positive), the bootstrap confidence interval does not.
Note that since these results are based on 3000 random bootstrap samples, they will differ slightly from the results
you obtain when you run this report.
Original Value
This is the parameter estimate obtained from the complete sample without bootstrapping.
Bootstrap Mean
This is the average of the parameter estimates of the bootstrap samples.
Bias (BM - OV)
This is an estimate of the bias in the original estimate. It is computed by subtracting the original value from the
bootstrap mean.
Bias Corrected
This is an estimate of the parameter that has been corrected for its bias. The correction is made by subtracting the
estimated bias from the original parameter estimate.
Standard Error
This is the bootstrap method’s estimate of the standard error of the parameter estimate. It is simply the standard
deviation of the parameter estimate computed from the bootstrap estimates.
Conf. Level
This is the confidence coefficient of the bootstrap confidence interval given to the right.
Bootstrap Confidence Limits - Lower and Upper
These are the limits of the bootstrap confidence interval with the confidence coefficient given to the left. These
limits are computed using the confidence interval method (percentile or reflection) designated on the Bootstrap
panel.
Note that to be accurate, these intervals must be based on over a thousand bootstrap samples and the original
sample must be representative of the population.
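A percentile-method sketch of the bootstrap follows, under the assumption that X includes the intercept column; the reflection method and NCSS's internal sampling details are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng()

def bootstrap_coefficient_limits(X, y, n_boot=3000, conf=0.95):
    """Percentile bootstrap confidence limits for the regression coefficients."""
    n = len(y)
    original, *_ = np.linalg.lstsq(X, y, rcond=None)
    boots = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample rows with replacement
        boots[b], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    alpha = 1.0 - conf
    lower = np.percentile(boots, 100 * alpha / 2, axis=0)
    upper = np.percentile(boots, 100 * (1 - alpha / 2), axis=0)
    bias = boots.mean(axis=0) - original          # Bias (BM - OV)
    return lower, upper, bias
```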
Bootstrap Histograms
The F-Value for the Age*State interaction term is 1.869. This matches the result that was obtained by hand
calculations in the General Linear Model example. Since the probability level of 0.1761 is not significant, we
cannot reject the assumption that the three slopes are equal.