

Week 14
Interpreting Multiple Regression Output in SPSS
The best predictor of future behavior is past behavior.

Computing Multiple Regression with SPSS


A. Multiple Regression
1. Other things can affect job satisfaction besides burnout
a. Using Job Satisfaction.sav, explore how adding additional IVs changes your predictive ability
2. Multiple regression: adding more predictor variables can improve the predictive validity of your equation.
Predictor variables: level of burnout (BO); compassion fatigue (CF); compassion satisfaction (CS)
Outcome variable: Job satisfaction (JS)

Multiple Regression in SPSS


Dataset: Job Satisfaction.sav
Analyze → Regression → Linear…
Move the variable Job Satisfaction into the Dependent: window;
Move the variable Burnout into the Independent(s): window; Next
Move the variable Compassion Fatigue into the (now blank) Independent(s): window; Next
Move the variable Compassion Satisfaction into the Independent(s): window;
Method should remain set to Enter for each block
Statistics – check Confidence intervals; Descriptives; R squared change; Collinearity
diagnostics; Durbin-Watson; and Casewise diagnostics; Continue
Plots – Move ZPRED to X: & Move ZRESID to Y: & Click on Histogram and Normal
probability plot; Continue; OK
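
If you want to check the same analysis outside SPSS, the sketch below runs the three blocks with pandas and
statsmodels. This is not part of the handout's SPSS procedure, and the column names are assumptions; rename them
to match the variables in your copy of Job Satisfaction.sav.

import pandas as pd
import statsmodels.api as sm

df = pd.read_spss("Job Satisfaction.sav")   # pandas.read_spss requires the pyreadstat package

outcome = "JobSatisfaction"                 # assumed column names; adjust to your file
blocks = [
    ["Burnout"],                                                  # Block 1
    ["Burnout", "CompassionFatigue"],                             # Block 2
    ["Burnout", "CompassionFatigue", "CompassionSatisfaction"],   # Block 3
]

previous_r2 = 0.0
for i, predictors in enumerate(blocks, start=1):
    X = sm.add_constant(df[predictors])     # the added intercept is SPSS's "(Constant)"
    model = sm.OLS(df[outcome], X).fit()
    print(f"Model {i}: R2 = {model.rsquared:.3f}, "
          f"R2 change = {model.rsquared - previous_r2:.3f}, "
          f"overall F = {model.fvalue:.3f}, p = {model.f_pvalue:.3f}")
    print(model.params.round(3))            # unstandardized B values, as in Coefficients
    previous_r2 = model.rsquared
# after the loop, `model` holds the Block 3 fit (used in later sketches)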

Descriptive Statistics
                              Mean    Std. Deviation      N
Job Satisfaction            132.17          29.319       200
Burnout                     48.9096          9.02511     200
Compassion Fatigue          48.9140          8.46020     200
Compassion Satisfaction     51.3584          8.83014     200
The Descriptive Statistics table reports the mean and standard deviation of each variable, so we know what values
to expect. These values will be useful later when writing up descriptions of each variable.


Correlations
                                              Job Satisfaction    Burnout    Compassion Fatigue    Compassion Satisfaction
Pearson Correlation   Job Satisfaction               1.000         -.650            -.282                   .484
                      Burnout                         -.650         1.000            .604                   -.755
                      Compassion Fatigue              -.282          .604           1.000                   -.307
                      Compassion Satisfaction          .484         -.755           -.307                   1.000
Sig. (1-tailed)       Job Satisfaction                    .          .000            .000                    .000
                      Burnout                          .000             .            .000                    .000
                      Compassion Fatigue               .000          .000               .                    .000
                      Compassion Satisfaction          .000          .000            .000                       .
N                     all variables                     200           200             200                     200
The Correlations output shows the relationships among the variables; it can be used to create a correlation table.

Variables Entered/Removed
Model   Variables Entered          Variables Removed   Method
1       Burnout                    .                   Enter
2       Compassion Fatigue         .                   Enter
3       Compassion Satisfaction    .                   Enter
a. Dependent Variable: Job Satisfaction
b. All requested variables entered.
The Variables Entered/Removed table shows the order of variable entry in the three blocks; it is mostly for reference.

Model Summary
Model    R       R Square   Adjusted R Square   Std. Error of the Estimate   R Square Change   F Change   df1   df2   Sig. F Change   Durbin-Watson
1       .650a      .423           .420                   22.334                   .423          144.940     1    198       .000
2       .665b      .442           .436                   22.014                   .019            6.802     1    197       .010
3       .667c      .445           .436                   22.017                   .003             .932     1    196       .335            2.103
a. Predictors: (Constant), Burnout
b. Predictors: (Constant), Burnout, Compassion Fatigue
c. Predictors: (Constant), Burnout, Compassion Fatigue, Compassion Satisfaction
d. Dependent Variable: Job Satisfaction
The Model Summary tells us whether the addition of each subsequent variable significantly improved the model.
Model 1 (Burnout) is significantly better than the constant alone. Model 2 (Burnout plus Compassion Fatigue) is
significantly better than Burnout alone. Model 3 (adding Compassion Satisfaction) was not a significant
improvement over the previous model (Model 2).
The R² (and the adjusted R²) in the Model Summary tells us how much of the variance is explained by each model
(Model 1 = 42.3%; Model 2 = 44.2%; Model 3 = 44.5%).
The .003 R² change for CS means that adding CS explained only 0.3% new variance.
The Durbin-Watson statistic should be close to 2; values less than 1 or greater than 3 indicate a violation of the
assumption of independence.
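
As a quick check of where the Change Statistics come from, the snippet below recomputes the R Square Change and
F Change for Model 2 from the sums of squares reported in the ANOVA table below. This is only an arithmetic
illustration of the output above, not SPSS syntax.

# Values copied from this handout's ANOVA table
ss_total   = 171059.555
ss_reg_m1  = 72296.564        # regression SS, Model 1 (Burnout only)
ss_reg_m2  = 75593.004        # regression SS, Model 2 (Burnout + Compassion Fatigue)
ss_res_m2  = 95466.551        # residual SS, Model 2
df_res_m2  = 197

r2_m1 = ss_reg_m1 / ss_total                 # .423
r2_m2 = ss_reg_m2 / ss_total                 # .442
r2_change = r2_m2 - r2_m1                    # .019

# F Change for adding one predictor: extra regression SS over the new residual mean square
f_change = (ss_reg_m2 - ss_reg_m1) / (ss_res_m2 / df_res_m2)
print(round(r2_change, 3), round(f_change, 3))   # 0.019 6.802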


ANOVA
Model                 Sum of Squares    df    Mean Square       F       Sig.
1   Regression            72296.564      1      72296.564    144.940    .000b
    Residual              98762.991    198        498.803
    Total                171059.555    199
2   Regression            75593.004      2      37796.502     77.995    .000c
    Residual              95466.551    197        484.602
    Total                171059.555    199
3   Regression            76045.011      3      25348.337     52.290    .000d
    Residual              95014.544    196        484.768
    Total                171059.555    199
a. Dependent Variable: Job Satisfaction
b. Predictors: (Constant), Burnout
c. Predictors: (Constant), Burnout, Compassion Fatigue
d. Predictors: (Constant), Burnout, Compassion Fatigue, Compassion Satisfaction
The ANOVA table tells us whether the model fits the data well overall. The significant ANOVA for all three models
shows that each model predicts better than simply using the mean. Model 3 fits, but as we will see next in
Coefficients, adding CS does not improve its predictive ability; Model 3 is therefore less parsimonious than
Model 2.
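
For reference, the short snippet below shows how one row of the ANOVA table is built: each mean square is a sum of
squares divided by its df, and F is the regression mean square over the residual mean square. The numbers are
copied from the Model 2 rows above.

ss_regression, df_regression = 75593.004, 2
ss_residual,  df_residual    = 95466.551, 197

ms_regression = ss_regression / df_regression   # 37796.502
ms_residual   = ss_residual / df_residual       # 484.602
f_statistic   = ms_regression / ms_residual     # 77.995
print(round(ms_regression, 3), round(ms_residual, 3), round(f_statistic, 3))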

Coefficients
                                 Unstandardized Coefficients    Standardized Coefficients                     95.0% Confidence Interval for B    Collinearity Statistics
Model                               B          Std. Error        Beta                        t        Sig.    Lower Bound      Upper Bound       Tolerance       VIF
1  (Constant)                    235.459          8.724                                   26.990      .000      218.255          252.663
   Burnout                        -2.112           .175          -.650                   -12.039      .000       -2.458           -1.766           1.000        1.000
2  (Constant)                    222.645          9.904                                   22.481      .000      203.114          242.175
   Burnout                        -2.453           .217          -.755                   -11.312      .000       -2.881           -2.026            .636        1.574
   Compassion Fatigue               .603           .231           .174                     2.608      .010         .147            1.060            .636        1.574
3  (Constant)                    244.974         25.156                                    9.738      .000      195.362          294.585
   Burnout                        -2.691           .328          -.828                    -8.201      .000       -3.338           -2.044            .278        3.601
   Compassion Fatigue               .670           .241           .193                     2.775      .006         .194            1.146            .584        1.711
   Compassion Satisfaction         -.271           .281          -.082                     -.966      .335        -.825             .283            .396        2.527
a. Dependent Variable: Job Satisfaction
We can get the a (constant) and b (slope) values for each variable from the Coefficients table.
The significant t values for the Constant, BO, and CF mean that they contribute to the explanatory power of the model.
The non-significant t value for CS means that it does not contribute to the model and should be removed.
The collinearity statistics meet the assumption (VIF < 10 and Tolerance > 0.1), so multicollinearity is not a problem.
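
If you want to see where the Tolerance and VIF columns come from, the sketch below recomputes them for Model 3
from the predictor correlations reported earlier: VIF for each predictor is the matching diagonal element of the
inverse of the predictor correlation matrix, and Tolerance is 1/VIF. Expect values close to the Model 3 row above;
small differences come from the correlations being rounded to three decimals.

import numpy as np

# Predictor correlations copied from the Correlations table
# (order: Burnout, Compassion Fatigue, Compassion Satisfaction)
r = np.array([
    [1.000,  .604, -.755],
    [ .604, 1.000, -.307],
    [-.755, -.307, 1.000],
])
vif = np.diag(np.linalg.inv(r))     # VIF_j = jth diagonal of the inverted correlation matrix
tolerance = 1 / vif
for name, v, t in zip(["Burnout", "Compassion Fatigue", "Compassion Satisfaction"], vif, tolerance):
    print(f"{name}: VIF = {v:.2f}, Tolerance = {t:.2f}")   # roughly 3.6/.28, 1.7/.58, 2.5/.40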

Excluded Variables
                                                                                      Collinearity Statistics
Model                          Beta In      t       Sig.    Partial Correlation    Tolerance     VIF     Minimum Tolerance
1  Compassion Fatigue           .174b      2.608    .010           .183               .636      1.574          .636
   Compassion Satisfaction     -.015b      -.183    .855          -.013               .430      2.323          .430
2  Compassion Satisfaction     -.082c      -.966    .335          -.069               .396      2.527          .278
a. Dependent Variable: Job Satisfaction
b. Predictors in the Model: (Constant), Burnout
c. Predictors in the Model: (Constant), Burnout, Compassion Fatigue
The Excluded Variables table is not needed now.


Collinearity Diagnostics
                                                          Variance Proportions
Model   Dimension   Eigenvalue   Condition Index   (Constant)   Burnout   Compassion Fatigue   Compassion Satisfaction
1       1              1.983          1.000           .01         .01
        2               .017         10.957           .99         .99
2       1              2.971          1.000           .00         .00           .00
        2               .017         13.256           .92         .39           .06
        3               .012         15.747           .08         .61           .94
3       1              3.926          1.000           .00         .00           .00                  .00
        2               .060          8.097           .00         .05           .03                  .11
        3               .012         17.981           .04         .17           .89                  .00
        4               .002         40.572           .96         .78           .08                  .89
a. Dependent Variable: Job Satisfaction
The Collinearity Diagnostics table is not needed now.

EXAMINING THE RESIDUALS (HOW GOOD IS MY MODEL?)


A. We use a straight line on a scatterplot as a model of how the data may look
1. The straight line model may or may not “fit” well
2. We should evaluate how well our model fits the actual data by examining the residuals
B. Residuals refer to the unexplained variance (a.k.a. coefficient of alienation)
1. Example: Causes of death. We want to know why people die so that we can help people avoid those causes
and live longer, healthier lives. What are the leading causes of death in the U.S.? Make a list. Suppose the
causes on that list account for 92% of all deaths; that leaves 8% unexplained, which is the residual.

Residuals Statistics
                         Minimum    Maximum      Mean    Std. Deviation      N
Predicted Value            81.63     170.74    132.17         19.548        200
Residual                 -55.826     53.773      .000         21.851        200
Std. Predicted Value      -2.585      1.973      .000          1.000        200
Std. Residual             -2.536      2.442      .000           .992        200
a. Dependent Variable: Job Satisfaction
Residuals Statistics: The minimum and maximum values of Std. Residual should not exceed ±3.29. If they do, you
have outliers and need to address those cases and rerun the analysis.
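
A minimal sketch of the same outlier screen in Python appears below; it assumes `model` is the Block 3 fit from
the earlier statsmodels sketch (an assumption, not part of the SPSS output).

import numpy as np

std_resid = model.resid / np.sqrt(model.mse_resid)   # residual divided by the SD of the residuals
outliers = std_resid[np.abs(std_resid) > 3.29]       # the same ±3.29 cutoff described above
print(f"Std. residual range: {std_resid.min():.2f} to {std_resid.max():.2f}")
print(f"Cases beyond |3.29|: {len(outliers)}")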

C. We want to be sure that we don't have a problem with covariance or collinearity (too high an r). The best
way to see this is to examine the residual plots (i.e., the P-P plot and the scatterplot; a sketch for
producing both plots follows the plot captions below)
1. The P-P plot should show the standardized residuals falling along a straight line
2. The scatterplot of residuals should be roughly elliptical/round
a. This means that the unexplained variability (coefficient of alienation) does not have any
particular trend.
b. If it had a trend, that might mean some other variable (that we forgot about or neglected) was
influencing our model.


P-P plot: The residual values should line up along the line. This means that the residuals are normally
distributed. Good.
Scatterplot: The residual values are plotted against their predicted values. The flat regression line indicates
that they are random and supports homoscedasticity. Good.
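
The sketch below is one way to produce comparable diagnostic plots outside SPSS; it again assumes `model` is the
fitted statsmodels result from the earlier sketch, so the plots are an approximation of the SPSS output rather
than a reproduction of it.

import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

std_resid = model.resid / np.sqrt(model.mse_resid)
std_pred = (model.fittedvalues - model.fittedvalues.mean()) / model.fittedvalues.std()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
sm.ProbPlot(std_resid).ppplot(line="45", ax=ax1)      # points should hug the diagonal
ax1.set_title("Normal P-P plot of standardized residuals")
ax2.scatter(std_pred, std_resid, s=15)
ax2.axhline(0, linestyle="--")                        # no visible pattern supports homoscedasticity
ax2.set_xlabel("Standardized predicted value")
ax2.set_ylabel("Standardized residual")
plt.tight_layout()
plt.show()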

Notes on Writing Up Regression


The APA style guide gives very little guidance about reporting regression. Table 1 (below) illustrates how to
report descriptive statistics, correlations, and Cronbach's alpha reliability values for each of the variables of
interest. It packs a lot of information into a single table. Do not be afraid to combine findings from multiple
sources into a single table. Regardless of whether you make a Table 1, you should report a regression table (see
Table 2). Your table should include B (unstandardized slope), standard errors, β (beta: standardized slope), and
the t test with its significance level. Degrees of freedom for the t test are not reported in SPSS; calculate df as
(N – k – 1), where k is the number of predictor variables.

APA Write-up for Multiple Regression


A multiple regression was performed to evaluate the predictive ability of burnout, compassion
fatigue, and compassion satisfaction on job satisfaction. The 30-item Professional Quality of Life Scale
(ProQOL 5; Stamm, 2010) was used to assess the variables of burnout, compassion fatigue, and
compassion satisfaction. Job satisfaction was measured using the Job Satisfaction Survey (JSS;
Spector, 1997). Data were screened for accuracy and then for missing data; the dataset was complete.
Exploratory data analysis using a Kolmogorov-Smirnov test showed that the outcome variable
of job satisfaction was normally distributed, D(200) = .046, p = .20. Residuals met the assumption of
independence (Durbin-Watson statistic = 2.103). Linearity and homoscedasticity were assessed by a
plot of standardized residuals against the predicted values. Collinearity statistics indicated that
multicollinearity was not a problem (burnout, tolerance = .28, VIF = 3.60; compassion fatigue,
tolerance = .58, VIF = 1.71; compassion satisfaction, tolerance = .40, VIF = 2.53), and no bivariate
outliers were detected (Std. Residual Min. = -2.54, Std. Residual Max. = 2.44).

Descriptive statistics, correlations, and Cronbach’s alpha reliability values for the four variables
of interest are contained in Table 1. The internal consistency alphas for each of the variables of interest
ranged from α = .848 (compassion satisfaction) to α = .944 (burnout). According to Nunnally and
Bernstein (1994), alpha values exceeding α = .70 indicate that the instruments used in the study were
adequately reliable.
Burnout and compassion fatigue statistically significantly predicted job satisfaction scores, F(2,
197) = 77.99, p < .001, R² = .44, but adding compassion satisfaction did not significantly improve the
model, b = -.27, t(196) = -.97, p = .34. Table 2 contains the multiple regression results for the four
variables of interest. These results indicate that both burnout and compassion fatigue are significant
predictors of job satisfaction, but compassion satisfaction is not.

Table 1
Descriptive Statistics for Study Variables, n = 200
Variable Mean SD 1 2 3 4
1. Job Satisfaction 132.17 29.32 (.942)
2. Burnout 48.91 9.03 -.650** (.944)
3. Compassion Fatigue 48.91 8.46 -.282** .604** (.927)
4. Compassion Satisfaction 51.36 8.83 .484** -.755** -.307** (.848)
Note. **Correlation is statistically significant at the .01 level. Job satisfaction scores could range from 36 to 216. Burnout,
compassion fatigue, and compassion satisfaction scores could range from 10 to 50. Items in parentheses on the diagonal
represent Cronbach’s alpha reliability values.

Table 2
Multiple Regression Results for Study Variables, n = 200
Variable B SE B Beta t Sig.
Step 1
Burnout -2.11 0.18 -0.65 -12.04 <.001
Step 2
Burnout -2.45 0.22 -0.76 -11.31 <.001
Compassion Fatigue 0.60 0.23 0.17 2.61 .010
Step 3
Burnout -2.69 0.33 -0.83 -8.20 <.001
Compassion Fatigue 0.67 0.24 0.19 2.78 .006
Compassion Satisfaction -0.27 0.28 -0.08 -0.97 .335
Note. R2 for Step 1 = .423, for Step 2 = .442, for Step 3 = .445.


A Note About Regression Entry Methods


SPSS allows you to choose how the regression equation will be constructed: the Method. There are five
methods for entering independent variables into the regression equation:
Enter: The forced entry option. SPSS enters all specified variables at one time regardless of their significance
levels. (Generally considered the best approach with the most control for the researcher)
Forward: Enters variables one at a time, based on a preset significance value (.05) to enter (a rough sketch of this procedure appears after this list).
Backward: Enters all variables at one time and then removes variables one at a time based on a preset
significance value (commonly .05) to remove.
Stepwise: Combines both forward and backward procedures. This used to be the most frequently used of
the regression methods, but has major flaws. (Generally considered the worst approach.) Stepwise should
only be used as an exploratory method; when IVs are highly correlated, entering a new variable might
cause another previously significant variable to become non-significant.
Remove: The forced removal option. SPSS enters all specified variables at one time regardless of their
significance levels. The researcher may then specify one or more variables to remove. SPSS removes the
specified variables and runs the analysis again.
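
As an illustration only, the sketch below approximates the Forward method described above with statsmodels: at
each step it adds the candidate predictor with the smallest p-value, provided that p-value is below the .05 entry
criterion. SPSS's actual criterion is the probability of F-to-enter, which for a single added predictor matches
this t-test p-value. The sketch assumes the `df` and column names from the first sketch.

import statsmodels.api as sm

def forward_selection(df, outcome, candidates, p_enter=0.05):
    selected = []
    while candidates:
        # p-value each remaining candidate would have if entered next
        pvals = {}
        for var in candidates:
            X = sm.add_constant(df[selected + [var]])
            pvals[var] = sm.OLS(df[outcome], X).fit().pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break                                   # nothing left meets the entry criterion
        selected.append(best)
        candidates = [c for c in candidates if c != best]
    return selected

print(forward_selection(df, "JobSatisfaction",
                        ["Burnout", "CompassionFatigue", "CompassionSatisfaction"]))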

References
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence
Erlbaum Associates.
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
Spector, P. E. (1997). Job satisfaction: Application, assessment, causes, and consequences. Thousand
Oaks, CA: Sage.
Stamm, B. H. (2010). The concise ProQOL manual (2nd ed.). Pocatello, ID: ProQOL.org.
All IBM® SPSS® Statistics software screen reprints courtesy of International Business Machines Corporation, © International Business Machines Corporation.

© 2019 Todd Daniel, Ph.D., Research by Design, LLC
