Using Multivariate Statistics, 7th Edition

The document is a reference for the book 'Using Multivariate Statistics - 7th Edition' by Barbara G. Tabachnick and Linda S. Fidell, published by Pearson Education. It includes acknowledgments, copyright information, and a detailed table of contents outlining various statistical techniques and concepts related to multivariate analysis. The book serves as a comprehensive guide for understanding and applying multivariate statistical methods in research.



Portfolio Manager: Tanimaa Mehra
Content Producer: Kani Kapoor
Portfolio Manager Assistant: Anna Austin
Product Marketer: Jessica Quazza
Art/Designer: Integra Software Services Pvt. Ltd.
Full-Service Project Manager: Integra Software Services Pvt. Ltd.
Compositor: Integra Software Services Pvt. Ltd.
Printer/Binder: LSC Communications, Inc.
Cover Printer: Phoenix Color/Hagerstown
Cover Design: Lumina Datamatics, Inc.
Cover Art: Shutterstock

Acknowledgments of third-party content appear on pages within the text, which constitutes an extension of this copyright page.

Copyright © 2019, 2013, 2007 by Pearson Education, Inc. or its affiliates. All Rights Reserved. Printed in the United States of America. This publication is protected by copyright, and permission should be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise. For information regarding permissions, request forms and the appropriate contacts within the Pearson Education Global Rights & Permissions department, please visit www.pearsoned.com/permissions/.

PEARSON and ALWAYS LEARNING are exclusive trademarks owned by Pearson Education, Inc. or its affiliates in the U.S. and/or other countries.

Unless otherwise indicated herein, any third-party trademarks that may appear in this work are the property of their respective owners, and any references to third-party trademarks, logos or other trade dress are for demonstrative or descriptive purposes only. Such references are not intended to imply any sponsorship, endorsement, authorization, or promotion of Pearson's products by the owners of such marks, or any relationship between the owner and Pearson Education, Inc. or its affiliates, authors, licensees or distributors.

Many of the designations by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed in initial caps or all caps.

Library of Congress Cataloging-in-Publication Data

Names: Tabachnick, Barbara G., author. | Fidell, Linda S., author.
Title: Using multivariate statistics / Barbara G. Tabachnick, California State University, Northridge, Linda S. Fidell, California State University, Northridge.
Description: Seventh edition. | Boston: Pearson, [2019] | Chapter 14 by Jodie B. Ullman.
Identifiers: LCCN 2017040173 | ISBN 9780134790541 | ISBN 0134790545
Subjects: LCSH: Multivariate analysis. | Statistics.
Classification: LCC QA278 .T3 2019 | DDC 519.5/35--dc23
LC record available at https://lccn.loc.gov/2017040173

1 18

Pearson Books a la Carte


ISBN-10: 0-13-479054-5
ISBN-13: 978-0-13-479054-1
Contents

Preface

1 Introduction
  1.1 Multivariate Statistics: Why?
    1.1.1 The Domain of Multivariate Statistics: Numbers of IVs and DVs
    1.1.2 Experimental and Nonexperimental Research
    1.1.3 Computers and Multivariate Statistics
    1.1.4 Garbage In, Roses Out?
  1.2 Some Useful Definitions
    1.2.1 Continuous, Discrete, and Dichotomous Data
    1.2.2 Samples and Populations
    1.2.3 Descriptive and Inferential Statistics
    1.2.4 Orthogonality: Standard and Sequential Analyses
  1.3 Linear Combinations of Variables
  1.4 Number and Nature of Variables to Include
  1.5 Statistical Power
  1.6 Data Appropriate for Multivariate Statistics
    1.6.1 The Data Matrix
    1.6.2 The Correlation Matrix
    1.6.3 The Variance-Covariance Matrix
    1.6.4 The Sum-of-Squares and Cross-Products Matrix
    1.6.5 Residuals
  1.7 Organization of the Book

2 A Guide to Statistical Techniques: Using the Book
  2.1 Research Questions and Associated Techniques
    2.1.1 Degree of Relationship Among Variables
      2.1.1.1 Bivariate r
      2.1.1.2 Multiple R
      2.1.1.3 Sequential R
      2.1.1.4 Canonical R
      2.1.1.5 Multiway Frequency Analysis
      2.1.1.6 Multilevel Modeling
    2.1.2 Significance of Group Differences
      2.1.2.1 One-Way ANOVA and t Test
      2.1.2.2 One-Way ANCOVA
      2.1.2.3 Factorial ANOVA
      2.1.2.4 Factorial ANCOVA
      2.1.2.5 Hotelling's T²
      2.1.2.6 One-Way MANOVA
      2.1.2.7 One-Way MANCOVA
      2.1.2.8 Factorial MANOVA
      2.1.2.9 Factorial MANCOVA
      2.1.2.10 Profile Analysis of Repeated Measures
    2.1.3 Prediction of Group Membership
      2.1.3.1 One-Way Discriminant Analysis
      2.1.3.2 Sequential One-Way Discriminant Analysis
      2.1.3.3 Multiway Frequency Analysis (Logit)
      2.1.3.4 Logistic Regression
      2.1.3.5 Sequential Logistic Regression
      2.1.3.6 Factorial Discriminant Analysis
      2.1.3.7 Sequential Factorial Discriminant Analysis
    2.1.4 Structure
      2.1.4.1 Principal Components
      2.1.4.2 Factor Analysis
      2.1.4.3 Structural Equation Modeling
    2.1.5 Time Course of Events
      2.1.5.1 Survival/Failure Analysis
      2.1.5.2 Time-Series Analysis
  2.2 Some Further Comparisons
  2.3 A Decision Tree
  2.4 Technique Chapters
  2.5 Preliminary Check of the Data

3 Review of Univariate and Bivariate Statistics
  3.1 Hypothesis Testing
    3.1.1 One-Sample z Test as Prototype
    3.1.2 Power
    3.1.3 Extensions of the Model
    3.1.4 Controversy Surrounding Significance Testing
  3.2 Analysis of Variance
    3.2.1 One-Way Between-Subjects ANOVA
    3.2.2 Factorial Between-Subjects ANOVA
    3.2.3 Within-Subjects ANOVA
    3.2.4 Mixed Between-Within-Subjects ANOVA
    3.2.5 Design Complexity
      3.2.5.1 Nesting
      3.2.5.2 Latin-Square Designs
      3.2.5.3 Unequal n and Nonorthogonality
      3.2.5.4 Fixed and Random Effects
    3.2.6 Specific Comparisons
      3.2.6.1 Weighting Coefficients for Comparisons
      3.2.6.2 Orthogonality of Weighting Coefficients
      3.2.6.3 Obtained F for Comparisons
      3.2.6.4 Critical F for Planned Comparisons
      3.2.6.5 Critical F for Post Hoc Comparisons
  3.3 Parameter Estimation
  3.4 Effect Size
  3.5 Bivariate Statistics: Correlation and Regression
    3.5.1 Correlation
    3.5.2 Regression
  3.6 Chi-Square Analysis

4 Cleaning Up Your Act: Screening Data Prior to Analysis
  4.1 Important Issues in Data Screening
    4.1.1 Accuracy of Data File
    4.1.2 Honest Correlations
      4.1.2.1 Inflated Correlation
      4.1.2.2 Deflated Correlation
    4.1.3 Missing Data
      4.1.3.1 Deleting Cases or Variables
      4.1.3.2 Estimating Missing Data
      4.1.3.3 Using a Missing Data Correlation Matrix
      4.1.3.4 Treating Missing Data as Data
      4.1.3.5 Repeating Analyses with and without Missing Data
      4.1.3.6 Choosing Among Methods for Dealing with Missing Data
    4.1.4 Outliers
      4.1.4.1 Detecting Univariate and Multivariate Outliers
      4.1.4.2 Describing Outliers
      4.1.4.3 Reducing the Influence of Outliers
      4.1.4.4 Outliers in a Solution
    4.1.5 Normality, Linearity, and Homoscedasticity
      4.1.5.1 Normality
      4.1.5.2 Linearity
      4.1.5.3 Homoscedasticity, Homogeneity of Variance, and Homogeneity of Variance-Covariance Matrices
    4.1.6 Common Data Transformations
    4.1.7 Multicollinearity and Singularity
    4.1.8 A Checklist and Some Practical Recommendations
  4.2 Complete Examples of Data Screening
    4.2.1 Screening Ungrouped Data
      4.2.1.1 Accuracy of Input, Missing Data, Distributions, and Univariate Outliers
      4.2.1.2 Linearity and Homoscedasticity
      4.2.1.3 Transformation
      4.2.1.4 Detecting Multivariate Outliers
      4.2.1.5 Variables Causing Cases to Be Outliers
      4.2.1.6 Multicollinearity
    4.2.2 Screening Grouped Data
      4.2.2.1 Accuracy of Input, Missing Data, Distributions, Homogeneity of Variance, and Univariate Outliers
      4.2.2.2 Linearity
      4.2.2.3 Multivariate Outliers
      4.2.2.4 Variables Causing Cases to Be Outliers
      4.2.2.5 Multicollinearity

5 Multiple Regression
  5.1 General Purpose and Description
  5.2 Kinds of Research Questions
    5.2.1 Degree of Relationship
    5.2.2 Importance of IVs
    5.2.3 Adding IVs
    5.2.4 Changing IVs
    5.2.5 Contingencies Among IVs
    5.2.6 Comparing Sets of IVs
    5.2.7 Predicting DV Scores for Members of a New Sample
    5.2.8 Parameter Estimates
  5.3 Limitations to Regression Analyses
    5.3.1 Theoretical Issues
    5.3.2 Practical Issues
      5.3.2.1 Ratio of Cases to IVs
      5.3.2.2 Absence of Outliers Among the IVs and on the DV
      5.3.2.3 Absence of Multicollinearity and Singularity
      5.3.2.4 Normality, Linearity, and Homoscedasticity of Residuals
      5.3.2.5 Independence of Errors
      5.3.2.6 Absence of Outliers in the Solution
  5.4 Fundamental Equations for Multiple Regression
    5.4.1 General Linear Equations
    5.4.2 Matrix Equations
    5.4.3 Computer Analyses of Small-Sample Example
  5.5 Major Types of Multiple Regression
    5.5.1 Standard Multiple Regression
    5.5.2 Sequential Multiple Regression
    5.5.3 Statistical (Stepwise) Regression
    5.5.4 Choosing Among Regression Strategies
  5.6 Some Important Issues
    5.6.1 Importance of IVs
      5.6.1.1 Standard Multiple Regression
      5.6.1.2 Sequential or Statistical Regression
      5.6.1.3 Commonality Analysis
      5.6.1.4 Relative Importance Analysis
    5.6.2 Statistical Inference
      5.6.2.1 Test for Multiple R
      5.6.2.2 Test of Regression Components
      5.6.2.3 Test of Added Subset of IVs
      5.6.2.4 Confidence Limits
      5.6.2.5 Comparing Two Sets of Predictors
    5.6.3 Adjustment of R²
    5.6.4 Suppressor Variables
    5.6.5 Regression Approach to ANOVA
    5.6.6 Centering When Interactions and Powers of IVs Are Included
    5.6.7 Mediation in Causal Sequence
  5.7 Complete Examples of Regression Analysis
    5.7.1 Evaluation of Assumptions
      5.7.1.1 Ratio of Cases to IVs
      5.7.1.2 Normality, Linearity, Homoscedasticity, and Independence of Residuals
      5.7.1.3 Outliers
      5.7.1.4 Multicollinearity and Singularity
    5.7.2 Standard Multiple Regression
    5.7.3 Sequential Regression
    5.7.4 Example of Standard Multiple Regression with Missing Values Multiply Imputed
  5.8 Comparison of Programs
    5.8.1 IBM SPSS Package
    5.8.2 SAS System
    5.8.3 SYSTAT System

6 Analysis of Covariance
  6.1 General Purpose and Description
  6.2 Kinds of Research Questions
    6.2.1 Main Effects of IVs
    6.2.2 Interactions Among IVs
    6.2.3 Specific Comparisons and Trend Analysis
    6.2.4 Effects of Covariates
    6.2.5 Effect Size
    6.2.6 Parameter Estimates
  6.3 Limitations to Analysis of Covariance
    6.3.1 Theoretical Issues
    6.3.2 Practical Issues
      6.3.2.1 Unequal Sample Sizes, Missing Data, and Ratio of Cases to IVs
      6.3.2.2 Absence of Outliers
      6.3.2.3 Absence of Multicollinearity and Singularity
      6.3.2.4 Normality of Sampling Distributions
      6.3.2.5 Homogeneity of Variance
      6.3.2.6 Linearity
      6.3.2.7 Homogeneity of Regression
      6.3.2.8 Reliability of Covariates
  6.4 Fundamental Equations for Analysis of Covariance
    6.4.1 Sums of Squares and Cross-Products
    6.4.2 Significance Test and Effect Size
    6.4.3 Computer Analyses of Small-Sample Example
  6.5 Some Important Issues
    6.5.1 Choosing Covariates
    6.5.2 Evaluation of Covariates
    6.5.3 Test for Homogeneity of Regression
    6.5.4 Design Complexity
      6.5.4.1 Within-Subjects and Mixed Within-Between Designs
      6.5.4.2 Unequal Sample Sizes
      6.5.4.3 Specific Comparisons and Trend Analysis
      6.5.4.4 Effect Size
    6.5.5 Alternatives to ANCOVA
  6.6 Complete Example of Analysis of Covariance
    6.6.1 Evaluation of Assumptions
      6.6.1.1 Unequal n and Missing Data
      6.6.1.2 Normality
      6.6.1.3 Linearity
      6.6.1.4 Outliers
      6.6.1.5 Multicollinearity and Singularity
      6.6.1.6 Homogeneity of Variance
      6.6.1.7 Homogeneity of Regression
      6.6.1.8 Reliability of Covariates
    6.6.2 Analysis of Covariance
      6.6.2.1 Main Analysis
      6.6.2.2 Evaluation of Covariates
      6.6.2.3 Homogeneity of Regression Run
  6.7 Comparison of Programs
    6.7.1 IBM SPSS Package
    6.7.2 SAS System
    6.7.3 SYSTAT System

7 Multivariate Analysis of Variance and Covariance
  7.1 General Purpose and Description
  7.2 Kinds of Research Questions
    7.2.1 Main Effects of IVs
    7.2.2 Interactions Among IVs
    7.2.3 Importance of DVs
    7.2.4 Parameter Estimates
    7.2.5 Specific Comparisons and Trend Analysis
    7.2.6 Effect Size
    7.2.7 Effects of Covariates
    7.2.8 Repeated-Measures Analysis of Variance
  7.3 Limitations to Multivariate Analysis of Variance and Covariance
    7.3.1 Theoretical Issues
    7.3.2 Practical Issues
      7.3.2.1 Unequal Sample Sizes, Missing Data, and Power
      7.3.2.2 Multivariate Normality
      7.3.2.3 Absence of Outliers
      7.3.2.4 Homogeneity of Variance-Covariance Matrices
      7.3.2.5 Linearity
      7.3.2.6 Homogeneity of Regression
      7.3.2.7 Reliability of Covariates
      7.3.2.8 Absence of Multicollinearity and Singularity
  7.4 Fundamental Equations for Multivariate Analysis of Variance and Covariance
    7.4.1 Multivariate Analysis of Variance
    7.4.2 Computer Analyses of Small-Sample Example
    7.4.3 Multivariate Analysis of Covariance
  7.5 Some Important Issues
    7.5.1 MANOVA Versus ANOVAs
    7.5.2 Criteria for Statistical Inference
    7.5.3 Assessing DVs
      7.5.3.1 Univariate F
      7.5.3.2 Roy-Bargmann Stepdown Analysis
      7.5.3.3 Using Discriminant Analysis
      7.5.3.4 Choosing Among Strategies for Assessing DVs
    7.5.4 Specific Comparisons and Trend Analysis
    7.5.5 Design Complexity
      7.5.5.1 Within-Subjects and Between-Within Designs
      7.5.5.2 Unequal Sample Sizes
  7.6 Complete Examples of Multivariate Analysis of Variance and Covariance
    7.6.1 Evaluation of Assumptions
      7.6.1.1 Unequal Sample Sizes and Missing Data
      7.6.1.2 Multivariate Normality
      7.6.1.3 Linearity
      7.6.1.4 Outliers
      7.6.1.5 Homogeneity of Variance-Covariance Matrices
      7.6.1.6 Homogeneity of Regression
      7.6.1.7 Reliability of Covariates
      7.6.1.8 Multicollinearity and Singularity
    7.6.2 Multivariate Analysis of Variance
    7.6.3 Multivariate Analysis of Covariance
      7.6.3.1 Assessing Covariates
      7.6.3.2 Assessing DVs
  7.7 Comparison of Programs
    7.7.1 IBM SPSS Package
    7.7.2 SAS System
    7.7.3 SYSTAT System

8 Profile Analysis: The Multivariate Approach to Repeated Measures
  8.1 General Purpose and Description
  8.2 Kinds of Research Questions
    8.2.1 Parallelism of Profiles
    8.2.2 Overall Difference Among Groups
    8.2.3 Flatness of Profiles
    8.2.4 Contrasts Following Profile Analysis
    8.2.5 Parameter Estimates
    8.2.6 Effect Size
  8.3 Limitations to Profile Analysis
    8.3.1 Theoretical Issues
    8.3.2 Practical Issues
      8.3.2.1 Sample Size, Missing Data, and Power
      8.3.2.2 Multivariate Normality
      8.3.2.3 Absence of Outliers
      8.3.2.4 Homogeneity of Variance-Covariance Matrices
      8.3.2.5 Linearity
      8.3.2.6 Absence of Multicollinearity and Singularity
  8.4 Fundamental Equations for Profile Analysis
    8.4.1 Differences in Levels
    8.4.2 Parallelism
    8.4.3 Flatness
    8.4.4 Computer Analyses of Small-Sample Example
  8.5 Some Important Issues
    8.5.1 Univariate Versus Multivariate Approach to Repeated Measures
    8.5.2 Contrasts in Profile Analysis
      8.5.2.1 Parallelism and Flatness Significant, Levels Not Significant (Simple-Effects Analysis)
      8.5.2.2 Parallelism and Levels Significant, Flatness Not Significant (Simple-Effects Analysis)
      8.5.2.3 Parallelism, Levels, and Flatness Significant (Interaction Contrasts)
      8.5.2.4 Only Parallelism Significant
    8.5.3 Doubly Multivariate Designs
    8.5.4 Classifying Profiles
    8.5.5 Imputation of Missing Values
  8.6 Complete Examples of Profile Analysis
    8.6.1 Profile Analysis of Subscales of the WISC
      8.6.1.1 Evaluation of Assumptions
      8.6.1.2 Profile Analysis
    8.6.2 Doubly Multivariate Analysis of Reaction Time
      8.6.2.1 Evaluation of Assumptions
      8.6.2.2 Doubly Multivariate Analysis of Slope and Intercept
  8.7 Comparison of Programs
    8.7.1 IBM SPSS Package
    8.7.2 SAS System
    8.7.3 SYSTAT System

9 Discriminant Analysis
  9.1 General Purpose and Description
  9.2 Kinds of Research Questions
    9.2.1 Significance of Prediction
    9.2.2 Number of Significant Discriminant Functions
    9.2.3 Dimensions of Discrimination
    9.2.4 Classification Functions
    9.2.5 Adequacy of Classification
    9.2.6 Effect Size
    9.2.7 Importance of Predictor Variables
    9.2.8 Significance of Prediction with Covariates
    9.2.9 Estimation of Group Means
  9.3 Limitations to Discriminant Analysis
    9.3.1 Theoretical Issues
    9.3.2 Practical Issues
      9.3.2.1 Unequal Sample Sizes, Missing Data, and Power
      9.3.2.2 Multivariate Normality
      9.3.2.3 Absence of Outliers
      9.3.2.4 Homogeneity of Variance-Covariance Matrices
      9.3.2.5 Linearity
      9.3.2.6 Absence of Multicollinearity and Singularity
  9.4 Fundamental Equations for Discriminant Analysis
    9.4.1 Derivation and Test of Discriminant Functions
    9.4.2 Classification
    9.4.3 Computer Analyses of Small-Sample Example
  9.5 Types of Discriminant Analyses
    9.5.1 Direct Discriminant Analysis
    9.5.2 Sequential Discriminant Analysis
    9.5.3 Stepwise (Statistical) Discriminant Analysis
  9.6 Some Important Issues
    9.6.1 Statistical Inference
      9.6.1.1 Criteria for Overall Statistical Significance
      9.6.1.2 Stepping Methods
    9.6.2 Number of Discriminant Functions
    9.6.3 Interpreting Discriminant Functions
      9.6.3.1 Discriminant Function Plots
      9.6.3.2 Structure Matrix of Loadings
    9.6.4 Evaluating Predictor Variables
    9.6.5 Effect Size
    9.6.6 Design Complexity: Factorial Designs
    9.6.7 Use of Classification Procedures
      9.6.7.1 Cross-Validation and New Cases
      9.6.7.2 Jackknifed Classification
      9.6.7.3 Evaluating Improvement in Classification
  9.7 Complete Example of Discriminant Analysis
    9.7.1 Evaluation of Assumptions
      9.7.1.1 Unequal Sample Sizes and Missing Data
      9.7.1.2 Multivariate Normality
      9.7.1.3 Linearity
      9.7.1.4 Outliers
      9.7.1.5 Homogeneity of Variance-Covariance Matrices
      9.7.1.6 Multicollinearity and Singularity
    9.7.2 Direct Discriminant Analysis
  9.8 Comparison of Programs
    9.8.1 IBM SPSS Package
    9.8.2 SAS System
    9.8.3 SYSTAT System

10 Logistic Regression
  10.1 General Purpose and Description
  10.2 Kinds of Research Questions
    10.2.1 Prediction of Group Membership or Outcome
    10.2.2 Importance of Predictors
    10.2.3 Interactions Among Predictors
    10.2.4 Parameter Estimates
    10.2.5 Classification of Cases
    10.2.6 Significance of Prediction with Covariates
    10.2.7 Effect Size
  10.3 Limitations to Logistic Regression Analysis
    10.3.1 Theoretical Issues
    10.3.2 Practical Issues
      10.3.2.1 Ratio of Cases to Variables
      10.3.2.2 Adequacy of Expected Frequencies and Power
      10.3.2.3 Linearity in the Logit
      10.3.2.4 Absence of Multicollinearity
      10.3.2.5 Absence of Outliers in the Solution
      10.3.2.6 Independence of Errors
  10.4 Fundamental Equations for Logistic Regression
    10.4.1 Testing and Interpreting Coefficients
    10.4.2 Goodness of Fit
    10.4.3 Comparing Models
    10.4.4 Interpretation and Analysis of Residuals
    10.4.5 Computer Analyses of Small-Sample Example
  10.5 Types of Logistic Regression
    10.5.1 Direct Logistic Regression
    10.5.2 Sequential Logistic Regression
    10.5.3 Statistical (Stepwise) Logistic Regression
    10.5.4 Probit and Other Analyses
  10.6 Some Important Issues
    10.6.1 Statistical Inference
      10.6.1.1 Assessing Goodness of Fit of Models
      10.6.1.2 Tests of Individual Predictors
    10.6.2 Effect Sizes
      10.6.2.1 Effect Size for a Model
      10.6.2.2 Effect Sizes for Predictors
    10.6.3 Interpretation of Coefficients Using Odds
    10.6.4 Coding Outcome and Predictor Categories
    10.6.5 Number and Type of Outcome Categories
    10.6.6 Classification of Cases
    10.6.7 Hierarchical and Nonhierarchical Analysis
    10.6.8 Importance of Predictors
    10.6.9 Logistic Regression for Matched Groups
  10.7 Complete Examples of Logistic Regression
    10.7.1 Evaluation of Limitations
      10.7.1.1 Ratio of Cases to Variables and Missing Data
      10.7.1.2 Multicollinearity
      10.7.1.3 Outliers in the Solution
    10.7.2 Direct Logistic Regression with Two-Category Outcome and Continuous Predictors
      10.7.2.1 Limitation: Linearity in the Logit
      10.7.2.2 Direct Logistic Regression with Two-Category Outcome
    10.7.3 Sequential Logistic Regression with Three Categories of Outcome
      10.7.3.1 Limitations of Multinomial Logistic Regression
      10.7.3.2 Sequential Multinomial Logistic Regression
  10.8 Comparison of Programs
    10.8.1 IBM SPSS Package
    10.8.2 SAS System
    10.8.3 SYSTAT System

11 Survival/Failure Analysis
  11.1 General Purpose and Description
  11.2 Kinds of Research Questions
    11.2.1 Proportions Surviving at Various Times
    11.2.2 Group Differences in Survival
    11.2.3 Survival Time with Covariates
      11.2.3.1 Treatment Effects
      11.2.3.2 Importance of Covariates
      11.2.3.3 Parameter Estimates
      11.2.3.4 Contingencies Among Covariates
      11.2.3.5 Effect Size and Power
  11.3 Limitations to Survival Analysis
    11.3.1 Theoretical Issues
    11.3.2 Practical Issues
      11.3.2.1 Sample Size and Missing Data
      11.3.2.2 Normality of Sampling Distributions, Linearity, and Homoscedasticity
      11.3.2.3 Absence of Outliers
      11.3.2.4 Differences Between Withdrawn and Remaining Cases
      11.3.2.5 Change in Survival Conditions over Time
      11.3.2.6 Proportionality of Hazards
      11.3.2.7 Absence of Multicollinearity
  11.4 Fundamental Equations for Survival Analysis
    11.4.1 Life Tables
    11.4.2 Standard Error of Cumulative Proportion Surviving
    11.4.3 Hazard and Density Functions
    11.4.4 Plot of Life Tables
    11.4.5 Test for Group Differences
    11.4.6 Computer Analyses of Small-Sample Example
  11.5 Types of Survival Analyses
    11.5.1 Actuarial and Product-Limit Life Tables and Survivor Functions
    11.5.2 Prediction of Group Survival Times from Covariates
      11.5.2.1 Direct, Sequential, and Statistical Analysis
      11.5.2.2 Cox Proportional-Hazards Model
      11.5.2.3 Accelerated Failure-Time Models
      11.5.2.4 Choosing a Method
  11.6 Some Important Issues
    11.6.1 Proportionality of Hazards
    11.6.2 Censored Data
      11.6.2.1 Right-Censored Data
      11.6.2.2 Other Forms of Censoring
    11.6.3 Effect Size and Power
    11.6.4 Statistical Criteria
      11.6.4.1 Test Statistics for Group Differences in Survival Functions
      11.6.4.2 Test Statistics for Prediction from Covariates
    11.6.5 Predicting Survival Rate
      11.6.5.1 Regression Coefficients (Parameter Estimates)
      11.6.5.2 Hazard Ratios
      11.6.5.3 Expected Survival Rates
  11.7 Complete Example of Survival Analysis
    11.7.1 Evaluation of Assumptions
      11.7.1.1 Accuracy of Input, Adequacy of Sample Size, Missing Data, and Distributions
      11.7.1.2 Outliers
      11.7.1.3 Differences Between Withdrawn and Remaining Cases
      11.7.1.4 Change in Survival Experience over Time
      11.7.1.5 Proportionality of Hazards
      11.7.1.6 Multicollinearity
    11.7.2 Cox Regression Survival Analysis
      11.7.2.1 Effect of Drug Treatment
      11.7.2.2 Evaluation of Other Covariates
  11.8 Comparison of Programs
    11.8.1 SAS System
    11.8.2 IBM SPSS Package
    11.8.3 SYSTAT System

12 Canonical Correlation
  12.1 General Purpose and Description
  12.2 Kinds of Research Questions
    12.2.1 Number of Canonical Variate Pairs
    12.2.2 Interpretation of Canonical Variates
    12.2.3 Importance of Canonical Variates and Predictors
    12.2.4 Canonical Variate Scores
  12.3 Limitations
    12.3.1 Theoretical Limitations
    12.3.2 Practical Issues
      12.3.2.1 Ratio of Cases to IVs
      12.3.2.2 Normality, Linearity, and Homoscedasticity
      12.3.2.3 Missing Data
      12.3.2.4 Absence of Outliers
      12.3.2.5 Absence of Multicollinearity and Singularity
  12.4 Fundamental Equations for Canonical Correlation
    12.4.1 Eigenvalues and Eigenvectors
    12.4.2 Matrix Equations
    12.4.3 Proportions of Variance Extracted
    12.4.4 Computer Analyses of Small-Sample Example
  12.5 Some Important Issues
    12.5.1 Importance of Canonical Variates
    12.5.2 Interpretation of Canonical Variates
  12.6 Complete Example of Canonical Correlation
    12.6.1 Evaluation of Assumptions
      12.6.1.1 Missing Data
      12.6.1.2 Normality, Linearity, and Homoscedasticity
      12.6.1.3 Outliers
      12.6.1.4 Multicollinearity and Singularity
    12.6.2 Canonical Correlation
  12.7 Comparison of Programs
    12.7.1 SAS System
    12.7.2 IBM SPSS Package
    12.7.3 SYSTAT System

13 Principal Components and Factor Analysis
  13.1 General Purpose and Description
  13.2 Kinds of Research Questions
    13.2.1 Number of Factors
    13.2.2 Nature of Factors
    13.2.3 Importance of Solutions and Factors
    13.2.4 Testing Theory in FA
    13.2.5 Estimating Scores on Factors
  13.3 Limitations
    13.3.1 Theoretical Issues
    13.3.2 Practical Issues
      13.3.2.1 Sample Size and Missing Data
      13.3.2.2 Normality
      13.3.2.3 Linearity
      13.3.2.4 Absence of Outliers Among Cases
      13.3.2.5 Absence of Multicollinearity and Singularity
      13.3.2.6 Factorability of R
      13.3.2.7 Absence of Outliers Among Variables
  13.4 Fundamental Equations for Factor Analysis
    13.4.1 Extraction
    13.4.2 Orthogonal Rotation
    13.4.3 Communalities, Variance, and Covariance
    13.4.4 Factor Scores
    13.4.5 Oblique Rotation
    13.4.6 Computer Analyses of Small-Sample Example
  13.5 Major Types of Factor Analyses
    13.5.1 Factor Extraction Techniques
      13.5.1.1 PCA Versus FA
      13.5.1.2 Principal Components
      13.5.1.3 Principal Factors
      13.5.1.4 Image Factor Extraction
      13.5.1.5 Maximum Likelihood Factor Extraction
      13.5.1.6 Unweighted Least Squares Factoring
      13.5.1.7 Generalized (Weighted) Least Squares Factoring
      13.5.1.8 Alpha Factoring
    13.5.2 Rotation
      13.5.2.1 Orthogonal Rotation
      13.5.2.2 Oblique Rotation
      13.5.2.3 Geometric Interpretation
    13.5.3 Some Practical Recommendations
  13.6 Some Important Issues
    13.6.1 Estimates of Communalities
    13.6.2 Adequacy of Extraction and Number of Factors
    13.6.3 Adequacy of Rotation and Simple Structure
    13.6.4 Importance and Internal Consistency of Factors
    13.6.5 Interpretation of Factors
    13.6.6 Factor Scores
    13.6.7 Comparisons Among Solutions and Groups
  13.7 Complete Example of FA
    13.7.1 Evaluation of Limitations
      13.7.1.1 Sample Size and Missing Data
      13.7.1.2 Normality
      13.7.1.3 Linearity
      13.7.1.4 Outliers
      13.7.1.5 Multicollinearity and Singularity
      13.7.1.6 Factorability of R
      13.7.1.7 Outliers Among Variables
    13.7.2 Principal Factors Extraction with Varimax Rotation
  13.8 Comparison of Programs
    13.8.1 IBM SPSS Package
    13.8.2 SAS System
    13.8.3 SYSTAT System

14 Structural Equation Modeling (by Jodie B. Ullman)
  14.1 General Purpose and Description
  14.2 Kinds of Research Questions
    14.2.1 Adequacy of the Model
    14.2.2 Testing Theory
    14.2.3 Amount of Variance in the Variables Accounted for by the Factors
    14.2.4 Reliability of the Indicators
    14.2.5 Parameter Estimates
    14.2.6 Intervening Variables
    14.2.7 Group Differences
    14.2.8 Longitudinal Differences
    14.2.9 Multilevel Modeling
    14.2.10 Latent Class Analysis
  14.3 Limitations to Structural Equation Modeling
    14.3.1 Theoretical Issues
    14.3.2 Practical Issues
      14.3.2.1 Sample Size and Missing Data
      14.3.2.2 Multivariate Normality and Outliers
      14.3.2.3 Linearity
      14.3.2.4 Absence of Multicollinearity and Singularity
      14.3.2.5 Residuals
  14.4 Fundamental Equations for Structural Equations Modeling
    14.4.1 Covariance Algebra
    14.4.2 Model Hypotheses
    14.4.3 Model Specification
    14.4.4 Model Estimation
    14.4.5 Model Evaluation
    14.4.6 Computer Analysis of Small-Sample Example
  14.5 Some Important Issues
    14.5.1 Model Identification
    14.5.2 Estimation Techniques
      14.5.2.1 Estimation Methods and Sample Size
      14.5.2.2 Estimation Methods and Nonnormality
      14.5.2.3 Estimation Methods and Dependence
      14.5.2.4 Some Recommendations for Choice of Estimation Method
    14.5.3 Assessing the Fit of the Model
      14.5.3.1 Comparative Fit Indices
      14.5.3.2 Absolute Fit Index
      14.5.3.3 Indices of Proportion of Variance Accounted
      14.5.3.4 Degree of Parsimony Fit Indices
      14.5.3.5 Residual-Based Fit Indices
      14.5.3.6 Choosing Among Fit Indices
    14.5.4 Model Modification
      14.5.4.1 Chi-Square Difference Test
      14.5.4.2 Lagrange Multiplier (LM) Test
      14.5.4.3 Wald Test
      14.5.4.4 Some Caveats and Hints on Model Modification
    14.5.5 Reliability and Proportion of Variance
    14.5.6 Discrete and Ordinal Data
    14.5.7 Multiple Group Models
    14.5.8 Mean and Covariance Structure Models
  14.6 Complete Examples of Structural Equation Modeling Analysis
    14.6.1 Confirmatory Factor Analysis of the WISC
      14.6.1.1 Model Specification for CFA
      14.6.1.2 Evaluation of Assumptions for CFA
      14.6.1.3 CFA Model Estimation and Preliminary Evaluation
      14.6.1.4 Model Modification
    14.6.2 SEM of Health Data
      14.6.2.1 SEM Model Specification
      14.6.2.2 Evaluation of Assumptions for SEM
      14.6.2.3 SEM Model Estimation and Preliminary Evaluation
      14.6.2.4 Model Modification
  14.7 Comparison of Programs
    14.7.1 EQS
    14.7.2 LISREL
    14.7.3 AMOS
    14.7.4 SAS System

15 Multilevel Linear Modeling
  15.1 General Purpose and Description
  15.2 Kinds of Research Questions
    15.2.1 Group Differences in Means
    15.2.2 Group Differences in Slopes
    15.2.3 Cross-Level Interactions
    15.2.4 Meta-Analysis
    15.2.5 Relative Strength of Predictors at Various Levels
    15.2.6 Individual and Group Structure
    15.2.7 Effect Size
    15.2.8 Path Analysis at Individual and Group Levels
    15.2.9 Analysis of Longitudinal Data
    15.2.10 Multilevel Logistic Regression
    15.2.11 Multiple Response Analysis

15.3 Limitations to Multilevel Linear Modeling 618
15.3.1 Theoretical Issues 618
15.3.2 Practical Issues 618
15.3.2.1 Sample Size, Unequal-n, and Missing Data 619
15.3.2.2 Independence of Errors 619
15.3.2.3 Absence of Multicollinearity and Singularity 620
15.4 Fundamental Equations 620
15.4.1 Intercepts-Only Model 623
15.4.1.1 The Intercepts-Only Model: Level-1 Equation 623
15.4.1.2 The Intercepts-Only Model: Level-2 Equation 623
15.4.1.3 Computer Analyses of Intercepts-Only Model 624
15.4.2 Model with a First-Level Predictor 627
15.4.2.1 Level-1 Equation for a Model with a Level-1 Predictor 627
15.4.2.2 Level-2 Equations for a Model with a Level-1 Predictor 628
15.4.2.3 Computer Analysis of a Model with a Level-1 Predictor 630
15.4.3 Model with Predictors at First and Second Levels 633
15.4.3.1 Level-1 Equation for Model with Predictors at Both Levels 633
15.4.3.2 Level-2 Equations for Model with Predictors at Both Levels 633
15.4.3.3 Computer Analyses of Model with Predictors at First and Second Levels 634
15.5 Types of MLM 638
15.5.1 Repeated Measures 638
15.5.2 Higher-Order MLM 642
15.5.3 Latent Variables 642
15.5.4 Nonnormal Outcome Variables 643
15.5.5 Multiple Response Models 644
15.6 Some Important Issues 644
15.6.1 Intraclass Correlation 644
15.6.2 Centering Predictors and Changes in Their Interpretations 646
15.6.3 Interactions 648
15.6.4 Random and Fixed Intercepts and Slopes 648
15.6.5 Statistical Inference 651
15.6.5.1 Assessing Models 651
15.6.5.2 Tests of Individual Effects 652
15.6.6 Effect Size 653
15.6.7 Estimation Techniques and Convergence Problems 653
15.6.8 Exploratory Model Building 654
15.7 Complete Example of MLM 655
15.7.1 Evaluation of Assumptions 656
15.7.1.1 Sample Sizes, Missing Data, and Distributions 656
15.7.1.2 Outliers 659
15.7.1.3 Multicollinearity and Singularity 659
15.7.1.4 Independence of Errors: Intraclass Correlations 659
15.7.2 Multilevel Modeling 661
15.8 Comparison of Programs 668
15.8.1 SAS System 668
15.8.2 IBM SPSS Package 670
15.8.3 HLM Program 671
15.8.4 MLwiN Program 671
15.8.5 SYSTAT System 671

16 Multiway Frequency Analysis 672
16.1 General Purpose and Description 672
16.2 Kinds of Research Questions 673
16.2.1 Associations Among Variables 673
16.2.2 Effect on a Dependent Variable 674
16.2.3 Parameter Estimates 674
16.2.4 Importance of Effects 674
16.2.5 Effect Size 674
16.2.6 Specific Comparisons and Trend Analysis 674
16.3 Limitations to Multiway Frequency Analysis 675
16.3.1 Theoretical Issues 675
16.3.2 Practical Issues 675
16.3.2.1 Independence 675
16.3.2.2 Ratio of Cases to Variables 675
16.3.2.3 Adequacy of Expected Frequencies 675
16.3.2.4 Absence of Outliers in the Solution 676
16.4 Fundamental Equations for Multiway Frequency Analysis 676
16.4.1 Screening for Effects 678
16.4.1.1 Total Effect 678
16.4.1.2 First-Order Effects 679
16.4.1.3 Second-Order Effects 679
16.4.1.4 Third-Order Effect 683
16.4.2 Modeling 683
16.4.3 Evaluation and Interpretation 685
16.4.3.1 Residuals 685
16.4.3.2 Parameter Estimates 686
16.4.4 Computer Analyses of Small-Sample Example 690
16.5 Some Important Issues 695
16.5.1 Hierarchical and Nonhierarchical Models 695
16.5.2 Statistical Criteria 696
16.5.2.1 Tests of Models 696
16.5.2.2 Tests of Individual Effects 696
16.5.3 Strategies for Choosing a Model 696
16.5.3.1 IBM SPSS HILOGLINEAR (Hierarchical) 697

16.5.3.2 IBM SPSS GENLOG (General Log-Linear) 697
16.5.3.3 SAS CATMOD and IBM SPSS LOGLINEAR (General Log-Linear) 697
16.6 Complete Example of Multiway Frequency Analysis 698
16.6.1 Evaluation of Assumptions: Adequacy of Expected Frequencies 698
16.6.2 Hierarchical Log-Linear Analysis 700
16.6.2.1 Preliminary Model Screening 700
16.6.2.2 Stepwise Model Selection 702
16.6.2.3 Adequacy of Fit 702
16.6.2.4 Interpretation of the Selected Model 705
16.7 Comparison of Programs 710
16.7.1 IBM SPSS Package 710
16.7.2 SAS System 712
16.7.3 SYSTAT System 713

17 Time-Series Analysis 714
17.1 General Purpose and Description 714
17.2 Kinds of Research Questions 716
17.2.1 Pattern of Autocorrelation 717
17.2.2 Seasonal Cycles and Trends 717
17.2.3 Forecasting 717
17.2.4 Effect of an Intervention 718
17.2.5 Comparing Time Series 718
17.2.6 Time Series with Covariates 718
17.2.7 Effect Size and Power 718
17.3 Assumptions of Time-Series Analysis 718
17.3.1 Theoretical Issues 718
17.3.2 Practical Issues 718
17.3.2.1 Normality of Distributions of Residuals 719
17.3.2.2 Homogeneity of Variance and Zero Mean of Residuals 719
17.3.2.3 Independence of Residuals 719
17.3.2.4 Absence of Outliers 719
17.3.2.5 Sample Size and Missing Data 719
17.4 Fundamental Equations for Time-Series ARIMA Models 720
17.4.1 Identification of ARIMA (p, d, q) Models 720
17.4.1.1 Trend Components, d: Making the Process Stationary 721
17.4.1.2 Auto-Regressive Components 722
17.4.1.3 Moving Average Components 724
17.4.1.4 Mixed Models 724
17.4.1.5 ACFs and PACFs 724
17.4.2 Estimating Model Parameters 729
17.4.3 Diagnosing a Model 729
17.4.4 Computer Analysis of Small-Sample Time-Series Example 734
17.5 Types of Time-Series Analyses 737
17.5.1 Models with Seasonal Components 737
17.5.2 Models with Interventions 738
17.5.2.1 Abrupt, Permanent Effects 741
17.5.2.2 Abrupt, Temporary Effects 742
17.5.2.3 Gradual, Permanent Effects 745
17.5.2.4 Models with Multiple Interventions 746
17.5.3 Adding Continuous Variables 747
17.6 Some Important Issues 748
17.6.1 Patterns of ACFs and PACFs 748
17.6.2 Effect Size 751
17.6.3 Forecasting 752
17.6.4 Statistical Methods for Comparing Two Models 752
17.7 Complete Examples of Time-Series Analysis 753
17.7.1 Time-Series Analysis of Introduction of Seat Belt Law 753
17.7.1.1 Evaluation of Assumptions 754
17.7.1.2 Baseline Model Identification and Estimation 755
17.7.1.3 Baseline Model Diagnosis 758
17.7.1.4 Intervention Analysis 758
17.7.2 Time-Series Analysis of Introduction of a Dashboard to an Educational Computer Game 762
17.7.2.1 Evaluation of Assumptions 763
17.7.2.2 Baseline Model Identification and Diagnosis 765
17.7.2.3 Intervention Analysis 766
17.8 Comparison of Programs 771
17.8.1 IBM SPSS Package 771
17.8.2 SAS System 774
17.8.3 SYSTAT System 774

18 An Overview of the General Linear Model 775
18.1 Linearity and the General Linear Model 775
18.2 Bivariate to Multivariate Statistics and Overview of Techniques 775
18.2.1 Bivariate Form 775
18.2.2 Simple Multivariate Form 777
18.2.3 Full Multivariate Form 778
18.3 Alternative Research Strategies 782

Appendix A
A Skimpy Introduction to Matrix Algebra 783
A.1 The Trace of a Matrix 784
A.2 Addition or Subtraction of a Constant to a Matrix 784
A.3 Multiplication or Division of a Matrix by a Constant 784
A.4 Addition and Subtraction of Two Matrices 785
A.5 Multiplication, Transposes, and Square Roots of Matrices 785

A.6 Matrix "Division" (Inverses and Determinants) 786
A.7 Eigenvalues and Eigenvectors: Procedures for Consolidating Variance from a Matrix 788

Appendix B
Research Designs for Complete Examples 791
B.1 Women's Health and Drug Study 791
B.2 Sexual Attraction Study 793
B.3 Learning Disabilities Data Bank 794
B.4 Reaction Time to Identify Figures 794
B.5 Field Studies of Noise-Induced Sleep Disturbance 795
B.6 Clinical Trial for Primary Biliary Cirrhosis 795
B.7 Impact of Seat Belt Law 795
B.8 The Selene Online Educational Game 796

Appendix C
Statistical Tables 797
C.1 Normal Curve Areas 798
C.2 Critical Values of the t Distribution for α = .05 and .01, Two-Tailed Test 799
C.3 Critical Values of the F Distribution 800
C.4 Critical Values of Chi Square (χ²) 804
C.5 Critical Values for Squared Multiple Correlation (R²) in Forward Stepwise Selection: α = .05 805
C.6 Critical Values for Fmax (S²max/S²min) Distribution for α = .05 and .01 807

References 808
Index 815
Preface
Some good things seem to go on forever: friendship and updating this book. It is difficult to believe that the first edition manuscript was typewritten, with real cutting and pasting. The publisher required a paper manuscript with numbered pages; that was almost our downfall. We could write a book on multivariate statistics, but we couldn't get the same number of pages (about 1200, double-spaced) twice in a row. SPSS was in release 9.0, and the other program we demonstrated was BMDP. There were a mere 11 chapters, only 6 of which described techniques. Multilevel and structural equation modeling were not yet ready for prime time. Logistic regression and survival analysis were not yet popular.

Material new to this edition includes a redo of all SAS examples, with a pretty new output format and replacement of interactive analyses that are no longer available. We've also re-run the IBM SPSS examples to show the new output format. We've tried to update the references in all chapters, including only classic citations if they date prior to 2000. New work on relative importance has been incorporated in multiple regression, canonical correlation, and logistic regression analysis, complete with demonstrations. Multiple imputation procedures for dealing with missing data have been updated, and we've added a new time-series example, taking advantage of an IBM SPSS expert modeler that replaces previous tea-leaf reading aspects of the analysis.

Our goals in writing the book remain the same as in all previous editions: to present complex statistical procedures in a way that is maximally useful and accessible to researchers who are not necessarily statisticians. We strive to be short on theory but long on conceptual understanding. The statistical packages have become increasingly easy to use, making it all the more critical to make sure that they are applied with a good understanding of what they can and cannot do. But above all else: what does it all mean?

We have not changed the basic format underlying all of the technique chapters, now 14 of them. We start with an overview of the technique, followed by the types of research questions the technique is designed to answer. We then provide the cautionary tale: what you need to worry about and how to deal with those worries. Then come the fundamental equations underlying the technique, which some readers truly enjoy working through (we know because they helpfully point out any errors and/or inconsistencies they find); other readers discover they can skim (or skip) the section without any loss to their ability to conduct meaningful analysis of their research. The fundamental equations are presented in the context of a small, made-up, usually silly data set for which computer analyses are provided, usually IBM SPSS and SAS. Next, we delve into issues surrounding the technique (such as different types of the analysis, follow-up procedures to the main analysis, and effect size, if it is not amply covered elsewhere). Finally, we provide one or two full-bore analyses of an actual real-life data set together with a Results section appropriate for a journal. Data sets for these examples are available at www.pearsonhighered.com in IBM SPSS, SAS, and ASCII formats. We end each technique chapter with a comparison of features available in IBM SPSS, SAS, SYSTAT, and sometimes other specialized programs. SYSTAT is a statistical package that we reluctantly had to drop a few editions ago for lack of space.

We apologize in advance for the heft of the book; it is not our intention to line the coffers of chiropractors, physical therapists, acupuncturists, and the like, but there's really just so much to say. As to our friendship, it's still going strong despite living in different cities. Art has taken the place of creating belly dance costumes for both of us, but we remain silly in outlook, although serious in our analysis of research.

The lineup of people to thank grows with each edition, far too extensive to list: students, reviewers, editors, and readers who send us corrections and point out areas of confusion. As always, we take full responsibility for remaining errors and lack of clarity.

Barbara G. Tabachnick
Linda S. Fidell
