
Methodology in the Social Sciences

David A. Kenny, Founding Editor


Todd D. Little, Series Editor
www.guilford.com/MSS

This series provides applied researchers and students with analysis and research
design books that emphasize the use of methods to answer research questions.
Rather than emphasizing statistical theory, each volume in the series illustrates
when a technique should (and should not) be used and how the output from
available software programs should (and should not) be interpreted. Common
pitfalls as well as areas of further development are clearly articulated.

RECENT VOLUMES

CONFIRMATORY FACTOR ANALYSIS FOR APPLIED RESEARCH, Second Edition
Timothy A. Brown
PRINCIPLES AND PRACTICE OF STRUCTURAL EQUATION MODELING,
Fourth Edition
Rex B. Kline
HYPOTHESIS TESTING AND MODEL SELECTION IN THE SOCIAL SCIENCES
David L. Weakliem
REGRESSION ANALYSIS AND LINEAR MODELS: Concepts, Applications, and
Implementation
Richard B. Darlington and Andrew F. Hayes
GROWTH MODELING: Structural Equation and Multilevel Modeling Approaches
Kevin J. Grimm, Nilam Ram, and Ryne Estabrook
PSYCHOMETRIC METHODS: Theory into Practice
Larry R. Price

INTRODUCTION TO MEDIATION, MODERATION, AND CONDITIONAL PROCESS ANALYSIS: A Regression-Based Approach, Second Edition
Andrew F. Hayes
MEASUREMENT THEORY AND APPLICATIONS FOR THE SOCIAL SCIENCES
Deborah L. Bandalos
CONDUCTING PERSONAL NETWORK RESEARCH: A Practical Guide
Christopher McCarty, Miranda J. Lubbers, Raffaele Vacca, and José Luis
Molina
QUASI-EXPERIMENTATION: A Guide to Design and Analysis
Charles S. Reichardt
Quasi-Experimentation
A Guide to Design and Analysis

Charles S. Reichardt

Series Editor’s Note by Todd D. Little

THE GUILFORD PRESS
New York London
Epub Edition ISBN: 9781462540242; Kindle Edition ISBN: 9781462540228

Copyright © 2019 The Guilford Press


A Division of Guilford Publications, Inc.
370 Seventh Avenue, Suite 1200, New York, NY 10001
www.guilford.com

All rights reserved

No part of this book may be reproduced, translated, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, microfilming, recording, or otherwise, without written permission from the publisher.

Last digit is print number: 9 8 7 6 5 4 3 2 1

Library of Congress Cataloging-in-Publication Data


Names: Reichardt, Charles S., author.
Title: Quasi-experimentation : a guide to design and analysis / Charles S. Reichardt.
Description: New York : Guilford Press, [2019] | Series: Methodology in the social sciences |
Includes bibliographical references and index.
Identifiers: LCCN 2019017566 | ISBN 9781462540204 (pbk.) | ISBN 9781462540259 (hardcover)
Subjects: LCSH: Social sciences—Experiments. | Social sciences—Methodology. |
Experimental design.
Classification: LCC H62 .R4145 2019 | DDC 001.4/34—dc23
LC record available at https://lccn.loc.gov/2019017566
For Stefan, Grace, and Anne
Series Editor’s Note

Research is all about drawing valid conclusions that inform policy and
practice. The randomized clinical trial (RCT) has evolved as the gold standard
for drawing causal inferences but it really isn’t the golden chariot of valid
inference. It’s not fool’s gold either—it’s a sound design; but, thankfully,
researchers do have other options, and sometimes these other options are
better suited for a specific research question, particularly in field settings.
Chip Reichardt brings you the wonderful world of valid and useful designs
that, when properly implemented, provide accurate findings. His book is a
delightful guide to the fundamental logic in this other world of inferential
research designs—the quasi-experimental world.
As Reichardt indicates, the distinction between experimental and
nonexperimental or quasi-experimental is more in the thoughtfulness with
which the designs are implemented and in the proper application of the
analytics that each design requires. Even RCTs can yield improper
conclusions when they are degraded by factors such as selective attrition, local
treatment effects, treatment noncompliance, variable treatment fidelity, and
the like, particularly when implemented in field settings such as schools,
clinics, and communities. Reichardt brings a thoughtful and practical
discussion of all the issues you need to consider to demonstrate as best as
possible the counterfactual that is a hallmark of accurate inference.
I like the word verisimilitude—the truthlike value of a study’s results.
When you take Reichardt’s advice and implement his tips, your research will
benefit by having the greatest extent of verisimilitude. In this delightfully
penned book, Reichardt shares his vast state-of-the-craft understanding of how to draw
valid conclusions using all manner of inferential design. Theoretical
approaches to inferential designs have matured considerably, particularly
when modern missing-data treatments and best-practice statistical methods
are employed. Having studied and written extensively on these designs,
Reichardt is at the top of the mountain when it comes to understanding and
sharing his insights on these matters. But he does it so effortlessly and
accessibly. This book is the kind you could incorporate into undergraduate
curricula where a second course in design and statistics might be offered. For
sure it is a “must” at the graduate level, and even seasoned researchers would
benefit from the modernization that Reichardt brings to the inferential
designs he covers.
Beyond the thoughtful insights, tips, and wisdom that Reichardt brings to
the designs, his book is extra rich with pedagogical features. He is a very
gifted educator and expertly guides you through the numbered equations
using clear and simple language. He does the same when he guides you
through the output from analysis derived from each of the designs he covers.
Putting accessible words to numbers and core concepts is one of his
superpowers, which you will see throughout the book as well as in the glossary of
key terms and ideas he compiled. His many and varied examples are engaging
because they span many disciplines. They provide a comprehensive
grounding in how the designs can be tailored to address critical questions
with which we all can resonate.
Given that the type of research Reichardt covers here is fundamentally
about social justice (identifying treatment effects as accurately as possible), if
we follow his lead, our findings will change policy and practice to ultimately
improve people’s lives. Reichardt has given us this gift; I ask that you pay it
forward by following his lead in the research you conduct. You will find his
sage advice and guidance invaluable. As Reichardt says in the Preface,
“Without knowing the varying effects of treatments, we cannot well know if
our theories of behavior are correct or how to intervene to improve the
human condition.” As always, enjoy!

TODD D. LITTLE
Society for Research in Child Development
meeting
Baltimore, Maryland
Preface

Questions about cause and effect are ubiquitous. For example, we often ask
questions such as the following: How effective is a new diet and exercise
program? How likely is it that an innovative medical regimen will cure
cancer? How much does an intensive manpower training program improve
the prospects of the unemployed? How do such effects vary across different
people, settings, times, and outcome measures? Without knowing the varying
effects of treatments, we cannot well know if our theories of behavior are
correct or how to intervene to improve the human condition. Quasi-
experiments are designs frequently used to estimate such effects, and this
book will show you how to use them for that purpose.
This volume explains the logic of both the design of quasi-experiments
and the analysis of the data they produce to provide estimates of treatment
effects that are as credible as can be obtained given the demanding constraints
of research practice. Readers gain both a broad overview of quasi-
experimentation and in-depth treatment of the details of design and analysis.
The book brings together the insights of others that are widely scattered
throughout the literature—along with a few insights of my own. Design and
statistical techniques for a full coverage of quasi-experimentation are
collected in an accessible format, in a single volume, for the first time.
Although the use of quasi-experiments to estimate the effects of
treatments can be highly quantitative and statistical, you will need only a
basic understanding of research methods and statistical inference, up through
multiple regression, to understand the topics covered in this book. Even then,
elementary statistical and methodological topics are reviewed when it would
be helpful. All told, the book’s presentation relies on common sense and
intuition far more than on mathematical machinations. As a result, this book
will make the material easier to understand than if you read the original
literature on your own. My purpose is to illuminate the conceptual
foundation of quasi-experimentation so that you are well equipped to explore
more technical literature for yourself.
While most writing on quasi-experimentation focuses on a few
prototypical research designs, this book covers a wider range of design
options than is available elsewhere. Included among those are research
designs that remove bias from estimates of treatment effects. With an
understanding of the complete typology of design options, you will no longer
need to choose among a few prototypical quasi-experiments but can craft a
unique design to suit your specific research needs. Designing a study to
estimate treatment effects is fundamentally a process of pattern matching. I
provide examples from diverse fields of the means to create the detailed
patterns that make pattern matching most effective.

ORGANIZATION AND COVERAGE

I begin with a general overview of quasi-experimentation, and then, in Chapter 2, I define a treatment effect and the hurdles over which one must
leap to draw credible causal inferences. Chapter 3 explains that the effect of a
treatment is a function of five size-of-effect factors: the treatment or cause,
the participants in the study, the times at which the treatments are
implemented and effects assessed, the settings in which the treatments are
implemented and outcomes assessed, and the outcome measures upon which
the effects of the treatment are estimated. Also included is a simplified
perspective on threats to validity. Chapter 4 introduces randomized
experiments because they serve as a benchmark with which to compare quasi-
experiments.
Chapter 5 begins the discussion of alternatives to randomized
experiments by starting with a design that is not even quasi-experimental
because it lacks an explicit comparison of treatment conditions. Chapters 6–9
present four prototypical quasi-experiments: the pretest–posttest design, the
nonequivalent group design, the regression discontinuity design, and the
interrupted time-series design. The threats to internal validity in each design
that can bias the estimate of a treatment effect are described, along with the
methods for coping with these threats, including both simple and advanced
statistical analyses.
Since researchers need to be able to creatively craft designs to best fit their
specific research settings, Chapter 10 presents a typology of designs for
estimating treatment effects that goes beyond the prototypical designs in
Chapters 6–9. Chapter 11 shows how each of the fundamental design types in
the typology can be elaborated by adding one or more supplementary
comparisons. The purpose of the additional comparisons is to rule out
specific threats to internal validity. Such elaborated designs employ
comparisons that differ in one of four ways: they involve different
participants, times, settings, or outcome measures.
Chapter 12 describes and provides examples of unfocused design
elaboration and explains how unfocused design elaboration can address the
multiple threats to validity that can be present. Chapter 12 also conceptualizes
the process of estimating treatment effects as a task of pattern matching. The
design typology presented in Chapter 10, together with the design
elaborations described in Chapters 11 and 12, provides the tools by which
researchers can tailor a design pattern to fit the circumstances of the
research setting. The book concludes in Chapter 13 with an examination of
the underlying principles of good design and analysis.
Throughout, the book explicates the strengths and weaknesses of the
many current approaches to the design and statistical analysis of data from
quasi-experiments. Among many other topics, sensitivity analyses and
advanced tactics for addressing inevitable problems, such as missing data and
noncompliance with treatment assignment, are described. Advice and tips on
the use of different design and analysis techniques, such as propensity score
matching, instrumental variable approaches, and local regression techniques,
as well as caveats about interpretation, are also provided. Detailed examples
from diverse disciplinary fields illustrate the techniques, and each
mathematical equation is translated into words.
Whether you are a graduate student or a seasoned researcher, you will
find herein the easiest to understand, most up-to-date, and most
comprehensive coverage of modern approaches to quasi-experimentation.
With these tools, you will be well able to estimate the effects of treatments in
field settings across the range of the social and behavioral sciences.

ACKNOWLEDGMENTS

My thinking about quasi-experimentation has been greatly influenced by interactions with many people over the years—especially Bob Boruch, Don
Campbell, Tom Cook, Harry Gollob, Gary Henry, Mel Mark, Will Shadish,
Ben Underwood, and Steve West. I benefited greatly from comments on the
manuscript by several initially anonymous reviewers, three of whom are
Manuel González Canché, Felix J. Thoemmes, and Meagan C. Arrastia-
Chisholm. I am especially indebted to Steve West and Keith Widaman for
their exceptionally careful reading of the manuscript and their detailed
comments. C. Deborah Laughton provided invaluable guidance throughout
the work on the volume. My sincere thanks to all.
Contents

Methodology in the Social Sciences

Title Page

Copyright Page

Dedication

Series Editor’s Note

Preface

1 • Introduction
Overview
1.1 Introduction
1.2 The Definition of Quasi-Experiment
1.3 Why Study Quasi-Experiments?
1.4 Overview of the Volume
1.5 Conclusions
1.6 Suggested Reading

2 • Cause and Effect


Overview
2.1 Introduction
2.2 Practical Comparisons and Confounds
2.3 The Counterfactual Definition
2.4 The Stable-Unit-Treatment-Value Assumption
2.5 The Causal Question Being Addressed
2.6 Conventions
2.7 Conclusions
2.8 Suggested Reading

3 • Threats to Validity
Overview
3.1 Introduction
3.2 The Size of an Effect
3.2.1 Cause
3.2.2 Participant
3.2.3 Time
3.2.4 Setting
3.2.5 Outcome Measure
3.2.6 The Causal Function
3.3 Construct Validity
3.3.1 Cause
3.3.2 Participant
3.3.3 Time
3.3.4 Setting
3.3.5 Outcome Measure
3.3.6 Taking Account of Threats to Construct Validity
3.4 Internal Validity
3.4.1 Participant
3.4.2 Time
3.4.3 Setting
3.4.4 Outcome Measure
3.5 Statistical Conclusion Validity
3.6 External Validity
3.6.1 Cause
3.6.2 Participant
3.6.3 Time
3.6.4 Setting
3.6.5 Outcome Measure
3.6.6 Achieving External Validity
3.7 Trade-Offs among Types of Validity
3.8 A Focus on Internal and Statistical Conclusion Validity
3.9 Conclusions
3.10 Suggested Reading

4 • Randomized Experiments
Overview
4.1 Introduction
4.2 Between-Groups Randomized Experiments
4.3 Examples of Randomized Experiments Conducted in the Field
4.4 Selection Differences
4.5 Analysis of Data from the Posttest-Only Randomized Experiment
4.6 Analysis of Data from the Pretest–Posttest Randomized Experiment
4.6.1 The Basic ANCOVA Model
4.6.2 The Linear Interaction ANCOVA Model
4.6.3 The Quadratic ANCOVA Model
4.6.4 Blocking and Matching
4.7 Noncompliance with Treatment Assignment
4.7.1 Treatment-as-Received Analysis
4.7.2 Per-Protocol Analysis
4.7.3 Intention-to-Treat or Treatment-as-Assigned Analysis
4.7.4 Complier Average Causal Effect
4.7.5 Randomized Encouragement Designs
4.8 Missing Data and Attrition
4.8.1 Three Types of Missing Data
4.8.2 Three Best Practices
4.8.3 A Conditionally Acceptable Method
4.8.4 Unacceptable Methods
4.8.5 Conclusions about Missing Data
4.9 Cluster-Randomized Experiments
4.9.1 Advantages of Cluster Designs
4.9.2 Hierarchical Analysis of Data from Cluster Designs
4.9.3 Precision and Power of Cluster Designs
4.9.4 Blocking and ANCOVA in Cluster Designs
4.9.5 Nonhierarchical Analysis of Data from Cluster Designs
4.10 Other Threats to Validity in Randomized Experiments
4.11 Strengths and Weaknesses
4.12 Conclusions

4.13 Suggested Reading

5 • One-Group Posttest-Only Designs


Overview
5.1 Introduction
5.2 Examples of One-Group Posttest-Only Designs
5.3 Strengths and Weaknesses
5.4 Conclusions
5.5 Suggested Reading

6 • Pretest–Posttest Designs
Overview
6.1 Introduction
6.2 Examples of Pretest–Posttest Designs
6.3 Threats to Internal Validity
6.3.1 History (Including Co-Occurring Treatments)
6.3.2 Maturation
6.3.3 Testing
6.3.4 Instrumentation
6.3.5 Selection Differences (Including Attrition)
6.3.6 Cyclical Changes (Including Seasonality)
6.3.7 Regression toward the Mean
6.3.8 Chance
6.4 Design Variations
6.5 Strengths and Weaknesses
6.6 Conclusions
6.7 Suggested Reading

7 • Nonequivalent Group Designs


Overview
7.1 Introduction
7.2 Two Basic Nonequivalent Group Designs
7.3 Change-Score Analysis
7.4 Analysis of Covariance
7.4.1 Hidden Bias
7.4.2 Measurement Error in the Covariates
7.5 Matching and Blocking
7.6 Propensity Scores
7.6.1 Estimating Propensity Scores
7.6.2 Checking Balance
7.6.3 Estimating the Treatment Effect
7.6.4 Bias
7.7 Instrumental Variables
7.8 Selection Models
7.9 Sensitivity Analyses and Tests of Ignorability
7.9.1 Sensitivity Analysis Type I
7.9.2 Sensitivity Analysis Type II
7.9.3 The Problems with Sensitivity Analyses
7.9.4 Tests of Ignorability Using Added Comparisons
7.10 Other Threats to Internal Validity besides Selection Differences
7.11 Alternative Nonequivalent Group Designs
7.11.1 Separate Pretest and Posttest Samples
7.11.2 Cohort Designs
7.11.3 Multiple Comparison Groups
7.11.4 Multiple Outcome Measures
7.11.5 Multiple Pretest Measures over Time
7.11.6 Multiple Treatments over Time
7.12 Empirical Evaluations and Best Practices
7.12.1 Similar Treatment and Comparison Groups
7.12.2 Adjusting for the Selection Differences That Remain
7.12.3 A Rich and Reliable Set of Covariates
7.12.4 Design Supplements
7.13 Strengths and Weaknesses
7.14 Conclusions
7.15 Suggested Reading

8 • Regression Discontinuity Designs


Overview
8.1 Introduction
8.2 The Quantitative Assignment Variable
8.2.1 Assignment Based on Need or Risk
8.2.2 Assignment Based on Merit
8.2.3 Other Types of Assignment
8.2.4 Qualities of the QAV
8.3 Statistical Analysis
8.3.1 Plots of the Data and Preliminary Analyses
8.3.2 Global Regression
8.3.3 Local Regression
8.3.4 Other Approaches
8.4 Fuzzy Regression Discontinuity
8.4.1 Intention-to-Treat Analysis
8.4.2 Complier Average Causal Effect
8.5 Threats to Internal Validity
8.5.1 History (Including Co-Occurring Treatments)
8.5.2 Differential Attrition
8.5.3 Manipulation of the QAV
8.6 Supplemented Designs
8.6.1 Multiple Cutoff Scores
8.6.2 Pretreatment Measures
8.6.3 Nonequivalent Dependent Variables
8.6.4 Nonequivalent Groups
8.6.5 Randomized Experiment Combinations
8.7 Cluster RD Designs
8.8 Strengths and Weaknesses
8.8.1 Ease of Implementation
8.8.2 Generalizability of Results
8.8.3 Power and Precision
8.8.4 Credibility of Results
8.9 Conclusions
8.10 Suggested Reading

9 • Interrupted Time-Series Designs


Overview
9.1 Introduction
9.2 The Temporal Pattern of the Treatment Effect
9.3 Two Versions of the Design
9.4 The Statistical Analysis of Data When N = 1
9.4.1 The Number of Time Points (J) Is Large
9.4.2 The Number of Time Points (J) Is Small
9.5 The Statistical Analysis of Data When N Is Large
9.6 Threats to Internal Validity
9.6.1 Maturation
9.6.2 Cyclical Changes (Including Seasonality)
9.6.3 Regression toward the Mean
9.6.4 Testing
9.6.5 History
9.6.6 Instrumentation
9.6.7 Selection Differences (Including Attrition)
9.6.8 Chance
9.7 Design Supplements I: Multiple Interventions
9.7.1 Removed or Reversed Treatment Designs
9.7.2 Repeated Treatment Designs
9.7.3 Designs with Different Treatments
9.8 Design Supplements II: Basic Comparative ITS Designs
9.8.1 When N = 1 in Each Treatment Condition
9.8.2 When N Is Large in Each Treatment Condition
9.8.3 Caveats in Interpreting the Results of CITS Analyses
9.9 Design Supplements III: Comparative ITS Designs with Multiple Treatments
9.10 Single-Case Designs
9.11 Strengths and Weaknesses
9.12 Conclusions
9.13 Suggested Reading

10 • A Typology of Comparisons
Overview
10.1 Introduction
10.2 The Principle of Parallelism
10.3 Comparisons across Participants
10.4 Comparisons across Times
10.5 Comparisons across Settings
10.6 Comparisons across Outcome Measures
10.7 Within- and Between-Subject Designs
10.8 A Typology of Comparisons
10.9 Random Assignment to Treatment Conditions
10.10 Assignment to Treatment Conditions Based on an Explicit Quantitative Ordering
10.11 Nonequivalent Assignment to Treatment Conditions
10.12 Credibility and Ease of Implementation
10.13 The Most Commonly Used Comparisons
10.14 Conclusions
10.15 Suggested Reading

11 • Methods of Design Elaboration


Overview
11.1 Introduction
11.2 Three Methods of Design Elaboration
11.2.1 The Estimate-and-Subtract Method of Design Elaboration
11.2.2 The Vary-the-Size-of-the-Treatment-Effect Method of Design Elaboration
11.2.3 The Vary-the-Size-of-the-Bias Method of Design Elaboration
11.3 The Four Size-of-Effect Factors as Sources for the Two Estimates in Design
Elaboration
11.3.1 Different Participants
11.3.2 Different Times
11.3.3 Different Settings
11.3.4 Different Outcome Measures
11.3.5 Multiple Different Size-of-Effect Factors
11.4 Conclusions
11.5 Suggested Reading

12 • Unfocused Design Elaboration and Pattern Matching


Overview
12.1 Introduction
12.2 Four Examples of Unfocused Design Elaboration
12.3 Pattern Matching
12.4 Conclusions
12.5 Suggested Reading

13 • Principles of Design and Analysis for Estimating Effects


Overview
13.1 Introduction
13.2 Design Trumps Statistics
13.3 Customized Designs
13.4 Threats to Validity
13.5 The Principle of Parallelism
13.6 The Typology of Simple Comparisons
13.7 Pattern Matching and Design Elaborations
13.8 Size of Effects
13.9 Bracketing Estimates of Effects

13.10 Critical Multiplism


13.11 Mediation
13.12 Moderation
13.13 Implementation
13.13.1 Intervention
13.13.2 Participants
13.13.3 Times and Settings
13.13.4 Measurements and Statistical Analyses
13.14 Qualitative Research Methods
13.15 Honest and Open Reporting of Results
13.16 Conclusions
13.17 Suggested Reading

Appendix: The Problems of Overdetermination and Preemption

Glossary

References

Author Index

Subject Index

About the Author

About Guilford Press

Discover Related Guilford Books


interrupted time-series (ITS) design and, 220–222, 223–225, 230–231, 239
maturation and, 102–103
methods of design elaboration and, 259–265
noncompliance with treatment assignment and, 67
nonequivalent group design and, 148–150
overview, 6–7, 26–28, 43
pattern matching and, 276
pretest-posttest design and, 101–107, 110–111
prioritizing internal and statistical conclusion validity and, 42–43
randomized experiments and, 89–91
regression discontinuity (RD) design and, 188–190
regression toward the mean, 106–107
research design and, 280, 281–283
seasonality, 105–106
selection differences, 105
size of effect and, 28–31
statistical conclusion validity, 37–38
testing effects, 103–104
trade-offs between the types of validity and, 42–43
unfocused design elaborations and, 275, 278
Tie-breaking randomized experiment, 194, 317
Time (T) factor. See also Size-of-effects factors
assignment based on an explicit quantitative ordering and, 253
causal function and, 30–31
comparisons and, 248–249, 251–252, 251t, 257, 258
construct validity, 33–34, 35
estimates in design elaboration, 267
external validity, 39–40
internal validity, 37
interrupted time-series (ITS) design and, 204–206, 205f, 206f
multiple different size-of-effects factors and, 269–270
nonequivalent assignment to treatment conditions and, 255
overview, 26, 29
principle of parallelism and, 247–248
random assignment to treatment conditions and, 252
research design and, 297
temporal pattern of treatment effects, 206–208, 208f

Transition, 241, 317
Treatment assignment, 67–76, 72t, 74t
Treatment effect interactions, 9, 317
Treatment effects. See also Outcome measures (O) factor; Size-of-effects factors
bracketing estimates of effects and, 288–290
comparisons and confounds and, 13–15
conventions and, 22–24
counterfactual definition and, 15–17
definition, 317
design elaboration methods and, 284–285
estimating with analysis of covariance (ANCOVA), 56–57
mediation and, 291–295, 292f
moderation and, 295–296
noncompliance with treatment assignment and, 67–68
nonequivalent group design and, 113–114
overview, 1–3, 6–9, 11–13, 12f, 24
pattern matching and, 276–277, 284–285
precision of the estimates of, 52–53
problem of overdetermination and, 21, 301–302
problem of preemption and, 21, 302–303
propensity scores and, 137–138
qualitative research methods and, 297–298
randomized experiments and, 46
regression discontinuity (RD) design and, 173–185, 173f, 174f
reporting of results, 299
research design and, 285–288, 286f, 299
selection differences and, 52
stable-unit-treatment-value assumption (SUTVA) and, 17–19
statistical conclusion validity and, 38
temporal pattern of, 206–208, 208f
threats to internal validity and, 281–283
Treatment-as-assigned analysis, 69–71, 317. See also Intention-to-treat (ITT) analysis
Treatment-as-received approach, 68–69, 317
Treatment-on-the-treated (TOT) effect, 16, 75, 318
True experiments. See Randomized experiments
Two-stage least squares (2SLS) regression
complier average causal effect (CACE) and, 74

definition, 318
fuzzy RD design and, 187–188
nonequivalent group design and, 141–142

Uncertainty, 288–289
Uncertainty, degree of, 37–38, 282
Unconfoundedness, 125–126, 318. See also Ignorability
Underfitting the model, 181–183
Unfocused design elaborations. See also Methods of design elaboration
definition, 318
examples of, 273–276
overview, 8, 272–273, 277–278
pattern matching and, 276–277
research design and, 284–285
Units of assignment to treatment conditions, 24, 318

Validity. See also Internal validity; Threats to validity


overview, 6–7
pretest-posttest design and, 101–107, 110–111
randomized experiments and, 89–91
research design and, 280, 281–283
size of effect and, 28–31
trade-offs between the types of, 42–43
Variance inflation factor (VIF), 181–182, 318
Vary-the-size-of-the-bias method of design elaboration, 264–265, 284, 318. See also
Methods of design elaboration
Vary-the-size-of-the-treatment-effect method of design elaboration, 262–264, 284, 318. See
also Methods of design elaboration

Wait-list comparison group, 91


Wald (1940) estimator, 74
What Works Clearinghouse (WWC)
interrupted time-series (ITS) design and, 241
nonequivalent group design and, 161
overview, 77
quantitative assignment variable (QAV) and, 171
randomized experiments and, 92–93

regression discontinuity (RD) design and, 199–200
White noise error, 212, 213, 318
Within-subject designs, 250

About the Author

Charles S. Reichardt, PhD, is Professor of Psychology at the University of Denver. He is an elected Fellow of the American Psychological Society, an
elected member of the Society of Multivariate Experimental Psychology, and
a recipient of the Robert Perloff President’s Prize from the Evaluation
Research Society and the Jeffrey S. Tanaka Award from the Society of
Multivariate Experimental Psychology. Dr. Reichardt’s research focuses on
quasi-experimentation.

About Guilford Press
www.guilford.com
Founded in 1973, Guilford Publications, Inc., has built an international
reputation as a publisher of books, periodicals, software, and DVDs in
mental health, education, geography, and research methods. We pride
ourselves on teaming up with authors who are recognized experts, and
who translate their knowledge into vital, needed resources for
practitioners, academics, and general readers. Our dedicated editorial
professionals work closely on each title to produce high-quality content
that readers can rely on. The firm is owned by its founding partners,
President Bob Matloff and Editor-in-Chief Seymour Weingarten, and
many staff members have been at Guilford for more than 20 years.

Discover Related Guilford
Books
Sign up to receive e-Alerts with new book news and special offers in your
fields of interest:
http://www.guilford.com/e-alerts.

