Quasi-Experimentation: A Guide to Design and Analysis
Charles S. Reichardt
This series provides applied researchers and students with analysis and research
design books that emphasize the use of methods to answer research questions.
Rather than emphasizing statistical theory, each volume in the series illustrates
when a technique should (and should not) be used and how the output from
available software programs should (and should not) be interpreted. Common
pitfalls as well as areas of further development are clearly articulated.
Foreword
Research is all about drawing valid conclusions that inform policy and
practice. The randomized clinical trial (RCT) has evolved as the gold standard
for drawing causal inferences, but it really isn't the golden chariot of valid
inference. It's not fool's gold either; it's a sound design. But, thankfully,
researchers do have other options, and sometimes these other options are
better suited for a specific research question, particularly in field settings.
Chip Reichardt brings you the wonderful world of valid and useful designs
that, when properly implemented, provide accurate findings. His book is a
delightful guide to the fundamental logic in this other world of inferential
research designs—the quasi-experimental world.
As Reichardt indicates, the distinction between experimental and
nonexperimental or quasi-experimental designs lies more in the thoughtfulness with
which the designs are implemented and in the proper application of the
analytics that each design requires. Even RCTs can yield improper
conclusions when they are degraded by factors such as selective attrition, local
treatment effects, treatment noncompliance, variable treatment fidelity, and
the like, particularly when implemented in field settings such as schools,
clinics, and communities. Reichardt brings a thoughtful and practical
discussion of all the issues you need to consider to demonstrate, as well as
possible, the counterfactual that is a hallmark of accurate inference.
I like the word verisimilitude—the truthlike value of a study’s results.
When you take Reichardt’s advice and implement his tips, your research will
benefit from the greatest possible verisimilitude. In this delightfully
penned book, Reichardt shares his vast state-of-the-craft understanding of how
to reach valid conclusions using all manner of inferential design. Theoretical
approaches to inferential designs have matured considerably, particularly
when modern missing-data treatments and best-practice statistical methods
are employed. Having studied and written extensively on these designs,
Reichardt is at the top of the mountain when it comes to understanding and
sharing his insights on these matters. But he does it so effortlessly and
accessibly. This book is the kind you could incorporate into undergraduate
curricula where a second course in design and statistics might be offered. For
sure it is a “must” at the graduate level, and even seasoned researchers would
benefit from the modernization that Reichardt brings to the inferential
designs he covers.
Beyond the thoughtful insights, tips, and wisdom that Reichardt brings to
the designs, his book is extra rich with pedagogical features. He is a very
gifted educator and expertly guides you through the numbered equations
using clear and simple language. He does the same when he guides you
through the output of the analyses for each of the designs he covers.
Putting accessible words to numbers and core concepts is one of his
superpowers, which you will see throughout the book as well as in the glossary of
key terms and ideas he compiled. His many and varied examples are engaging
because they span many disciplines. They provide a comprehensive
grounding in how the designs can be tailored to address critical questions
that resonate with all of us.
Given that the type of research Reichardt covers here is fundamentally
about social justice (identifying treatment effects as accurately as possible), if
we follow his lead, our findings will change policy and practice to ultimately
improve people’s lives. Reichardt has given us this gift; I ask that you pay it
forward by following his lead in the research you conduct. You will find his
sage advice and guidance invaluable. As Reichardt says in the Preface,
“Without knowing the varying effects of treatments, we cannot well know if
our theories of behavior are correct or how to intervene to improve the
human condition.” As always, enjoy!
TODD D. LITTLE
Society for Research in Child Development meeting
Baltimore, Maryland
Preface
Questions about cause and effect are ubiquitous. For example, we often ask
questions such as the following: How effective is a new diet and exercise
program? How likely is it that an innovative medical regimen will cure
cancer? How much does an intensive manpower training program improve
the prospects of the unemployed? How do such effects vary across different
people, settings, times, and outcome measures? Without knowing the varying
effects of treatments, we cannot well know if our theories of behavior are
correct or how to intervene to improve the human condition. Quasi-
experiments are designs frequently used to estimate such effects, and this
book will show you how to use them for that purpose.
This volume explains the logic of both the design of quasi-experiments
and the analysis of the data they produce to provide estimates of treatment
effects that are as credible as can be obtained given the demanding constraints
of research practice. Readers gain both a broad overview of quasi-
experimentation and in-depth treatment of the details of design and analysis.
The book brings together the insights of others that are widely scattered
throughout the literature—along with a few insights of my own. Design and
statistical techniques for full coverage of quasi-experimentation are
collected in an accessible format, in a single volume, for the first time.
Although the use of quasi-experiments to estimate the effects of
treatments can be highly quantitative and statistical, you will need only a
basic understanding of research methods and statistical inference, up through
multiple regression, to understand the topics covered in this book. Even then,
elementary statistical and methodological topics are reviewed where doing so
would be helpful. All told, the book's presentation relies on common sense and
intuition far more than on mathematical machinations. As a result, this book
will make the material easier to understand than if you read the original
literature on your own. My purpose is to illuminate the conceptual
foundation of quasi-experimentation so that you are well equipped to explore
more technical literature for yourself.
While most writing on quasi-experimentation focuses on a few
prototypical research designs, this book covers a wider range of design
options than is available elsewhere. Included among those are research
designs that remove bias from estimates of treatment effects. With an
understanding of the complete typology of design options, you will no longer
need to choose among a few prototypical quasi-experiments but can craft a
unique design to suit your specific research needs. Designing a study to
estimate treatment effects is fundamentally a process of pattern matching. I
provide examples from diverse fields of how to create the detailed
patterns that make pattern matching most effective.
Contents
Title Page
Copyright Page
Dedication
Preface
1 • Introduction
Overview
1.1 Introduction
1.2 The Definition of Quasi-Experiment
1.3 Why Study Quasi-Experiments?
1.4 Overview of the Volume
1.5 Conclusions
1.6 Suggested Reading
3 • Threats to Validity
Overview
3.1 Introduction
3.2 The Size of an Effect
3.2.1 Cause
3.2.2 Participant
3.2.3 Time
3.2.4 Setting
3.2.5 Outcome Measure
3.2.6 The Causal Function
3.3 Construct Validity
3.3.1 Cause
3.3.2 Participant
3.3.3 Time
3.3.4 Setting
3.3.5 Outcome Measure
3.3.6 Taking Account of Threats to Construct Validity
3.4 Internal Validity
3.4.1 Participant
3.4.2 Time
3.4.3 Setting
3.4.4 Outcome Measure
3.5 Statistical Conclusion Validity
3.6 External Validity
3.6.1 Cause
3.6.2 Participant
3.6.3 Time
3.6.4 Setting
3.6.5 Outcome Measure
3.6.6 Achieving External Validity
3.7 Trade-Offs among Types of Validity
3.8 A Focus on Internal and Statistical Conclusion Validity
3.9 Conclusions
3.10 Suggested Reading
4 • Randomized Experiments
Overview
4.1 Introduction
4.2 Between-Groups Randomized Experiments
4.3 Examples of Randomized Experiments Conducted in the Field
4.4 Selection Differences
4.5 Analysis of Data from the Posttest-Only Randomized Experiment
4.6 Analysis of Data from the Pretest–Posttest Randomized Experiment
4.6.1 The Basic ANCOVA Model
4.6.2 The Linear Interaction ANCOVA Model
4.6.3 The Quadratic ANCOVA Model
4.6.4 Blocking and Matching
4.7 Noncompliance with Treatment Assignment
4.7.1 Treatment-as-Received Analysis
4.7.2 Per-Protocol Analysis
4.7.3 Intention-to-Treat or Treatment-as-Assigned Analysis
4.7.4 Complier Average Causal Effect
4.7.5 Randomized Encouragement Designs
4.8 Missing Data and Attrition
4.8.1 Three Types of Missing Data
4.8.2 Three Best Practices
4.8.3 A Conditionally Acceptable Method
4.8.4 Unacceptable Methods
4.8.5 Conclusions about Missing Data
4.9 Cluster-Randomized Experiments
4.9.1 Advantages of Cluster Designs
4.9.2 Hierarchical Analysis of Data from Cluster Designs
4.9.3 Precision and Power of Cluster Designs
4.9.4 Blocking and ANCOVA in Cluster Designs
4.9.5 Nonhierarchical Analysis of Data from Cluster Designs
4.10 Other Threats to Validity in Randomized Experiments
4.11 Strengths and Weaknesses
4.12 Conclusions
6 • Pretest–Posttest Designs
Overview
6.1 Introduction
6.2 Examples of Pretest–Posttest Designs
6.3 Threats to Internal Validity
6.3.1 History (Including Co-Occurring Treatments)
6.3.2 Maturation
6.3.3 Testing
6.3.4 Instrumentation
6.3.5 Selection Differences (Including Attrition)
6.3.6 Cyclical Changes (Including Seasonality)
6.3.7 Regression toward the Mean
6.3.8 Chance
6.4 Design Variations
6.5 Strengths and Weaknesses
6.6 Conclusions
6.7 Suggested Reading
10 • A Typology of Comparisons
Overview
10.1 Introduction
10.2 The Principle of Parallelism
10.3 Comparisons across Participants
10.4 Comparisons across Times
10.5 Comparisons across Settings
10.6 Comparisons across Outcome Measures
10.7 Within- and Between-Subject Designs
10.8 A Typology of Comparisons
10.9 Random Assignment to Treatment Conditions
10.10 Assignment to Treatment Conditions Based on an Explicit Quantitative Ordering
10.11 Nonequivalent Assignment to Treatment Conditions
10.12 Credibility and Ease of Implementation
10.13 The Most Commonly Used Comparisons
10.14 Conclusions
10.15 Suggested Reading
Glossary
References
Author Index
Subject Index
Transition, 241, 317
Treatment assignment, 67–76, 72t, 74t
Treatment effect interactions, 9, 317
Treatment effects. See also Outcome measures (O) factor; Size-of-effects factors
    bracketing estimates of effects and, 288–290
    comparisons and confounds and, 13–15
    conventions and, 22–24
    counterfactual definition and, 15–17
    definition, 317
    design elaboration methods and, 284–285
    estimating with analysis of covariance (ANCOVA), 56–57
    mediation and, 291–295, 292f
    moderation and, 295–296
    noncompliance with treatment assignment and, 67–68
    nonequivalent group design and, 113–114
    overview, 1–3, 6–9, 11–13, 12f, 24
    pattern matching and, 276–277, 284–285
    precision of the estimates of, 52–53
    problem of overdetermination and, 21, 301–302
    problem of preemption and, 21, 302–303
    propensity scores and, 137–138
    qualitative research methods and, 297–298
    randomized experiments and, 46
    regression discontinuity (RD) design and, 173–185, 173f, 174f
    reporting of results, 299
    research design and, 285–288, 286f, 299
    selection differences and, 52
    stable-unit-treatment-value assumption (SUTVA) and, 17–19
    statistical conclusion validity and, 38
    temporal pattern of, 206–208, 208f
    threats to internal validity and, 281–283
Treatment-as-assigned analysis, 69–71, 317. See also Intention-to-treat (ITT) analysis
Treatment-as-received approach, 68–69, 317
Treatment-on-the-treated (TOT) effect, 16, 75, 318
True experiments. See Randomized experiments
Two-stage least squares (2SLS) regression
    complier average causal effect (CACE) and, 74
    definition, 318
    fuzzy RD design and, 187–188
    nonequivalent group design and, 141–142
Uncertainty, 288–289
Uncertainty, degree of, 37–38, 282
Unconfoundedness, 125–126, 318. See also Ignorability
Underfitting the model, 181–183
Unfocused design elaborations. See also Methods of design elaboration
    definition, 318
    examples of, 273–276
    overview, 8, 272–273, 277–278
    pattern matching and, 276–277
    research design and, 284–285
Units of assignment to treatment conditions, 24, 318
White noise error, 212, 213, 318
Within-subject designs, 250
About Guilford Press
www.guilford.com
Founded in 1973, Guilford Publications, Inc., has built an international
reputation as a publisher of books, periodicals, software, and DVDs in
mental health, education, geography, and research methods. We pride
ourselves on teaming up with authors who are recognized experts, and
who translate their knowledge into vital, needed resources for
practitioners, academics, and general readers. Our dedicated editorial
professionals work closely on each title to produce high-quality content
that readers can rely on. The firm is owned by its founding partners,
President Bob Matloff and Editor-in-Chief Seymour Weingarten, and
many staff members have been at Guilford for more than 20 years.