
ANALYSIS

Power failure: why small sample size undermines the reliability of neuroscience
Katherine S. Button1,2, John P. A. Ioannidis3, Claire Mokrysz1, Brian A. Nosek4,
Jonathan Flint5, Emma S. J. Robinson6 and Marcus R. Munafò1
Abstract | A study with low statistical power has a reduced chance of detecting a true effect,
but it is less well appreciated that low power also reduces the likelihood that a statistically
significant result reflects a true effect. Here, we show that the average statistical power of
studies in the neurosciences is very low. The consequences of this include overestimates of
effect size and low reproducibility of results. There are also ethical dimensions to this
problem, as unreliable research is inefficient and wasteful. Improving reproducibility in
neuroscience is a key priority and requires attention to well-established but often ignored
methodological principles.

1 School of Experimental Psychology, University of Bristol, Bristol, BS8 1TU, UK.
2 School of Social and Community Medicine, University of Bristol, Bristol, BS8 2BN, UK.
3 Stanford University School of Medicine, Stanford, California 94305, USA.
4 Department of Psychology, University of Virginia, Charlottesville, Virginia 22904, USA.
5 Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford, OX3 7BN, UK.
6 School of Physiology and Pharmacology, University of Bristol, Bristol, BS8 1TD, UK.
Correspondence to M.R.M. e-mail: marcus.munafo@bristol.ac.uk
doi:10.1038/nrn3475
Published online 10 April 2013; corrected online 15 April 2013

It has been claimed and demonstrated that many (and possibly most) of the conclusions drawn from biomedical research are probably false1. A central cause for this important problem is that researchers must publish in order to succeed, and publishing is a highly competitive enterprise, with certain kinds of findings more likely to be published than others. Research that produces novel results, statistically significant results (that is, typically p < 0.05) and seemingly ‘clean’ results is more likely to be published2,3. As a consequence, researchers have strong incentives to engage in research practices that make their findings publishable quickly, even if those practices reduce the likelihood that the findings reflect a true (that is, non-null) effect4. Such practices include using flexible study designs and flexible statistical analyses and running small studies with low statistical power1,5. A simulation of genetic association studies showed that a typical dataset would generate at least one false positive result almost 97% of the time6, and two efforts to replicate promising findings in biomedicine reveal replication rates of 25% or less7,8. Given that these publishing biases are pervasive across scientific practice, it is possible that false positives heavily contaminate the neuroscience literature as well, and this problem may affect the most prominent journals at least as much, if not more9,10.

Here, we focus on one major aspect of the problem: low statistical power. The relationship between study power and the veracity of the resulting finding is under-appreciated. Low statistical power (because of low sample size of studies, small effects or both) negatively affects the likelihood that a nominally statistically significant finding actually reflects a true effect. We discuss the problems that arise when low-powered research designs are pervasive. In general, these problems can be divided into two categories. The first concerns problems that are mathematically expected to arise even if the research conducted is otherwise perfect: in other words, when there are no biases that tend to create statistically significant (that is, ‘positive’) results that are spurious. The second category concerns problems that reflect biases that tend to co-occur with studies of low power or that become worse in small, underpowered studies. We next show empirically that statistical power is typically low in the field of neuroscience, using evidence from a range of subfields within the neuroscience literature. We illustrate that low statistical power is an endemic problem in neuroscience and discuss the implications of this for interpreting the results of individual studies.

Low power in the absence of other biases
Three main problems contribute to producing unreliable findings in studies with low power, even when all other research practices are ideal. They are: the low probability of finding true effects; the low positive predictive value (PPV; see BOX 1 for definitions of key statistical terms) when an effect is claimed; and an exaggerated estimate of the magnitude of the effect when a true effect is discovered. Here, we discuss these problems in more detail.
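To make the first two problems concrete, the following minimal Monte Carlo sketch (ours, not from the paper; the group size, true effect size and pre-study odds are illustrative assumptions) simulates a field in which only a quarter of tested effects are real and each study uses 10 observations per group, giving roughly 18% power for a true effect of d = 0.5.

# Illustrative simulation (not from the paper): low power plus low pre-study
# odds means few true effects are detected and many significant results are
# false positives, i.e. the PPV is low.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, true_d, prior_true = 10, 0.5, 0.25   # assumed values, ~18% power
n_experiments = 20_000

true_positives = false_positives = 0
for _ in range(n_experiments):
    effect_is_real = rng.random() < prior_true
    a = rng.normal(true_d if effect_is_real else 0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    if stats.ttest_ind(a, b).pvalue < 0.05:          # 'significant' finding
        if effect_is_real:
            true_positives += 1
        else:
            false_positives += 1

ppv = true_positives / (true_positives + false_positives)
print(f"simulated PPV: {ppv:.2f}")
# Under these assumptions only roughly half of the significant results reflect
# a true effect; repeating the run with 80% power pushes the PPV above 0.8.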


Table 1 (cont.) | Characteristics of included meta-analyses

Study   k    N, median (range)   Summary effect size   Model (random/fixed)   Power, median (range)   Refs
Yang    3    51 (18–205)         0.67                  NA                     0.65 (0.27–1.00)        67
Yuan    14   116.5 (19–1178)     4.98                  Fixed                  0.92 (0.33–1.00)        68
Zafar   8    78.5 (46–483)       1.07*                 Random                 0.05 (0.00–0.06)        69
Zhang   12   337.5 (39–901)      1.27                  Random                 0.14 (0.01–0.30)        70
Zhu     8    110 (48–371)        0.84                  Random                 0.97 (0.81–1.00)        71

The choice of fixed- or random-effects model was made by the original authors of the meta-analysis. k, number of studies; NA, not available. Summary effect sizes are Cohen's d or odds ratios (OR); * indicates the relative risk.

for water maze and radial maze, respectively. Our results indicate that the median statistical power for the water maze studies and the radial maze studies to detect these medium to large effects was 18% and 31%, respectively (TABLE 2). The average sample size in these studies was 22 animals for the water maze and 24 for the radial maze experiments. Studies of this size can only detect very large effects (d = 1.26 for n = 22 and d = 1.20 for n = 24) with 80% power — far larger than those indicated by the meta-analyses. These animal model studies were therefore severely underpowered to detect the summary effects indicated by the meta-analyses. Furthermore, the summary effects are likely to be inflated estimates of the true effects, given the problems associated with small studies described above.

The results described in this section are based on only two meta-analyses, and we should be appropriately cautious in extrapolating from this limited evidence. Nevertheless, it is notable that the results are so consistent with those observed in other fields, such as the neuroimaging and neuroscience studies that we have described above.
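As a rough check on the detectable-effect figures quoted above, the following sketch (our illustration, assuming a two-sided, two-sample t-test with equal group sizes and α = 0.05; the original analysis may have made different design assumptions) recovers the smallest effect detectable at 80% power for typical total sample sizes of 22 and 24 animals.

# Smallest standardized effect detectable at 80% power for the typical sample
# sizes reported above, assuming a two-sided, two-sample t-test with equal
# group sizes (an assumption of this sketch, not a statement of the authors' method).
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for task, total_n in [("water maze", 22), ("radial maze", 24)]:
    d = power_analysis.solve_power(effect_size=None, nobs1=total_n / 2,
                                   alpha=0.05, power=0.80)
    print(f"{task}: n = {total_n} can detect d = {d:.2f} at 80% power")
# Expected output is approximately d = 1.26 (water maze) and d = 1.20 (radial maze),
# matching the values in TABLE 2.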
Figure 3 | Median power of studies included in neuroscience meta-analyses. The figure shows a histogram of median study power calculated for each of the n = 49 meta-analyses included in our analysis, with the number of meta-analyses (N) on the left axis and the percentage of meta-analyses (%) on the right axis. There is a clear bimodal distribution: n = 15 (31%) of the meta-analyses comprised studies with median power of less than 11%, whereas n = 7 (14%) comprised studies with high average power in excess of 90%. Despite this bimodality, most meta-analyses comprised studies with low statistical power: n = 28 (57%) had median study power of less than 31%. The meta-analyses (n = 7) that comprised studies with high average power in excess of 90% had their broadly neurological subject matter in common.

Implications
Implications for the likelihood that a research finding reflects a true effect. Our results indicate that the average statistical power of studies in the field of neuroscience is probably no more than between ~8% and ~31%, on the basis of evidence from diverse subfields within neuroscience. If the low average power we observed across these studies is typical of the neuroscience literature as a whole, this has profound implications for the field. A major implication is that the likelihood that any nominally significant finding actually reflects a true effect is small. As explained above, the probability that a research finding reflects a true effect (PPV) decreases as statistical power decreases for any given pre-study odds (R) and a fixed type I error level. It is easy to show the impact that this is likely to have on the reliability of findings. FIGURE 4 shows how the PPV changes for a range of values for R and for a range of values for the average power in a field. For effects that are genuinely non-null, FIG. 5 shows the degree to which an effect size estimate is likely to be inflated in initial studies — owing to the winner's curse phenomenon — for a range of values for statistical power.

Figure 4 | Positive predictive value as a function of the pre-study odds of association for different levels of statistical power. The probability that a research finding reflects a true effect — also known as the positive predictive value (PPV) — depends on both the pre-study odds of the effect being true (the ratio R of ‘true effects’ over ‘null effects’ in the scientific field) and the study's statistical power. The PPV can be calculated for given values of statistical power (1 − β), pre-study odds ratio (R) and type I error rate (α), using the formula PPV = ([1 − β] × R) / ([1 − β] × R + α). The median statistical power of studies in the neuroscience field is optimistically estimated to be between ~8% and ~31%. The figure illustrates how low statistical power consistent with this estimated range (that is, between 10% and 30%) detrimentally affects the association between the probability that a finding reflects a true effect (PPV) and pre-study odds, assuming α = 0.05. Compared with conditions of appropriate statistical power (that is, 80%), the probability that a research finding reflects a true effect is greatly reduced for 10% and 30% power, especially if pre-study odds are low. Notably, in an exploratory research field such as much of neuroscience, the pre-study odds are often low.
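The PPV formula in the caption above is straightforward to evaluate directly; the short sketch below (ours, not the authors' code) tabulates it for the power levels plotted in FIG. 4 and a few values of the pre-study odds R, with α = 0.05.

# Evaluate PPV = ((1 - beta) * R) / ((1 - beta) * R + alpha), the formula given
# in the Figure 4 caption, for the power levels shown in the figure.
def ppv(power, R, alpha=0.05):
    """Positive predictive value given power (1 - beta), pre-study odds R and alpha."""
    return (power * R) / (power * R + alpha)

for power in (0.10, 0.30, 0.80):
    for R in (0.1, 0.2, 0.5, 1.0):
        print(f"power = {power:.0%}, R = {R}: PPV = {ppv(power, R):.2f}")
# For example, with R = 0.2 the PPV is ~0.29 at 10% power but ~0.76 at 80% power.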


The estimates shown in FIGS 4,5 are likely to be optimistic, however, because they assume that statistical power and R are the only considerations in determining the probability that a research finding reflects a true effect. As we have already discussed, several other biases are also likely to reduce the probability that a research finding reflects a true effect. Moreover, the summary effect size estimates that we used to determine the statistical power of individual studies are themselves likely to be inflated owing to bias — our excess of significance test provided clear evidence for this. Therefore, the average statistical power of studies in our analysis may in fact be even lower than the 8–31% range we observed.
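The inflation referred to here, the winner's curse, can be illustrated with a small simulation of our own; the true effect size, group size and number of simulated studies below are arbitrary illustrative choices, not values from the paper.

# Illustrative winner's-curse simulation (not from the paper): when only
# statistically significant studies are considered, underpowered studies
# systematically overestimate a true effect of d = 0.5.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n_per_group, n_studies = 0.5, 15, 20_000   # ~26% power per study

significant_estimates = []
for _ in range(n_studies):
    treated = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
        significant_estimates.append((treated.mean() - control.mean()) / pooled_sd)

print(f"true d = {true_d}; mean d among significant studies = {np.mean(significant_estimates):.2f}")
# The significant studies report an average d of roughly 0.8-0.9, well above
# the true value of 0.5.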


Table 2 | Sample size required to detect sex differences in water maze and radial maze performance

              Total animals   Required N per study      Typical N per study   Detectable effect for typical N
              used            80% power   95% power     Mean    Median        80% power   95% power
Water maze    420             134         220           22      20            d = 1.26    d = 1.62
Radial maze   514             68          112           24      20            d = 1.20    d = 1.54

Meta-analysis indicated an effect size of Cohen's d = 0.49 for water maze studies and d = 0.69 for radial maze studies.
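For reference, the 'Required N per study' column of TABLE 2 can be reproduced approximately with standard power-analysis machinery, again assuming a two-sided, two-sample t-test with equal group sizes and the meta-analytic effect sizes quoted in the table footnote (our sketch, not the authors' code).

# Approximate total sample size needed to detect the meta-analytic effect
# sizes at 80% and 95% power, assuming a two-sided, two-sample t-test with
# equal group sizes and alpha = 0.05 (assumptions of this sketch).
from math import ceil
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for task, d in [("water maze", 0.49), ("radial maze", 0.69)]:
    for target_power in (0.80, 0.95):
        n_per_group = power_analysis.solve_power(effect_size=d, nobs1=None,
                                                 alpha=0.05, power=target_power)
        print(f"{task}, {target_power:.0%} power: ~{2 * ceil(n_per_group)} animals in total")
# Expected totals are roughly 134/220 (water maze) and 68/112 (radial maze),
# in line with TABLE 2.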

Ethical implications. Low average power in neuroscience studies also has ethical implications. In our analysis of animal model studies, the average sample size of 22 animals for the water maze experiments was only sufficient to detect an effect size of d = 1.26 with 80% power, and the average sample size of 24 animals for the radial maze experiments was only sufficient to detect an effect size of d = 1.20. In order to achieve 80% power to detect, in a single study, the most probable true effects as indicated by the meta-analysis, a sample size of 134 animals would be required for the water maze experiment (assuming an effect size of d = 0.49) and 68 animals for the radial maze experiment (assuming an effect size of d = 0.69); to achieve 95% power, these sample sizes would need to increase to 220 and 112, respectively. What is particularly striking, however, is the inefficiency of a continued reliance on small sample sizes. Despite the apparently large numbers of animals required to achieve acceptable statistical power in these experiments, the total numbers of animals actually used in the studies contributing to the meta-analyses were even larger: 420 for the water maze experiments and 514 for the radial maze experiments.

There is ongoing debate regarding the appropriate balance to strike between using as few animals as possible in experiments and the need to obtain robust, reliable findings. We argue that it is important to appreciate the waste associated with an underpowered study — even a study that achieves only 80% power still presents a 20% possibility that the animals have been sacrificed without the study detecting the underlying true effect. If the average power in neuroscience animal model studies is between 20% and 30%, as we observed in our analysis above, the ethical implications are clear.

Low power therefore has an ethical dimension — unreliable research is inefficient and wasteful. This applies to both human and animal research. The principles of the ‘three Rs’ in animal research (reduce, refine and replace)83 require appropriate experimental design and statistics — both too many and too few animals present an issue, as they reduce the value of research outputs. A requirement for sample size and power calculations is included in the Animal Research: Reporting In Vivo Experiments (ARRIVE) guidelines84, but such calculations require a clear appreciation of the expected magnitude of the effects being sought.

Of course, it is also wasteful to continue data collection once it is clear that the effect being sought does not exist or is too small to be of interest. That is, studies are not just wasteful when they stop too early; they are also wasteful when they stop too late. Planned, sequential analyses are sometimes used in large clinical trials when there is considerable expense or potential harm associated with testing participants. Clinical trials may be stopped prematurely in the case of serious adverse effects, clear beneficial effects (in which case it would be unethical to continue to allocate participants to a placebo condition) or if the interim effects are so unimpressive that any prospect of a positive result with the planned sample size is extremely unlikely85. Within a significance testing framework, such interim analyses — and the protocol for stopping — must be planned for the assumptions of significance testing to hold. Concerns have been raised as to whether stopping trials early is ever justified, given the tendency for such a practice to produce inflated effect size estimates86. Furthermore, the decision process around stopping is not often fully disclosed, increasing the scope for researcher degrees of freedom86. Alternative approaches exist. For example, within a Bayesian framework, one can monitor the Bayes factor and simply stop testing when the evidence is conclusive or when resources
