
BRIEF REVIEW

EFFECT SIZES FOR PAIRED DATA SHOULD USE THE CHANGE SCORE VARIABILITY RATHER THAN THE PRE-TEST VARIABILITY

SCOTT J. DANKEL AND JEREMY P. LOENNEKE

Kevser Ermin Applied Physiology Laboratory, Department of Health, Exercise Science, and Recreation Management, The University of Mississippi, University, Mississippi

ABSTRACT

Dankel, SJ and Loenneke, JP. Effect sizes for paired data should use the change score variability rather than the pre-test variability. J Strength Cond Res 35(6): 1773–1778, 2021—Effect sizes provide a universal statistic detailing the magnitude of an effect while removing the influence of the sample size. Effect sizes and statistical tests are closely related, with the exception that the effect size illustrates the magnitude of an effect in SD units, whereas the test statistic illustrates the magnitude of the effect in SE units. Avoiding statistical jargon, we illustrate why calculations of effect sizes on paired data within the sports and exercise science literature are repeatedly performed incorrectly, using the variability of the study sample as opposed to the variability of the actual intervention. Statistics and examples are provided to illustrate why effect sizes are being calculated incorrectly. The calculation of effect sizes when examining paired data supports the results of the test statistic, but only when the effect size calculation is made relative to the variability of the intervention (i.e., the change score SD), because this is what is used for the calculation of the test statistic. Effect size calculations that are made on paired data should be made relative to the SD of the change score because this provides the information of the statistical test while removing the influence of the sample size. After all, we are interested in how variable the intervention is rather than how variable the sample population is. Effect size calculations that are made on pre-test/post-test designs should be calculated as the change score divided by the SD of the change score.

KEY WORDS Cohen's d, intervention, meta-analyses, paired t-test, SD, statistics

Address correspondence to Dr. Jeremy P. Loenneke, [email protected].

INTRODUCTION

Numerous meta-analyses are computed within the exercise science literature, and the efficacy of the intervention is determined by calculating an effect size. There are several effect size measures that are commonly computed, such as Cohen's d, Hedges g, and Glass delta. Importantly, all these measures are computed in a manner that details the magnitude of change (i.e., the difference between groups) relative to the SD of the population being tested (i.e., the pre-test SD or the pooled pre-test and post-test SDs). These effect size measures that are made relative to the pre-test SD are intended to be used when dealing with independent groups (i.e., unpaired data). This is apparent, given that the test statistic (i.e., the t statistic) for an independent t-test is the mean difference between groups divided by the pooled SE of the 2 groups (t statistic = mean difference/pooled SE). In the case of an independent t-test, each individual is only measured once, and as such, the SE within each group is dependent on the level of between-subject variability that exists within each group. This between-subject variability is then pooled together to create the denominator of the test statistic, and thus the final test statistic details how many SE units group 1 is from group 2. In other words, the effect size is the same calculation as the test statistic, except that it deals with SD units as opposed to SE units, and it is intended to detail the magnitude of the effect while removing the influence of the sample size. It does so because the SE is calculated as the SD divided by the square root of the sample size.

Interestingly, researchers within the exercise science literature almost exclusively analyze paired data in the form of pre-test and post-test designs, such that each individual is measured multiple times (twice in the case of a traditional pre-test and post-test design). This changes the calculation of the test statistic because the between-subject variability is now irrelevant, given that we are only interested in how the post-test compares with the pre-test. The test statistic for a paired t-test can be calculated as the mean difference divided by the SE of the difference. The denominator in this case is not the pooled pre-test and post-test SD because the test is not dealing with 2 groups but rather one group.
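As a brief hypothetical R sketch of this distinction: with the same 5-point improvement in every individual, the between-subject variability dominates the SE when the data are (incorrectly) treated as independent, but cancels out when the data are treated as paired.

# Between-subject variability inflates the independent t-test denominator
# but cancels out of the paired t-test (hypothetical data)
pre  <- c(50, 60, 70, 80, 90)
post <- pre + c(4, 5, 6, 5, 5)               # everyone improves by about 5 points
t.test(post, pre)$statistic                  # independent: t is ~0.5
t.test(post, pre, paired = TRUE)$statistic   # paired: t is ~15.8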


Figure 1. Current effect size calculations are based on the variability in the study population. Illustration of how the pre-test and post-test SDs do not provide
any information about the variability of the actual intervention. A) Interventions 1 and 2 both have the same pre-test and post-test SDs and also have the same
pre-test to post-test mean change. B) Intervention 1 produced similar results across all individuals, whereas the results from intervention 2 are highly variable. C)
The change score and variability of the change score details that intervention 1 produced much more consistent results in comparison to intervention 2. As such,
intervention 1 was statistically significant and had a large effect size when made relative to the change score SD, whereas intervention 2 was not statistically
significant and had a small effect size when made relative to the change score SD. Despite this, the incorrect, yet commonly used effect sizes that are based on
the pre-test SD detail a similar magnitude of effect. Cohen’s d was calculated as the difference in means divided by the pooled pre- and post-test SDs; Glass
delta was calculated as the difference in means divided by the pre-test SD; Hedges g was calculated as an unbiased version of Cohen’s d to account for small
sample sizes. The r value represents the correlation between the pre-test and post-test values.

Notably, dividing the mean change by the pooled pre-test and post-test SD will produce results equivalent to those obtained using the change score SD, but only when the correlation between the pre-test and post-test values is 0.5 (11). If the correlation is greater than 0.5 (which is often the case within the exercise science literature), using the change score SD will produce a greater effect size value, given that the change score SD will be smaller. However, if the SD of the change score is known, there is no need to calculate the correlation between the pre-test and post-test values because it would simply be used to calculate the change score SD. Avoiding statistical jargon, the purpose of this article is to provide examples detailing why effect size calculations on paired data should be made relative to the variability of the change score as opposed to the variability of the pre-test. This is of importance, given the prevalence of using between-subjects effect sizes when examining within-subjects data, which has been done in this journal numerous times within this current year (2018) (1–3,6,8,12–15).

METHODS

Experimental Approach to the Problem
A data set was created to help illustrate why the specific effect size calculation should be dependent on the type of data being examined (i.e., paired vs. unpaired). We provide effect size calculations that are commonly used when examining paired data, such as Cohen's d, Glass delta, and Hedges g. Cohen's d was calculated as the difference in means divided by the pooled pre- and post-test SDs; Glass delta was calculated as the difference in means divided by the pre-test SD; Hedges g was calculated as an unbiased version of Cohen's d to account for small sample sizes. Notably, all these calculations would be intended for unpaired data because they are made relative to the variability in the sample population.

Procedures and Statistical Analyses
In the figures, we have included the pre-test values, post-test values, and the correlation between the pre-test and post-test values. In addition, we created a variable called the change score. This variable can be calculated as the post-test score minus the pre-test score for each individual. The appropriate effect size value (i.e., Cohen's dz) can be calculated as the mean change score divided by the SD of the change score (10). This is very similar to the calculation of Glass delta except that the mean difference is divided by the change score SD as opposed to the pre-test SD. Next, the test statistic (i.e., the t statistic) is calculated as the mean change score divided by the SE of the change score. The t statistic differs from the appropriate effect size calculation only in that the t statistic uses the SE, whereas the effect size calculation uses the SD. Because the SE can be calculated as the SD divided by the square root of the sample size, the appropriate effect size can also be calculated as the t statistic divided by the square root of the sample size. We have also included the p-value of the paired t-test. With these calculations, it is important to note that Cohen's d, Glass delta, and Hedges g are all made relative to the variability of the sample population, whereas the test statistic and appropriate effect size calculations are made relative to the variability of the intervention.
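To make these calculations concrete, the following is a minimal R sketch using a small hypothetical paired data set (illustrative values only, not the data behind the figures); the Hedges g line uses one common small-sample correction factor.

pre  <- c(60, 62, 65, 68, 70, 72, 75, 78, 80, 82)
post <- c(64, 67, 71, 72, 75, 78, 79, 84, 85, 88)
change <- post - pre                          # change score for each individual
n <- length(change)                           # n = 10

# Commonly used (between-subjects) effect sizes
sd_pooled   <- sqrt((sd(pre)^2 + sd(post)^2) / 2)
cohens_d    <- mean(change) / sd_pooled       # pooled pre/post SDs
glass_delta <- mean(change) / sd(pre)         # pre-test SD only
hedges_g    <- cohens_d * (1 - 3 / (4 * (2 * n - 2) - 1))  # small-sample correction

# Appropriate paired effect size and its relation to the paired t-test
cohens_dz <- mean(change) / sd(change)        # mean change / change score SD
t_stat    <- mean(change) / (sd(change) / sqrt(n))
all.equal(t_stat, cohens_dz * sqrt(n))        # TRUE: t = dz * sqrt(n)
t.test(post, pre, paired = TRUE)              # same t statistic and p value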


Figure 2. Current effect size calculations can show different effect sizes with identical data. This figure details how the commonly used effect size calculations are highly
dependent on the homogeneity of the sample itself as opposed to the variability of the actual intervention. A) Intervention 1 was used in a more heterogeneous group of
individuals in comparison to intervention 2. B) The intervention produced identical results across individuals in both intervention 1 and intervention 2. C) The identical results
of the intervention are apparent in that the change scores and variability of the change scores are identical. Despite both interventions producing the exact same results, the
commonly used effect size calculations that rely on the pre-test SDs largely favor intervention 1. The more appropriate effect size calculation (i.e., Cohen’s dz) made relative
to the change score SD appropriately shows that both interventions were equally effective, supporting the results of the statistical test. Cohen's d was calculated as the
difference in means divided by the pooled pre- and post-test SDs; Glass delta was calculated as the difference in means divided by the pre-test SD; Hedges g was
calculated as an unbiased version of Cohen’s d to account for small sample sizes. The r value represents the correlation between the pre-test and post-test values.

RESULTS

Effect size calculations and statistics are computed for 2 sample data sets depicted in Figures 1 and 2. Figure 1 illustrates 2 hypothetical interventions that produce drastically different results despite commonly used effect size measures being identical. Figure 2 depicts 2 hypothetical interventions that produce identical results with respect to the intervention itself, but traditional effect size measures suggest large differences are present. The appropriate effect size, however, provides accurate information in terms of the effectiveness of the intervention. The effect size values would differ if percentage changes were used (Figure 1; intervention 1 = 1.74; intervention 2 = 0.49). We recommend calculating the effect size with the same values used for the statistical test.

DISCUSSION

If only the pre-test and post-test SDs are used, it is impossible to capture the variability of the actual intervention (Figure 1A shows a pre-test and post-test graph of paired data, but the variability of the intervention cannot be calculated from it). This is important to note because, when examining paired data, the only variability that is relevant for the calculation of the test statistic and effect size is the variability that exists with respect to how individuals responded to the intervention. This can easily be assessed by examining the change score (i.e., post-test minus pre-test) for each individual (Figure 1B) and also by computing the SD of the change score across all individuals (Figure 1C). In fact, the change score is the only variable needed for the calculation of the test statistic when computing a paired t-test.

The test statistic (the t statistic) for a paired t-test details how many SE units the post-test is from the pre-test. Therefore, using the same logic as previously, the most appropriate way to calculate an effect size when examining the effect of an intervention is to divide the mean difference by the SD of the difference scores (i.e., Cohen's dz). Although numerous effect sizes are available, a paired t-test is solely dependent on Cohen's dz and the sample size. Cohen's dz details the magnitude of the test statistic while removing the influence of the sample size and shows how many change score SD units the post-test value is from the pre-test value. Again, the pre-test SD is not relevant when examining paired data because each individual's post-test score is only compared with their own pre-test score. This is depicted in Figure 1, where intervention 1 consistently produces a positive effect across individuals whereas intervention 2 does not; if the appropriate effect size is used, this information is captured (the appropriate effect size for intervention 1 is 11.61, whereas the effect size for intervention 2 is 0.24). However, if the inappropriate yet commonly used effect sizes (which depend on the between-subject variability) are used, this information can be entirely misleading, given that both effect sizes will be the same. Importantly, the effect size is supposed to provide information about the statistical test while removing the influence of the sample size; yet, the commonly used effect sizes within the exercise science literature produce the same magnitude of effect despite drastically different test statistics (Figure 1: intervention 1, t statistic = 36.74 vs. intervention 2, t statistic = 0.78). In this example, the sample size is the same for both interventions, and thus the relationship between the test statistic and the appropriate effect size is constant (i.e., the test statistic and appropriate effect size are both approximately 47 times greater in intervention 1 than in intervention 2).
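This constant relationship can be checked directly from the reported values (a brief R sketch; n = 10 is assumed here, consistent with the 9 degrees of freedom quoted for intervention 2 later in the article):

# t = dz * sqrt(n), using the Figure 1 values reported in the text
dz <- c(intervention1 = 11.61, intervention2 = 0.24)
dz * sqrt(10)   # ~36.7 and ~0.76, matching the reported t = 36.74 and t = 0.78
                # (small discrepancies reflect rounding of the reported dz values)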

The commonly used measures of the effect size that divide the pre-test to post-test change by the pre-test SD or by the pooled pre-test and post-test SD are inappropriate and heavily reliant on the heterogeneity of the study population to begin with. For example, if a researcher is examining the efficacy of a weight-loss program using the incorrect method, the program is likely to be deemed more efficacious if the study population is very homogeneous to start with (Figure 2: intervention 2). Conversely, with the incorrect method, the program will be more likely to be deemed ineffective if a more heterogeneous sample of individuals is recruited (Figure 2: intervention 1). In Figure 2, the results of the intervention are identical in 2 different groups of individuals, with the only difference being that intervention 1 was used in a more heterogeneous cohort at baseline. This details that the effect sizes commonly used within the exercise science literature are dependent on the variability in the sample being studied, which is irrelevant in determining whether the results of the intervention (i.e., the changes from baseline) are consistently observed. This is detailed in that the test statistics, p-values, and appropriate effect sizes are identical for both groups (rightfully so, given the intervention results are identical), yet the commonly used effect sizes differ drastically from one intervention to the next (intervention 1: 0.24 vs. intervention 2: 0.86). The more appropriate within-group effect size is made relative to the variability of the intervention itself as opposed to the pre-test variability and is therefore not affected by the heterogeneity of the sample recruited. This makes sense because, if the intervention produces consistent results across individuals, this truly provides support that the intervention is likely to have a positive effect and thus corresponds to a higher effect size (Figure 1; intervention 1). However, if the intervention does not produce consistent results, this does not support the idea that the intervention will be effective, and it will appropriately be accompanied by a smaller effect size (Figure 1; intervention 2).

When using within-subjects designs, normalizing effect size values to the pre-test SD would enable the calculation of a confidence interval before the intervention is even completed. This does not make sense because the confidence interval should provide information on how accurate the estimated effect will be, and this can only be calculated once the variability of the intervention is known. This again points to the flaws of normalizing effect sizes to the pre-test SD, because the magnitude of the effect (i.e., the effectiveness of the intervention) becomes dependent on the individuals recruited rather than the actual effectiveness of the intervention.

In the examples provided in the figures, we calculate the effect sizes individually for 2 different interventions to illustrate the flaws of using the pre-test SD for calculating effect sizes when assessing one intervention group (i.e., treatment group only). If 2 interventions are being directly compared on the outcome measure, then researchers should not calculate and compare 2 effect sizes, but should rather calculate one effect size that directly compares both groups (4). This is the same logic as to why a pre-test/post-test design assessing multiple groups (i.e., experimental vs. control) should not be analyzed by simply running 2 paired t-tests, because there is no direct comparison between groups. The outcome variable must be in the same units for both interventions to be compared because the effect size is computed as the difference in change scores divided by the pooled SDs of the change scores. Therefore, this effect size compares the change and variability from one intervention to the change and variability in another intervention. In all the figures provided, this effect size calculation would be "0" because the difference in change scores (i.e., the numerator in the calculation of the effect size) is 0. This calculation details the magnitude of the effect (the difference in change scores) relative to the variability that occurred within each group. As such, a larger mean difference between groups or a smaller variability (i.e., more consistent results within each group) will produce a larger effect. Conceptually, a large effect size here would detail that one intervention consistently produced greater results than the other intervention. This effect size calculation also deals with the variability in response to the intervention (i.e., the pooled SDs of the change scores); the only difference lies in directly comparing the 2 groups as opposed to looking at each group individually.
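As a sketch of this between-group calculation (the change score vectors are hypothetical, and the pooled SD uses the equal-sample-size simplification):

# One effect size directly comparing 2 interventions: the difference in
# mean change scores divided by the pooled SD of the change scores
change_1 <- c(4, 5, 5, 6, 5, 4, 6, 5, 5, 5)    # consistent responses
change_2 <- c(1, 7, 0, 9, 2, 8, 1, 9, 3, 10)   # highly variable responses

pooled_sd_change <- sqrt((sd(change_1)^2 + sd(change_2)^2) / 2)
d_between <- (mean(change_1) - mean(change_2)) / pooled_sd_change
d_between   # 0 here because the mean changes are equal, mirroring the figures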

It should be mentioned that this information is not novel, and in other fields this is often computed in an appropriate manner. Authors will use the appropriate effect size relative to the change score SD, which, as previously mentioned, details the pre-test to post-test change divided by the SD of the pre-test to post-test change. This again provides the same information as the test statistic but removes the influence of the sample size. Thus, a larger effect size can be obtained in one of 2 ways: (a) a larger mean change from pre to post; or (b) a smaller magnitude of variability with respect to how people responded to the intervention. For meta-analyses that are examining effects on paired data (such as pre-test and post-test designs), the Cohen's dz effect size will offer a more appropriate analysis of the actual intervention used. If this effect size is not provided, it can be calculated from the t statistic by using the following formula:

Cohen's dz = calculated t statistic / √n,

where n represents the sample size. If the t statistic is not provided, but a specific p value is provided (i.e., a nonspecific p value such as "<0.001" cannot be used), the inverse of the cumulative distribution function can be used to obtain the t value. This can be done using the quantile function in the R statistical package (R Foundation for Statistical Computing, Vienna, Austria) with the following syntax:

qt(p, df, lower.tail = FALSE)

where p is the 1-tailed p value for the paired t-test and df is the degrees of freedom. Because most studies will use a 2-tailed test, the p value must be divided by 2 to obtain the 1-tailed analogue. Using the example in Figure 1 for intervention 2, where the p value is 0.453 and there are 9 degrees of freedom, the R code would look as follows:

qt(0.2265, 9, lower.tail = FALSE)

keeping in mind that 0.2265 was entered to convert the 2-tailed p value to a 1-tailed p value (0.453/2). The obtained t statistic can then be used in the formula provided previously.
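Putting the 2 steps together, a brief sketch reproducing the intervention 2 numbers quoted above:

# From a reported 2-tailed p value to Cohen's dz (Figure 1, intervention 2)
p_two_tailed <- 0.453
df <- 9
n  <- df + 1                                             # paired design: df = n - 1
t_stat <- qt(p_two_tailed / 2, df, lower.tail = FALSE)   # ~0.78
t_stat / sqrt(n)                                         # ~0.25, the dz reported as 0.24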
If neither the t statistic nor the SD of the intervention itself (i.e., the change score SD) is provided within the individual study, the change score SD can be calculated by estimating the correlation between the pre-test and post-test values and then plugging the correlation coefficient (r) into the following formula (9):

SDchange score = √(SDpre-test² + SDpost-test² − (2 × r × SDpre-test × SDpost-test))

The correlation coefficient can be estimated by obtaining a pre-post correlation from a similar study, or it can be calculated directly from a similar study where each of the pre-test, post-test, and change score SDs is provided. Researchers can also estimate what the expected correlation coefficient would be for a particular variable, given prior knowledge. Some researchers choose to use a correlation coefficient of 0.5 when this information is not provided (7); however, this strategy will result in the same effect size as pooling the pre-test and post-test SDs (11). This will likely underestimate the magnitude of the effect because, within the exercise science literature, correlations between pre-test and post-test values are often very high, given the large degree of variability in the baseline (pre-test) values of the variables being measured. Therefore, the Cohen's dz statistic will likely produce a higher effect size value than a between-subjects effect size (based on pre-test and post-test SDs), and this is particularly evident when high pre-test to post-test correlations are present (Figure 1; intervention 1). If weaker, or even negative, correlations are present (<0.5), then the Cohen's dz value will be lower than the between-subjects effect size calculation (Figure 1; intervention 2). Within-subject effect sizes will usually be larger, and this should not be seen as an overestimation of the effect size but rather as an indication that different thresholds may be necessary when comparing within- and between-group effect sizes. This is also why within-subject designs are usually more statistically powerful than between-subject designs.

Although not the focus of this article, the inappropriate calculation for effect sizes (mean difference/pre-test SD) is also used for sample size calculations after the effect size is obtained from a similar study. This is problematic and negates the primary benefit of using within-subject (e.g., pre-test and post-test) designs, which improve statistical power by eliminating between-subject variability and focusing only on within-subject variability. This again stems back to the idea that the test statistic is calculated from the SE of the change score, and thus the estimated sample size must be obtained from the SD of the change score (5). This allows calculation of the sample size necessary to observe statistical significance for a given effect size. In other words, given a set alpha level, power, and an estimated effect in SD units, how large a sample size is necessary to convert the SD units into SE units (i.e., the t statistic) to exceed the critical value? Therefore, using the pre-test SD for the effect size estimate does not actually calculate the necessary sample size to reach statistical significance.
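A sketch of these last 2 points, combining the formula above with a paired-design power calculation in base R; all of the summary values below are hypothetical:

# Recover the change score SD from summary statistics, then use it for
# the sample size estimate of a paired (pre-test/post-test) design
sd_pre  <- 10
sd_post <- 11
r       <- 0.9                                 # estimated pre-post correlation
sd_change <- sqrt(sd_pre^2 + sd_post^2 - 2 * r * sd_pre * sd_post)   # ~4.8

# The sample size is driven by the change score SD, not the pre-test SD
power.t.test(delta = 4, sd = sd_change, sig.level = 0.05,
             power = 0.80, type = "paired")    # delta = expected mean change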


In conclusion, the effect size that is often computed for within-subject (pre-test to post-test) designs in the exercise science literature is inappropriate and defeats the entire purpose of the effect size. The effect size is intended to detail the magnitude of the effect while removing the influence of the sample size, providing one unit of measure by which all studies can be compared. The pre-test and post-test SDs have no influence on the magnitude of variability within the intervention itself, nor do they contribute to the calculation of the appropriate test statistic. As such, future studies should not detail the effects of an intervention in pre-test SD units, but should rather detail the effect in change score SD units. The appropriate effect size (Cohen's dz) can be calculated as the mean change divided by the SD of the change score (i.e., the pre-test to post-test difference). When this value is not presented in original articles, it can be computed by estimating the pre-test to post-test correlation for the variable of interest.

PRACTICAL APPLICATIONS

When studies report effect sizes, it is important to check how the effect size was calculated, because studies using the pre-test SD are likely to report lower effect size values than those using the change score SD. When computing effect sizes on paired data, the change score SD provides information on how variable the response to the intervention was, but the pre-test SD does not.

ACKNOWLEDGMENTS

The authors have no conflicts of interest to disclose.

REFERENCES

1. Brigatto, FA, Braz, TV, Zanini, TCDC, et al. Effect of resistance training frequency on neuromuscular performance and muscle morphology after eight weeks in trained men. J Strength Cond Res 33: 2104–2116, 2019.
2. Carballeira, E, Morales, J, Fukuda, DH, et al. Intermittent cooling during judo training in a warm/humid environment reduces autonomic and hormonal impact. J Strength Cond Res 33: 2241–2250, 2019.
3. Coratella, G, Beato, M, Milanese, C, et al. Specific adaptations in performance and muscle architecture after weighted jump-squat vs. body mass squat jump training in recreational soccer players. J Strength Cond Res 32: 921–929, 2018.
4. Dankel, SJ, Mouser, JG, Mattocks, KT, et al. The widespread misuse of effect sizes. J Sci Med Sport 20: 446–450, 2017.
5. Dupont, WD and Plummer, WD. Power and sample size calculations: A review and computer program. Control Clin Trials 11: 116–128, 1990.
6. Fathi, A, Hammami, R, Moran, J, et al. Effect of a 16-week combined strength and plyometric training program followed by a detraining period on athletic performance in pubertal volleyball players. J Strength Cond Res 33: 2117–2127, 2019.
7. Fu, R and Holmer, HK. Change Score or Followup Score? An Empirical Evaluation of the Impact of Choice of Mean Difference Estimates. Rockville, MD: Agency for Healthcare Research and Quality (US), 2015. Available at: http://www.ncbi.nlm.nih.gov/books/NBK285983/. Accessed May 3, 2018.
8. Gonzalo-Skok, O, Tous-Fajardo, J, Moras, G, Arjol-Serrano, JL, and Mendez-Villanueva, A. A repeated power training enhances fatigue resistance while reducing intraset fluctuations. J Strength Cond Res 33: 2711–2721, 2019.
9. Higgins, JPT and Green, S, eds. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available at: http://handbook.cochrane.org. Accessed May 3, 2018.
10. Lakens, D. Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Front Psychol 4: 863, 2013.
11. Morris, SB and DeShon, RP. Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. Psychol Methods 7: 105–125, 2002.
12. Nagata, A, Doma, K, Yamashita, D, Hasegawa, H, and Mori, S. The effect of augmented feedback type and frequency on velocity-based training-induced adaptation and retention. J Strength Cond Res 34: 3110–3117, 2020.
13. Pérez-Castilla, A, Comfort, P, McMahon, JJ, Pestaña-Melero, FL, and García-Ramos, A. Comparison of the force-, velocity-, and power-time curves between the concentric-only and eccentric-concentric bench press exercises. J Strength Cond Res 34: 1618–1624, 2020.
14. Perrotta, AS, Taunton, JE, Koehle, MS, White, MD, and Warburton, DER. Monitoring the prescribed and experienced heart rate-derived training loads in elite field hockey players. J Strength Cond Res 33: 1394–1399, 2019.
15. Ruf, L, Chéry, C, and Taylor, KL. Validity and reliability of the load-velocity relationship to predict the one-repetition maximum in deadlift. J Strength Cond Res 32: 681–689, 2018.

