Repeated measures design

Repeated measures design is a research design that involves multiple measures of the same variable taken on the same or matched subjects either under different conditions or over two or more time periods.[1] For instance, repeated measurements are collected in a longitudinal study in which change over time is assessed.

Crossover studies

A popular repeated-measures design is the crossover study. A crossover study is a longitudinal study in which subjects receive a sequence of different treatments (or exposures). While crossover studies can be observational studies, many important crossover studies are controlled experiments. Crossover designs are common for experiments in many scientific disciplines, for example psychology, education, pharmaceutical science, and health care, especially medicine.

Randomized, controlled, crossover experiments are especially important in health care. In a randomized clinical trial, the subjects are randomly assigned treatments. When such a trial is a repeated measures design, the subjects are randomly assigned to a sequence of treatments. A crossover clinical trial is a repeated-measures design in which each patient is randomly assigned to a sequence of treatments, including at least two treatments (of which one may be a standard treatment or a placebo); thus each patient crosses over from one treatment to another.

Nearly all crossover designs have "balance", which means that all subjects receive the same number of treatments and participate for the same number of periods. In most crossover trials, each subject receives all treatments.

However, many repeated-measures designs are not crossovers: for example, a longitudinal study of the sequential effects of repeated treatments need not use any "crossover" (Vonesh & Chinchilli; Jones & Kenward).

Uses

  • Limited number of participants—The repeated measures design reduces the variance of estimates of treatment effects, allowing statistical inference to be made with fewer subjects.[2]
  • Efficiency—Repeated measures designs allow many experiments to be completed more quickly, as fewer groups need to be trained to complete an entire experiment. This is especially useful when each condition takes only a few minutes, whereas training participants to complete the tasks takes as much time, if not more.
  • Longitudinal analysis—Repeated measures designs allow researchers to monitor how participants change over time, in both short- and long-term studies.

Order effects

Order effects may occur when a participant in an experiment performs a task and then performs it again. Examples of order effects include improvement or decline in performance, which may be due to learning effects, boredom or fatigue. The impact of order effects may be smaller in long-term longitudinal studies, or it may be reduced by counterbalancing using a crossover design.

Counterbalancing

In this technique, two groups each perform the same tasks or experience the same conditions, but in reverse order. With two tasks or conditions, two groups are formed, as shown in the table below.

Counterbalancing
Group     First Task/Condition   Second Task/Condition   Remarks
Group A   1                      2                       Performs Task/Condition 1 first, then Task/Condition 2
Group B   2                      1                       Performs Task/Condition 2 first, then Task/Condition 1
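
As an informal illustration, the following Python sketch assigns a hypothetical list of participants alternately to the two orders in the table above, so that each order is used equally often (the participant labels and task names are made up for the example).

```python
from itertools import cycle

# Hypothetical participant labels; any even-sized list works the same way.
participants = ["P01", "P02", "P03", "P04", "P05", "P06"]

# The two counterbalanced orders from the table above (AB and BA).
orders = [("Task/Condition 1", "Task/Condition 2"),
          ("Task/Condition 2", "Task/Condition 1")]

# Alternate participants between the two orders so each order occurs equally often.
assignment = {p: order for p, order in zip(participants, cycle(orders))}

for participant, (first, second) in assignment.items():
    print(f"{participant}: {first} first, then {second}")
```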

Counterbalancing attempts to take account of two important sources of systematic variation in this type of design: practice and boredom effects. Both might otherwise lead to differences in participants' performance due to familiarity with, or fatigue from, the treatments.

Limitations

It may not be possible for each participant to be in all conditions of the experiment (e.g. because of time constraints or the location of the experiment). Severely diseased subjects tend to drop out of longitudinal studies, potentially biasing the results. In these cases, mixed-effects models are preferable because they can handle missing values.

Regression toward the mean may affect conditions with many repetitions. Maturation may affect studies that extend over time. Events outside the experiment may change the response between repetitions.

Repeated measures ANOVA

Figure: an example of a repeated measures design that could be analyzed using rANOVA (repeated measures ANOVA). The independent variable is the time of measurement (levels: Time 1, Time 2, Time 3, Time 4), and the dependent variable is the happiness score. Example happiness scores are given for three participants at each time point (level of the independent variable).

Repeated measures analysis of variance (rANOVA) is a commonly used statistical approach to repeated measures designs.[3] With such designs, the repeated-measures factor (the qualitative independent variable) is the within-subjects factor, while the dependent quantitative variable on which each participant is measured is the dependent variable.
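
As a rough sketch, one way to run such an analysis in Python is with the AnovaRM class from statsmodels. The long-format data frame below is a made-up version of the happiness example described in the figure caption above; the column names subject, time and happiness are illustrative, not part of any standard.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Made-up long-format data: 3 participants each measured at 4 time points.
data = pd.DataFrame({
    "subject":   [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "time":      ["t1", "t2", "t3", "t4"] * 3,
    "happiness": [5, 6, 7, 8, 4, 5, 7, 9, 6, 6, 8, 9],
})

# One within-subjects factor (time); AnovaRM fits the standard univariate rANOVA.
result = AnovaRM(data=data, depvar="happiness", subject="subject", within=["time"]).fit()
print(result)
```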

Partitioning of error

One of the greatest advantages of rANOVA, as is the case with repeated measures designs in general, is the ability to partition out variability due to individual differences. Consider the general structure of the F-statistic:

F = \frac{MS_{\text{Treatment}}}{MS_{\text{Error}}} = \frac{SS_{\text{Treatment}}/df_{\text{Treatment}}}{SS_{\text{Error}}/df_{\text{Error}}}

In a between-subjects design there is an element of variance due to individual differences that is combined with the treatment and error terms:

SS_{\text{Total}} = SS_{\text{Treatment}} + SS_{\text{Error}}
df_{\text{Total}} = n - 1

In a repeated measures design it is possible to partition subject variability from the treatment and error terms. In such a case, variability can be broken down into between-treatments variability (or within-subjects effects, excluding individual differences) and within-treatments variability. The within-treatments variability can be further partitioned into between-subjects variability (individual differences) and error (excluding the individual differences):[4]

SS_{\text{Total}} = SS_{\text{Treatment (excluding individual differences)}} + SS_{\text{Subjects}} + SS_{\text{Error}}
df_{\text{Total}} = df_{\text{Treatment (within subjects)}} + df_{\text{between subjects}} + df_{\text{error}} = (k - 1) + (s - 1) + (k - 1)(s - 1) = ks - 1 = n - 1, where k is the number of time levels and s is the number of subjects.

In reference to the general structure of the F-statistic, it is clear that by partitioning out the between-subjects variability, the F-value will increase because the sum-of-squares error term will be smaller, resulting in a smaller MSError. It is noteworthy that partitioning variability removes degrees of freedom from the F-test, so the partitioned-out between-subjects variability must be large enough to offset the loss in degrees of freedom. If the between-subjects variability is small, this process may actually reduce the F-value.[4]
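
A minimal numerical sketch of this partition, using made-up scores for three subjects measured at four treatment (time) levels, might look like the following; any repeated measures layout with subjects in rows and levels in columns works the same way.

```python
import numpy as np

# Toy data (rows = subjects, columns = treatment/time levels); values are illustrative.
scores = np.array([
    [5.0, 6.0, 7.0, 8.0],
    [4.0, 5.0, 7.0, 9.0],
    [6.0, 6.0, 8.0, 9.0],
])
s, k = scores.shape                      # s subjects, k treatment (time) levels
grand_mean = scores.mean()

# Partition the total sum of squares as in the formulas above.
ss_total = ((scores - grand_mean) ** 2).sum()
ss_treatment = s * ((scores.mean(axis=0) - grand_mean) ** 2).sum()
ss_subjects = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()
ss_error = ss_total - ss_treatment - ss_subjects

df_treatment, df_error = k - 1, (k - 1) * (s - 1)
f_value = (ss_treatment / df_treatment) / (ss_error / df_error)
print(ss_total, ss_treatment, ss_subjects, ss_error, f_value)
```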

Assumptions

As with all statistical analyses, specific assumptions should be met to justify the use of this test. Violations can moderately to severely affect results and often lead to an inflation of Type I error. With the rANOVA, standard univariate and multivariate assumptions apply.[5] The univariate assumptions are:

  • Normality—For each level of the within-subjects factor, the dependent variable must have a normal distribution.
  • Sphericity—Difference scores computed between two levels of a within-subjects factor must have the same variance for the comparison of any two levels. (This assumption only applies if there are more than 2 levels of the independent variable; a small numerical sketch of this idea appears after this list.)
  • Randomness—Cases should be derived from a random sample, and scores from different participants should be independent of each other.
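
To make the sphericity assumption concrete, the informal sketch below reuses the toy scores from the partitioning example and computes the variance of the difference scores for every pair of levels; roughly equal variances are consistent with sphericity, although a formal assessment would normally use Mauchly's test.

```python
import numpy as np
from itertools import combinations

# Same toy layout as before: rows = subjects, columns = levels of the within-subjects factor.
scores = np.array([
    [5.0, 6.0, 7.0, 8.0],
    [4.0, 5.0, 7.0, 9.0],
    [6.0, 6.0, 8.0, 9.0],
])

# Sphericity requires the variance of the difference scores to be (roughly) equal
# for every pair of levels; here we simply print each pairwise variance for inspection.
for i, j in combinations(range(scores.shape[1]), 2):
    diff = scores[:, i] - scores[:, j]
    print(f"levels {i + 1} vs {j + 1}: variance of differences = {diff.var(ddof=1):.2f}")
```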

The rANOVA also requires that certain multivariate assumptions be met, because a multivariate test is conducted on difference scores. These assumptions include:

  • Multivariate normality—The difference scores are multivariately normally distributed in the population.
  • Randomness—Individual cases should be derived from a random sample, and the difference scores for each participant are independent from those of another participant.

F test

As with other analysis of variance tests, the rANOVA makes use of an F statistic to determine significance. Depending on the number of within-subjects factors and assumption violations, it is necessary to select the most appropriate of three tests:[5]

  • Standard Univariate ANOVA F test—This test is commonly used given only two levels of the within-subjects factor (i.e. time point 1 and time point 2). This test is not recommended given more than 2 levels of the within-subjects factor because the assumption of sphericity is commonly violated in such cases.
  • Alternative Univariate test[6]—These tests account for violations of the assumption of sphericity and can be used when the within-subjects factor has more than 2 levels. The F statistic is the same as in the Standard Univariate ANOVA F test, but is associated with a more accurate p-value. This correction is made by adjusting the degrees of freedom downward when determining the critical F value. Two corrections are commonly used: the Greenhouse–Geisser correction and the Huynh–Feldt correction (a rough sketch of computing the Greenhouse–Geisser correction factor appears after this list). The Greenhouse–Geisser correction is more conservative, but addresses a common issue of increasing variability over time in a repeated-measures design.[7] The Huynh–Feldt correction is less conservative, but does not address issues of increasing variability. It has been suggested that the Huynh–Feldt correction be used with smaller departures from sphericity, while the Greenhouse–Geisser correction be used when the departures are large.
  • Multivariate Test—This test does not assume sphericity, but is also highly conservative.
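
As a rough sketch of the Greenhouse–Geisser correction factor (epsilon), the following Python code estimates it from the eigenvalues of the double-centered sample covariance matrix of the repeated measures and scales both degrees of freedom by it. The scores are the same made-up toy data used earlier, and this illustrates the standard formula rather than replacing the implementations in statistical packages.

```python
import numpy as np

# Toy repeated measures (rows = subjects, columns = k levels); values are illustrative.
scores = np.array([
    [5.0, 6.0, 7.0, 8.0],
    [4.0, 5.0, 7.0, 9.0],
    [6.0, 6.0, 8.0, 9.0],
])
s, k = scores.shape

# Sample covariance matrix of the k repeated measures, then double-center it.
S = np.cov(scores, rowvar=False)
J = np.eye(k) - np.ones((k, k)) / k
eigvals = np.linalg.eigvalsh(J @ S @ J)

# Greenhouse–Geisser epsilon lies between 1/(k-1) and 1; both df are multiplied by it.
epsilon = eigvals.sum() ** 2 / ((k - 1) * (eigvals ** 2).sum())
df_treatment = epsilon * (k - 1)
df_error = epsilon * (k - 1) * (s - 1)
print(f"epsilon = {epsilon:.3f}, corrected df = ({df_treatment:.2f}, {df_error:.2f})")
```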

Effect size

One of the most commonly reported effect size statistics for rANOVA is partial eta-squared (ηp²). It is also common to use the multivariate η² when the assumption of sphericity has been violated and the multivariate test statistic is reported. A third effect size statistic that is reported is the generalized η², which is comparable to ηp² in a one-way repeated measures ANOVA. It has been shown to be a better estimate of effect size with other within-subjects tests.[8][9]
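
For the one-way within-subjects case, partial eta-squared can be computed directly from the sums of squares; the sketch below uses hypothetical values of the kind produced by the partitioning example above.

```python
# Hypothetical sums of squares for a single within-subjects effect and its error term.
ss_treatment = 24.67
ss_error = 1.83

# Partial eta-squared: the effect's SS relative to the effect SS plus its error SS.
partial_eta_squared = ss_treatment / (ss_treatment + ss_error)
print(f"partial eta-squared = {partial_eta_squared:.3f}")
```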

Cautions

rANOVA is not always the best statistical analysis for repeated measures designs. The rANOVA is vulnerable to effects from missing values, imputation, non-equivalent time points between subjects, and violations of sphericity.[3] These issues can result in sampling bias and inflated rates of Type I error.[10] In such cases it may be better to consider use of a linear mixed model.[11]
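
A minimal sketch of such a linear mixed model in Python, assuming made-up long-format data with a few missing visits (the column names and the statsmodels random-intercept specification below are one simple choice among many), could look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up long-format data: 6 subjects, 4 time points, with two observations
# deleted to mimic dropout (all names and values are illustrative).
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "subject": np.repeat(np.arange(1, 7), 4),
    "time": np.tile(["t1", "t2", "t3", "t4"], 6),
})
data["happiness"] = (5 + data.groupby("subject").cumcount()   # upward trend over time
                       + rng.normal(0, 0.5, len(data)))       # noise
data = data.drop(index=[6, 17])                               # simulate missing visits

# Random-intercept mixed model: handles the missing rows without discarding subjects.
model = smf.mixedlm("happiness ~ C(time)", data, groups=data["subject"])
print(model.fit().summary())
```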

Notes

  1. ^ Kraska, Marie (2010), "Repeated Measures Design", Encyclopedia of Research Design, California, USA: SAGE Publications, Inc., doi:10.4135/9781412961288.n378, ISBN 978-1-4129-6127-1, S2CID 149337088
  2. ^ Barret, Julia R. (2013). "Particulate Matter and Cardiovascular Disease: Researchers Turn an Eye toward Microvascular Changes". Environmental Health Perspectives. 121 (9): a282. doi:10.1289/ehp.121-A282. PMC 3764084. PMID 24004855.
  3. ^ a b Gueorguieva; Krystal (2004). "Move Over ANOVA". Arch Gen Psychiatry. 61 (3): 310–7. doi:10.1001/archpsyc.61.3.310. PMID 14993119.
  4. ^ a b Howell, David C. (2010). Statistical methods for psychology (7th ed.). Belmont, CA: Thomson Wadsworth. ISBN 978-0-495-59784-1.
  5. ^ a b Green, Samuel B.; Salkind, Neil J. (2011). Using SPSS for Windows and Macintosh: Analyzing and Understanding Data (6th ed.). Boston: Prentice Hall. ISBN 978-0-205-02040-9.
  6. ^ Vasey; Thayer (1987). "The Continuing Problem of False Positives in Repeated Measures ANOVA in Psychophysiology: A Multivariate Solution". Psychophysiology. 24 (4): 479–486. doi:10.1111/j.1469-8986.1987.tb00324.x. PMID 3615759.
  7. ^ Park (1993). "A comparison of the generalized estimating equation approach with the maximum likelihood approach for repeated measurements". Stat Med. 12 (18): 1723–1732. doi:10.1002/sim.4780121807. PMID 8248664.
  8. ^ Bakeman (2005). "Recommended effect size statistics for repeated measures designs". Behavior Research Methods. 37 (3): 379–384. doi:10.3758/bf03192707. PMID 16405133.
  9. ^ Olejnik; Algina (2003). "Generalized eta and omega squared statistics: Measures of effect size for some common research designs". Psychological Methods. 8 (4): 434–447. doi:10.1037/1082-989x.8.4.434. PMID 14664681. S2CID 6931663.
  10. ^ Muller; Barton (1989). "Approximate Power for Repeated-Measures ANOVA lacking sphericity". Journal of the American Statistical Association. 84 (406): 549–555. doi:10.1080/01621459.1989.10478802.
  11. ^ Kreuger; Tian (2004). "A comparison of the general linear mixed model and repeated measures ANOVA using a dataset with multiple missing data points". Biological Research for Nursing. 6 (2): 151–157. doi:10.1177/1099800404267682. PMID 15388912. S2CID 23173349.

References

Design and analysis of experiments

  • Jones, Byron; Kenward, Michael G. (2003). Design and Analysis of Cross-Over Trials (Second ed.). London: Chapman and Hall.
  • Vonesh, Edward F. & Chinchilli, Vernon G. (1997). Linear and Nonlinear Models for the Analysis of Repeated Measurements. London: Chapman and Hall.

Exploration of longitudinal data

  • Davidian, Marie; David M. Giltinan (1995). Nonlinear Models for Repeated Measurement Data. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. ISBN 978-0-412-98341-2.
  • Fitzmaurice, Garrett; Davidian, Marie; Verbeke, Geert; Molenberghs, Geert, eds. (2008). Longitudinal Data Analysis. Boca Raton, Florida: Chapman and Hall/CRC. ISBN 978-1-58488-658-7.
  • Jones, Byron; Kenward, Michael G. (2003). Design and Analysis of Cross-Over Trials (Second ed.). London: Chapman and Hall.
  • Kim, Kevin & Timm, Neil (2007). "Restricted MGLM and growth curve model" (Chapter 7). Univariate and multivariate general linear models: Theory and applications with SAS (with 1 CD-ROM for Windows and UNIX). Statistics: Textbooks and Monographs (Second ed.). Boca Raton, Florida: Chapman & Hall/CRC. ISBN 978-1-58488-634-1.
  • Kollo, Tõnu & von Rosen, Dietrich (2005). "Multivariate linear models" (Chapter 4), especially "The Growth curve model and extensions" (Chapter 4.1). Advanced multivariate statistics with matrices. Mathematics and its applications. Vol. 579. New York: Springer. ISBN 978-1-4020-3418-3.
  • Kshirsagar, Anant M. & Smith, William Boyce (1995). Growth curves. Statistics: Textbooks and Monographs. Vol. 145. New York: Marcel Dekker, Inc. ISBN 0-8247-9341-2.
  • Pan, Jian-Xin & Fang, Kai-Tai (2002). Growth curve models and statistical diagnostics. Springer Series in Statistics. New York: Springer-Verlag. ISBN 0-387-95053-2.
  • Seber, G. A. F. & Wild, C. J. (1989). "Growth models" (Chapter 7). Nonlinear regression. Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics. New York: John Wiley & Sons, Inc. pp. 325–367. ISBN 0-471-61760-1.
  • Timm, Neil H. (2002). "The general MANOVA model (GMANOVA)" (Chapter 3.6.d). Applied multivariate analysis. Springer Texts in Statistics. New York: Springer-Verlag. ISBN 0-387-95347-7.
  • Vonesh, Edward F. & Chinchilli, Vernon G. (1997). Linear and Nonlinear Models for the Analysis of Repeated Measurements. London: Chapman and Hall. (Comprehensive treatment of theory and practice)
  • Conaway, M. (1999, October 11). Repeated Measures Design. Retrieved February 18, 2008, from http://biostat.mc.vanderbilt.edu/twiki/pub/Main/ClinStat/repmeas.PDF
  • Minke, A. (1997, January). Conducting Repeated Measures Analyses: Experimental Design Considerations. Retrieved February 18, 2008, from Ericae.net: http://ericae.net/ft/tamu/Rm.htm
  • Shaughnessy, J. J. (2006). Research Methods in Psychology. New York: McGraw-Hill.