
Analysis of variance

In statistics, analysis of variance (ANOVA) is a collection of statistical models, and their
associated procedures, in which the observed variance is partitioned into components due to
different sources of variation. In its simplest form, ANOVA provides a statistical test of whether
or not the means of several groups are all equal, and therefore generalizes Student's two-sample
t-test to more than two groups. ANOVAs are useful because performing multiple two-sample
t-tests instead would greatly inflate the chance of committing a Type I error. For this reason,
ANOVAs are preferred when comparing three or more means.

Overview

There are three conceptual classes of such models:

1. Fixed-effects models assume that the data came from normal populations which may
differ only in their means. (Model 1)
2. Random-effects models assume that the data describe a hierarchy of different populations
whose differences are constrained by the hierarchy. (Model 2)
3. Mixed-effects models describe situations where both fixed and random effects are
present. (Model 3)

In practice, there are several types of ANOVA, depending on the number of treatments and the
way they are applied to the subjects in the experiment:

 One-way ANOVA is used to test for differences among two or more independent groups.
Typically, however, the one-way ANOVA is used to test for differences among at least
three groups, since the two-group case can be covered by a t-test (Gosset, 1908). When
there are only two means to compare, the t-test and the F-test are equivalent; the relation
between ANOVA and t is given by F = t².
 Factorial ANOVA is used when the experimenter wants to study the effects of two or
more treatment variables. The most commonly used type of factorial ANOVA is the 2×2
(read "two by two") design, where there are two independent variables and each variable
has two levels or distinct values. However, such use of ANOVA for the analysis of 2ᵏ
factorial designs and fractional factorial designs is "confusing and makes little sense";
instead it is suggested to refer the value of the effect divided by its standard error to a t-
table.[1] Factorial ANOVA can also be multi-level, such as 3³, or higher order, such as
2×2×2, but analyses with higher numbers of factors are rarely done by hand because
the calculations are lengthy. However, since the introduction of data analytic software,
the use of higher-order designs and analyses has become quite common.
 Repeated measures ANOVA is used when the same subjects are used for each treatment
(e.g., in a longitudinal study). Note that such within-subjects designs can be subject to
carry-over effects.
 Mixed-design ANOVA. When one wishes to test two or more independent groups while
subjecting the subjects to repeated measures, one may perform a factorial mixed-design
ANOVA, in which one factor is a between-subjects variable and the other is a within-
subjects variable. This is a type of mixed-effects model.
 Multivariate analysis of variance (MANOVA) is used when there is more than one
dependent variable.
 PERMANOVA, which tests the simultaneous response of one or more variables to one
or more factors in an ANOVA experimental design on the basis of any distance measure,
using permutation methods.
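The F = t² relation for two groups, noted above, can be checked numerically. The sketch below uses SciPy with made-up data; the group values are purely illustrative:

```python
# Numerical check of the F = t^2 relation for two groups (illustrative data).
from scipy import stats

group_a = [4.1, 3.8, 5.2, 4.9, 4.4]
group_b = [5.6, 6.1, 5.0, 6.4, 5.8]

t_stat, t_p = stats.ttest_ind(group_a, group_b)  # pooled-variance two-sample t-test
f_stat, f_p = stats.f_oneway(group_a, group_b)   # one-way ANOVA on the same two groups

print(f"t^2 = {t_stat**2:.6f}, F = {f_stat:.6f}")  # the two statistics coincide
```

With two groups the two tests also produce identical p-values, which is why the t-test suffices in that case.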

Models

Fixed-effects models (Model 1)

Main article: Fixed effects model

The fixed-effects model of analysis of variance applies to situations in which the experimenter
applies several treatments to the subjects of the experiment to see if the response variable values
change. This allows the experimenter to estimate the ranges of response variable values that the
treatment would generate in the population as a whole.

Random-effects models (Model 2)

Main article: Random effects model

Random effects models are used when the treatments are not fixed. This occurs when the various
treatments (also known as factor levels) are sampled from a larger population. Because the
treatments themselves are random variables, some assumptions and the method of contrasting the
treatments differ from ANOVA model 1.

Most random-effects or mixed-effects models are not concerned with making inferences
about the particular sampled factors. For example, consider a large manufacturing plant in
which many machines produce the same product. The statistician studying this plant would have
very little interest in comparing the particular machines sampled to each other. Rather, inferences
that can be made for all machines are of interest, such as their variability and the mean.
However, if one is interested in the realized value of the random effect, best linear unbiased
prediction can be used to obtain a "prediction" for that value.

Assumptions of ANOVA

There are several approaches to the analysis of variance.

A model often presented in textbooks

Many textbooks present the analysis of variance in terms of a linear model, which makes the
following assumptions:
 Independence of cases – this is an assumption of the model that simplifies the statistical
analysis.
 Normality – the distributions of the residuals are normal.
 Equality (or "homogeneity") of variances, called homoscedasticity – the variance of data
in groups should be the same. Model-based approaches usually assume that the variance
is constant. The constant-variance property also appears in the randomization (design-
based) analysis of randomized experiments, where it is a necessary consequence of the
randomized design and the assumption of unit-treatment additivity (Hinkelmann and
Kempthorne): if the responses of a randomized balanced experiment fail to have constant
variance, then the assumption of unit-treatment additivity is necessarily violated. It has
been shown, however, that the F-test is robust to violations of this assumption.[2]

Levene's test for homogeneity of variances is typically used to examine the plausibility of
homoscedasticity.

The Kolmogorov–Smirnov or the Shapiro–Wilk test may be used to examine normality.
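Both assumption checks are available in SciPy. The sketch below is illustrative, with made-up group data; Levene's test is applied to the raw groups and Shapiro–Wilk to the residuals (deviations from each group mean):

```python
# Sketch: checking homoscedasticity and normality with SciPy (illustrative data).
from scipy import stats

g1 = [23.1, 24.5, 22.8, 25.0, 23.7]
g2 = [26.2, 25.8, 27.1, 26.5, 25.9]
g3 = [24.0, 23.5, 24.8, 24.2, 23.9]

# Levene's test: H0 is that all groups have equal variances.
lev_stat, lev_p = stats.levene(g1, g2, g3)
print(f"Levene p = {lev_p:.3f}")  # large p => no evidence against homoscedasticity

# Shapiro-Wilk on the residuals (deviations from each group's mean).
residuals = [x - sum(g) / len(g) for g in (g1, g2, g3) for x in g]
sw_stat, sw_p = stats.shapiro(residuals)
print(f"Shapiro-Wilk p = {sw_p:.3f}")  # large p => normality is plausible
```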

When used in the analysis of variance to test the hypothesis that all treatments have exactly the
same effect, the F-test is robust (Ferguson & Takane, 2005, pp. 261–2).[3] The Kruskal–Wallis
test is a nonparametric alternative which does not rely on an assumption of normality. And the
Friedman test is the nonparametric alternative for a one way repeated measures ANOVA.
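Both nonparametric alternatives are implemented in SciPy. In this sketch the data are made up; the same three samples are treated as independent groups for Kruskal–Wallis and as repeated measures on the same subjects for Friedman:

```python
# Nonparametric alternatives: Kruskal-Wallis for independent groups,
# Friedman for one-way repeated measures (data are illustrative).
from scipy import stats

g1 = [1.2, 2.3, 1.8, 2.9, 2.1]
g2 = [3.4, 2.8, 3.9, 3.1, 3.6]
g3 = [2.0, 2.5, 1.9, 2.7, 2.2]

h_stat, kw_p = stats.kruskal(g1, g2, g3)              # no normality assumption
chi_stat, fr_p = stats.friedmanchisquare(g1, g2, g3)  # same subjects across conditions

print(f"Kruskal-Wallis p = {kw_p:.3f}, Friedman p = {fr_p:.3f}")
```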

The separate assumptions of the textbook model imply that the errors are independently,
identically, and normally distributed for fixed-effects models; that is, the errors are
independent and

ε ~ N(0, σ²).

Randomization-based analysis

See also: Random assignment and Randomization test

In a randomized controlled experiment, the treatments are randomly assigned to experimental
units, following the experimental protocol. This randomization is objective and declared before
the experiment is carried out. The objective random-assignment is used to test the significance of
the null hypothesis, following the ideas of C. S. Peirce and Ronald A. Fisher. This design-based
analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental
Station and by Oscar Kempthorne at Iowa State University.[4] Kempthorne and his students make
an assumption of unit treatment additivity, which is discussed in the books of Kempthorne and
David R. Cox.

Unit-treatment additivity

In its simplest form, the assumption of unit-treatment additivity states that the observed response
y_(i,j) from experimental unit i when receiving treatment j can be written as the sum of the unit's
response y_i and the treatment effect t_j, that is,
y_(i,j) = y_i + t_j.[5]

The assumption of unit-treatment additivity implies that, for every treatment j, the jth treatment
has exactly the same effect t_j on every experimental unit.

The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox
and Kempthorne. However, many consequences of treatment-unit additivity can be falsified. For
a randomized experiment, the assumption of unit-treatment additivity implies that the variance is
constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment
additivity is that the variance is constant.

The property of unit-treatment additivity is not invariant under a "change of scale", so
statisticians often use transformations to achieve unit-treatment additivity. If the response
variable is expected to follow a parametric family of probability distributions, then the
statistician may specify (in the protocol for the experiment or observational study) that the
responses be transformed to stabilize the variance.[6] Also, a statistician may specify that
logarithmic transforms be applied to responses that are believed to follow a multiplicative
model.[7]
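The effect of a logarithmic transform under a multiplicative model can be illustrated with simulated data. In this sketch (all numbers are made up), each group's responses are a group mean times lognormal noise, so the spread grows with the mean on the raw scale but is roughly constant on the log scale:

```python
# Sketch: a log transform stabilizing variance under a multiplicative model.
# Simulated data: response = (group mean) * lognormal noise.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.lognormal(mean=0.0, sigma=0.3, size=(3, 50))
means = np.array([1.0, 10.0, 100.0]).reshape(3, 1)
y = means * noise                       # multiplicative model

raw_sd = y.std(axis=1)                  # grows roughly in proportion to the mean
log_sd = np.log(y).std(axis=1)          # roughly constant across groups

print("raw-scale SDs:", np.round(raw_sd, 2))
print("log-scale SDs:", np.round(log_sd, 3))
```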

The assumption of unit-treatment additivity was enunciated in experimental design by
Kempthorne and Cox. Kempthorne's use of unit-treatment additivity and randomization is similar
to the design-based inference that is standard in finite-population survey sampling.

Derived linear model

Kempthorne uses the randomization-distribution and the assumption of unit treatment additivity
to produce a derived linear model, very similar to the textbook model discussed previously.

The test statistics of this derived linear model are closely approximated by the test statistics of an
appropriate normal linear model, according to approximation theorems and simulation studies by
Kempthorne and his students (Hinkelmann and Kempthorne). However, there are differences.
For example, the randomization-based analysis results in a small but (strictly) negative
correlation between the observations (Hinkelmann and Kempthorne, volume one, chapter 7;
Bailey chapter 1.14). In the randomization-based analysis, there is no assumption of a normal
distribution and certainly no assumption of independence. On the contrary, the observations are
dependent!

The randomization-based analysis has the disadvantage that its exposition involves tedious
algebra and extensive time. Since the randomization-based analysis is complicated and is closely
approximated by the approach using a normal linear model, most teachers emphasize the normal
linear model approach. Few statisticians object to model-based analysis of balanced randomized
experiments.

Statistical models for observational data


However, when applied to data from non-randomized experiments or observational studies,
model-based analysis lacks the warrant of randomization. For observational data, the derivation
of confidence intervals must use subjective models, as emphasized by Ronald A. Fisher and his
followers. In practice, estimates of treatment effects from observational studies are often
inconsistent (Freedman). In practice, "statistical models" and observational data are useful
for suggesting hypotheses that should be treated very cautiously by the public (Freedman).

ANOVA on ranks

See also: Kruskal-Wallis one-way analysis of variance

When the data do not meet the assumptions of normality, the suggestion has arisen to replace
each original data value by its rank (from 1 for the smallest to N for the largest), then run a
standard ANOVA calculation on the rank-transformed data. Conover and Iman (1981) provided
a review of the four main types of rank transformations. Commercial statistical software
packages (e.g., SAS, 1985, 1987, 2008) followed with recommendations to data analysts to run
their data sets through a ranking procedure (e.g., PROC RANK) prior to conducting standard
analyses using parametric procedures.
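The procedure described above can be sketched directly with SciPy: pool all observations, replace each value by its rank over the pooled data, then run an ordinary one-way ANOVA on the ranks. The data below are made up for illustration:

```python
# Sketch of the rank-transform procedure: rank the pooled data (1..N),
# then run a standard one-way ANOVA on the ranks (illustrative data).
import numpy as np
from scipy import stats

g1 = [12.0, 15.5, 11.2, 13.8]
g2 = [22.1, 19.4, 25.0, 21.7]
g3 = [14.9, 16.2, 13.1, 17.5]

pooled = np.concatenate([g1, g2, g3])
ranks = stats.rankdata(pooled)            # ranks 1..N over the pooled data
r1, r2, r3 = ranks[:4], ranks[4:8], ranks[8:]

f_stat, p_val = stats.f_oneway(r1, r2, r3)  # standard ANOVA on the ranks
print(f"F on ranks = {f_stat:.3f}, p = {p_val:.4f}")
```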

This rank-based procedure has been recommended as being robust to non-normal errors, resistant
to outliers, and highly efficient for many distributions. It may result in a known statistic (e.g.,
the Wilcoxon rank-sum / Mann-Whitney U in the two-group case) and can provide the desired
robustness and increased statistical power. For example, Monte Carlo studies have shown that the
rank transformation in the two-independent-samples t-test layout can be successfully extended to
the one-way independent-samples ANOVA, as well as to the two-independent-samples multivariate
Hotelling's T² layouts (Nanna, 2002).

Conducting factorial ANOVA on the ranks of original scores has also been suggested (Conover
& Iman, 1976, Iman, 1974, and Iman & Conover, 1976). However, Monte Carlo studies by
Sawilowsky (1985a; 1989 et al.; 1990) and Blair, Sawilowsky, and Higgins (1987), and
subsequent asymptotic studies (e.g. Thompson & Ammann, 1989; "there exist values for the
main effects such that, under the null hypothesis of no interaction, the expected value of the rank
transform test statistic goes to infinity as the sample size increases," Thompson, 1991, p. 697),
found that the rank transformation is inappropriate for testing interaction effects in 4×3 and
2×2×2 factorial designs. As the number of effects (i.e., main, interaction) that are non-null
increases, and as the magnitude of the non-null effects increases, Type I error increases,
resulting in a complete failure of the statistic, with as high as a 100% probability of making a
false-positive decision. Similarly, Blair and Higgins (1985) found that the rank transformation
increasingly fails in the two-dependent-samples layout as the correlation between pretest and
posttest scores increases. Headrick (1997) found that the Type I error rate problem was exacerbated in the
context of Analysis of Covariance, particularly as the correlation between the covariate and the
dependent variable increased. For a review of the properties of the rank transformation in
designed experiments see Sawilowsky (2000).

A variant of the rank transformation is 'quantile normalization', in which a further transformation
is applied to the ranks such that the resulting values have some defined distribution (often a
normal distribution with a specified mean and variance). Further analyses of quantile-normalized data
may then assume that distribution to compute significance values. However, two specific types
of secondary transformations, the random normal scores and expected normal scores
transformation, have been shown to greatly inflate Type I errors and severely reduce statistical
power (Sawilowsky, 1985a, 1985b).

According to Hettmansperger and McKean,[8] "Sawilowsky (1990)[9] provides an excellent review
of nonparametric approaches to testing for interaction" in ANOVA.

Follow-up tests

A statistically significant effect in ANOVA is often followed up with one or more
follow-up tests. This can be done in order to assess which groups differ from which other
groups or to test various other focused hypotheses. Follow-up tests are often distinguished in
terms of whether they are planned (a priori) or post hoc: planned tests are determined before
looking at the data, and post hoc tests are performed after looking at the data. Post hoc tests such
as Tukey's range test most commonly compare every group mean with every other group mean
and typically incorporate some method of controlling for Type I errors. Comparisons, which are
most commonly planned, can be either simple or compound. Simple comparisons compare one
group mean with one other group mean. Compound comparisons typically compare two sets of
group means, where one set has two or more groups (e.g., compare the average of the group means of
groups A, B, and C with the mean of group D). Comparisons can also look at tests of trend, such as linear and
quadratic relationships, when the independent variable involves ordered levels.

Power analysis

Power analysis is often applied in the context of ANOVA in order to assess the probability of
successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in
the population, sample size and alpha level. Power analysis can assist in study design by
determining what sample size would be required in order to have a reasonable chance of
rejecting the null hypothesis when the alternative hypothesis is true.

Examples

In a first experiment, Group A is given vodka, Group B is given gin, and Group C is given a
placebo. All groups are then tested with a memory task. A one-way ANOVA can be used to
assess the effect of the various treatments (that is, the vodka, gin, and placebo).
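For this first experiment, the one-way ANOVA F statistic can be computed by hand from the sums-of-squares decomposition and checked against SciPy. The memory scores below are made up for illustration:

```python
# Sketch of the one-way ANOVA decomposition for the first experiment,
# with made-up memory scores for the vodka, gin, and placebo groups.
import numpy as np
from scipy import stats

groups = {
    "vodka":   np.array([5.0, 4.0, 6.0, 5.5, 4.5]),
    "gin":     np.array([4.5, 5.5, 4.0, 5.0, 6.0]),
    "placebo": np.array([7.0, 8.0, 6.5, 7.5, 8.5]),
}

data = np.concatenate(list(groups.values()))
grand_mean = data.mean()
k, n_total = len(groups), len(data)

# Between-groups and within-groups sums of squares.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())

f_by_hand = (ss_between / (k - 1)) / (ss_within / (n_total - k))
f_scipy, p = stats.f_oneway(*groups.values())

print(f"F by hand = {f_by_hand:.4f}, F from SciPy = {f_scipy:.4f}")
```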

In a second experiment, Group A is given vodka and tested on a memory task. The same group is
allowed a rest period of five days, and then the experiment is repeated with gin. The procedure is
repeated again using a placebo. A one-way ANOVA with repeated measures can be used to assess
the effect of the vodka versus that of the placebo.

In a third experiment testing the effects of expectations, subjects are randomly assigned to four
groups:
1. expect vodka—receive vodka
2. expect vodka—receive placebo
3. expect placebo—receive vodka
4. expect placebo—receive placebo (the last group is used as the control group)

Each group is then tested on a memory task. The advantage of this design is that multiple
variables can be tested at the same time instead of running two different experiments. Also, the
experiment can determine whether one variable affects the other (known as an interaction
effect). A factorial ANOVA (2×2) can be used to assess the effect of expecting vodka or the
placebo, the effect of actually receiving either, and their interaction.

History

The analysis of variance was used informally by researchers in the 1800s using least squares. In
physics and psychology, researchers included a term for the operator-effect, the influence of a
particular person on measurements, according to Stephen Stigler's histories.

In its modern form, the analysis of variance was one of the many important statistical
innovations of Ronald A. Fisher. Fisher proposed a formal analysis of variance in his 1918 paper
The Correlation Between Relatives on the Supposition of Mendelian Inheritance[11]. His first
application of the analysis of variance was published in 1921[12]. Analysis of variance became
widely known after being included in Fisher's 1925 book Statistical Methods for Research
Workers.
